On Feb 5, 2008 3:09 AM, tho_mica_l <micathom / gmail.com> wrote:

> Also I'm wondering if this isn't an artefact
> of
> the benchmark. The full run looks like this:
>
>      user     system      total        real
> ...
> 8 24142  0.541000   0.000000   0.541000 (  0.601000)
> 9 25988  0.621000   0.000000   0.621000 (  0.641000)
> 10 588993 246.555000  93.324000 339.879000 (345.657000)
> 1703 chars/second
>

My baseline assumption was that runtime is roughly linear with respect
to the data size.  The case above breaks that assumption (I think I
noticed this too at some point).  Going from a depth of 9 to 10 increased
the length by ~20X, but the runtime went up by ~400X.  There is obviously an
O(n*n) component in there (20*20 = 400).  Sounds like there is a Ruby 1.9 problem.
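
A quick back-of-the-envelope check, using nothing but the depth 9 and
depth 10 rows quoted above (length and total seconds):

    # Estimate the exponent k in time ~ length**k from two samples.
    n1, t1 = 25988,  0.621      # depth 9 row
    n2, t2 = 588993, 339.879    # depth 10 row
    k = Math.log(t2 / t1) / Math.log(n2.to_f / n1)
    puts "scaling exponent ~ %.2f" % k   # comes out close to 2, i.e. quadratic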

In the benchmark, you could move the print of the performance to inside the
loop, right before the break.  If there is a consistent downward trend in
chars/second, you may have an O(n*n) solution and chars/second makes no
sense (for arbitrary data size).  Otherwise, maybe we should be looking at
the best performance of the two longest data sizes so that there is no
penalty for a solution that gets to a larger but possibly more difficult
dataset.  Running the test multiple times (maybe with 4.times{} around the
whole benchmark, including creating the generator) would also be good.
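
Roughly what I have in mind (only a sketch; DataGenerator, parse,
MAX_DEPTH and MAX_TIME stand in for whatever the real harness uses):

    require 'benchmark'

    4.times do
      gen = DataGenerator.new          # recreate the generator each pass
      1.upto(MAX_DEPTH) do |depth|
        json = gen.generate(depth)
        time = Benchmark.realtime { parse(json) }
        # report per-depth throughput so a downward trend is visible
        puts "%2d %8d %10.3f %10.0f chars/sec" %
             [depth, json.size, time, json.size / time]
        break if time > MAX_TIME       # same cutoff idea as the current harness
      end
    end

That way each solution gets four full passes, and a steady drop in
chars/second as the data grows would stand out right away.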

> What would the size of an average json snippet an ajax app has to deal
> with be? I'm not in the webapp development business but from my
> understanding this would be rather small, wouldn't it?


Maybe, but then making a fast parser wouldn't be any fun :)
