-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Sat, 4 Aug 2001 14:28:29 +0900, Ned Konz <ned / bike-nomad.com> wrote:
>On Friday 03 August 2001 10:08 pm, you wrote:
>> On Sat, 04 Aug 2001 03:40:12 GMT, Harry Ohlsen <harryo / zip.com.au> wrote:
>> >> For those wondering why I did this, here is a mod_exp routine
>> >> for calculating z**n % q for very large numbers such as are used
>> >> in public key encryption:
>> >
>> >Sorry, I can't answer your question, but thanks for the code.  I was
>> >literally just about to write the same thing!
>>
>> Oh, okay. I've noticed an interesting thing about [] though -- it is
>> almost as slow as doing a " % 2" on the number!
>
>Wouldn't it be faster to just do '| 1'  if all you're doing is testing
>evenness?

You mean "& 1", right? As in:
if (a & 1) == 0 then print 'even' else print 'odd' end

Unfortunately, on 1024-bit bigints, that did not make a significant
difference (approximately a 1-2% improvement, which may have been
lost in the noise). Between Ruby promoting the 1 to a 1024-bit bignum
and the comparison itself, we still end up virtually the same.

But hey, don't believe me, try it yourself. Try:

10000.times {
  if (5213451352345234523452345234523452345234523612345123412341234123412341234 & 1) == 0
    a = true
  else
    a = false
  end
}

vs.:

10000.times {
  if 5213451352345234523452345234523452345234523612345123412341234123412341234[0] == 0
    a = true
  else
    a = false
  end
}

Use the Unix 'time' command or the profiler, either way.
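If you'd rather not eyeball the 'time' output, Ruby's standard Benchmark
library can run the same comparison in one script (a sketch; the number is
the same one from the examples above):

```ruby
require 'benchmark'

# The 1024-bit-ish test number from the examples above.
n = 5213451352345234523452345234523452345234523612345123412341234123412341234

Benchmark.bm(8) do |bm|
  # Parity via bitwise AND: Ruby must build a bignum-vs-fixnum AND result.
  bm.report('& 1') { 10000.times { a = (n & 1) == 0 } }
  # Parity via bit reference: just reads the lowest bit of the representation.
  bm.report('[0]') { 10000.times { a = n[0] == 0 } }
end
```

Both loops compute the same boolean, so any timing gap is pure overhead
from the operation itself.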

The basic problem with bigints appears to be all the promotions and
conversions needed to do the operation or comparison. By going down to
the low-level representation, we just need to look at the very lowest
bit, rather than needing to do an actual comparison.
Believe me, doing anything with 1024-bit numbers takes time unless
you're *very* careful. Ruby's bigint code is very efficient, but I
have run into this very same problem writing bigint code in "C" before
(i.e., code that theoretically should have been faster actually
running slower because of the overhead of slinging 1024-bit numbers
around through a procedure call interface, even though we're just
passing a reference and not the whole bloody number).
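For context, the mod_exp in question computes z**n % q by walking the
exponent's bits; a minimal square-and-multiply sketch (my reconstruction,
not the routine originally posted) looks like:

```ruby
# Square-and-multiply modular exponentiation: computes z**n % q without
# ever materializing the full z**n. My reconstruction, not the original
# routine from this thread.
def mod_exp(z, n, q)
  result = 1
  base = z % q
  while n > 0
    result = (result * base) % q if n[0] == 1  # low bit of the exponent set?
    base = (base * base) % q                   # square for the next bit
    n >>= 1
  end
  result
end
```

Note the n[0] bit test in the inner loop: that's exactly the operation
whose speed we've been arguing about, and it runs once per exponent bit.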

Bigint code is a hairy business! I'm going to keep fiddling and tuning
some more to see what I can do to make this faster on the Ruby 'virtual
machine'; the speed of [], &, or == is interesting, but it isn't the
real problem here. Even negating a number (e.g. the '-v' in the binary
gcd algorithm) takes a significant amount of time when you're dealing
with 1024-bit numbers, 0.05ms per call in fact, and there were over
700 of them in each call to gcdb. Twiddling the algorithm to reduce
the number of operations may yield a speed improvement even if the
individual operations themselves are more expensive!
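For reference, a binary GCD along the lines of the gcdb mentioned above
(this sketch is mine, not the thread's code) shows where all those parity
tests and subtractions come from:

```ruby
# Binary GCD (Stein's algorithm), roughly what gcdb refers to above.
# It trades division for shifts and subtraction, which is why the
# cost of parity tests and sign flips dominates on 1024-bit numbers.
def gcdb(u, v)
  return v if u == 0
  return u if v == 0
  shift = 0
  while (u | v)[0] == 0  # both even: factor out a common power of two
    u >>= 1
    v >>= 1
    shift += 1
  end
  u >>= 1 while u[0] == 0  # strip remaining factors of two from u
  while v != 0
    v >>= 1 while v[0] == 0  # strip factors of two from v
    u, v = v, u if u > v     # keep u <= v so the subtraction stays positive
    v -= u                   # v - u is even, so the shift loop fires again
  end
  u << shift
end
```

This version subtracts the smaller value from the larger instead of
tracking signed differences; a variant that negates intermediate values
would run straight into the '-v' cost mentioned above.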