On Jan 24, 2006, at 5:31 PM, Mauricio Fernandez wrote:

> I also generalized Daniel's memoize.rb to support class-level
> memoization some time ago[1]:
>
>  http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/173587

I should have mentioned that.  Daniel did point me to this message  
during our conversation.

>> class Fibonacci
>>   extend Memoizable
>>
>>   def fib( num )
>>     return num if num < 2
>>     fib(num - 1) + fib(num - 2)
>>   end
>>   memoize :fib, FileCache.new("fib_cache.txt")
>> end
>
> I kept memoize.rb's interface, but I think I prefer this approach;  
> maybe
> special-casing for strings could make sense though...

Honestly, I wasn't planning to provide any caches at all, just let  
users make their own.  Now that you mention it though, pre-loading  
some is a neat idea.  People could have access to Memoize::WeakCache  
and Memoize::FileCache for example.

I haven't seen a file cache I like yet though.  :)

You use a file if you want persistence or to save memory.  The file  
cache used by memoize.rb gets you persistence, but really only when a  
single instance of your program is running at a time.  It doesn't  
save memory at all.

If we want to go all the way, we have to start reading from the file  
cache too.  Then we need to make checking for an entry in the file  
cache, and storing it if it's not there, an atomic operation, which I  
guess we could do with flock().  Of course, then another process using  
the file could halt for a good period while we calculate what to add.
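
Something like this is what I'm picturing (completely untested, the  
class name is made up, and it assumes memoize() would hand the whole  
lookup off to the cache with a block, which isn't how the current  
interface works):

  class FlockFileCache
    def initialize(path)
      @path = path
    end

    # Checks for a key and stores the block's result if it's missing,
    # all under an exclusive flock(), so the check-and-store is atomic.
    # The drawback is the one above:  other processes block on the
    # lock while we run the calculation.
    def fetch(key)
      File.open(@path, File::RDWR | File::CREAT) do |file|
        file.flock(File::LOCK_EX)
        data  = file.read
        cache = data.empty? ? Hash.new : Marshal.load(data)
        unless cache.has_key?(key)
          cache[key] = yield  # the expensive calculation
          file.rewind
          file.write(Marshal.dump(cache))
          file.truncate(file.pos)
        end
        cache[key]
      end
    end
  end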

I guess there's no harm in checking for a value and finding it not  
there, then starting our calculation.  If another process added it  
before we were done, we could just save over it.  (The whole point of  
memoization is that we should get the same answer.)  For that though,  
we really need a database format of some sort.  We could probably use  
SQLite or KirbyBase here.
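
Sketching that version (again untested, and still just one Marshal'd  
Hash in a flat file, where a real solution would want SQLite or  
KirbyBase for per-record access):

  class OverwritingFileCache
    def initialize(path)
      @path = path
    end

    def fetch(key)
      cached = locked { |cache, _| cache[key] }
      return cached unless cached.nil?
      value = yield  # run the calculation with no lock held
      locked do |cache, file|
        cache[key] = value  # harmless if another process beat us to it
        file.rewind
        file.write(Marshal.dump(cache))
        file.truncate(file.pos)
      end
      value
    end

    private

    # Opens the cache file under an exclusive flock() just long enough
    # to read it (and possibly rewrite it).
    def locked
      File.open(@path, File::RDWR | File::CREAT) do |file|
        file.flock(File::LOCK_EX)
        data = file.read
        yield(data.empty? ? Hash.new : Marshal.load(data), file)
      end
    end
  end

The lock is only held for the quick reads and writes, never for the  
calculation itself, at the cost of occasionally computing a value  
twice.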

My own file example does read from the cache, but is incredibly  
trivial.  It also can't deal with multiple processes using the same  
cache.  My point in showing it was that it's not too difficult to  
roll your own, knowing what tradeoffs you can accept.

What do you (and others) think?  Should we provide default cache  
objects?  If so, how should the file cache work?

>> You can find an example using weak references and the actual library
>> code in the "Memoization" section of the following article from my  
>> blog:
>>
>> http://blog.grayproductions.net/articles/2006/01/20/caching-and-memoization
>>
>> The point of posting all this here is to give people a chance to
>> express concerns over my implementation.  Daniel was avoiding going
>> down this road because of issues raised by this community.  Raise
>> away.  ;)
>
> I'm not sure about the
>   ([Class, Module].include?(self.class) ? self : self.class)
> part; that way, you cannot memoize singleton methods from Module/Class
> objects.

I agree.  That's my least favorite part.

> In my implementation, I distinguished between
> Module#instance_memoize and Object#memoize (after including Memoize).

If we're going to go ahead and add a method to Module, why don't we  
just eliminate the need for include/extend altogether?  If I put a  
method in Module and a method in Object, they can have identical  
names and interfaces, just work differently under the hood.

Is there any reason not to do this?
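
Roughly what I have in mind, as an untested sketch (hand-waving the  
cache details and ignoring block arguments):

  class Module
    def memoize(name, cache = Hash.new)
      original = "__unmemoized_#{name}__"
      alias_method original, name
      define_method(name) do |*args|
        unless cache.has_key?(args)
          cache[args] = send(original, *args)
        end
        cache[args]
      end
    end
  end

  class Object
    def memoize(name, cache = Hash.new)
      # Memoize just this object's copy of the method, by way of its
      # singleton class.
      (class << self; self; end).memoize(name, cache)
    end
  end

With that, memoize(:fib) inside a class body memoizes the method for  
every instance (they share the one cache), while calling memoize()  
on a single object only touches that object.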

> Also,
>     original =   "_#{name}"
> would need a longer prefix IMO.

How does "__unmemoized_#{name}__" grab you?

> Finally, not that it really matters, but the WeakCache is fairly
> inefficient (one lambda per key); see that other thread (WeakRef hash)
> or
>   http://eigenclass.org/hiki.rb?weakhash+and+weakref
> for other implementations.

My goal was to keep the examples very simple, since it's really about  
memoize().  However, I loved SimpleWeakCache from your blog and am  
now using that.  Thanks!

James Edward Gray II