I was just thinking last night of working on a TCP/IP server which would 
create, restore, and save Ruby objects to a Ruby DBI-compatible database, 
and optionally to a set of Ruby hashes that would temporarily cache 
objects where that behavior is desirable.

What I was thinking of was a module that could be mixed in, providing 
hooks for create, restore, and save, each of which would call a remote 
TCP/IP server that actually handles these operations.

The server would run a socket listener which would accept connections 
and then hand each one off to a separate stateful, reliable thread, 
which would handle the rest of the transaction.
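
A minimal sketch of that listener, one thread per accepted connection 
(handle_transaction is a placeholder for the real dispatch logic):

  require 'socket'

  server = TCPServer.new(4242)        # port number is arbitrary here
  loop do
    Thread.new(server.accept) do |conn|
      begin
        # read the action, content-length, and payload, then
        # dispatch to the create/restore/save handlers
        handle_transaction(conn)
      ensure
        conn.close
      end
    end
  end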

The mapping between the RDBMS and the object layer would be expressed in 
an XML file, and would allow for two levels of object lookup.  The first 
would be direct queries on the database, the default being equality 
against an OID generated by a global sequence (PostgreSQL) or 
equivalent, or equality against a field in the table corresponding to 
that object.  The second would be lookups on the return values of 
attr_readers on the objects, probably via some sort of objects.collect 
type of statement.
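
The mapping file might look something like this; the element and 
attribute names are invented purely for illustration:

  <mapping>
    <class name="Foo" table="foos">
      <!-- the oid column is implicit on every table -->
      <field attr="name"  column="name"  type="varchar"/>
      <field attr="count" column="count" type="integer"/>
    </class>
  </mapping>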

For create, restore, and save, a required boolean flag would indicate 
whether or not the cache may be consulted.  The behavior of each action 
would differ depending on the setting of this flag.
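
Assuming the mixin sketched above, the client-side calls would carry 
that flag explicitly:

  foo = Foo.ops_restore(42, true)   # the cache may satisfy this lookup
  foo.ops_save(false)               # bypass the cache, write through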

The XML mapping file would be loaded when the server starts.  At the 
same time the server would do a 'require' on each business object in 
question.  The real win here is for web apps, which must 'require' these 
business objects at run time; even when mod_ruby's persistence caches 
the requires, each new Apache process still has to do them all over 
again.
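
Startup might look roughly like this, using REXML for the parsing and 
assuming a simple class-name-to-file-name convention:

  require 'rexml/document'

  doc = REXML::Document.new(File.read('mapping.xml'))
  doc.elements.each('mapping/class') do |klass|
    # one require per mapped business object, paid once at server
    # startup instead of once per Apache process
    require klass.attributes['name'].downcase   # class Foo -> foo.rb
  end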

Upon object creation, a factory would set the @mapping hash on the 
object, which the object would then carry with it.  A save or restore 
would consult this mapping.  This is another big win, because objects 
creating these mapped objects would not need to do their own 'require'.
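
The factory piece would be small; something like:

  class ObjectFactory
    def initialize(mappings)   # class name => mapping hash, from the XML
      @mappings = mappings
    end

    def create(class_name, *args)
      obj = Object.const_get(class_name).new(*args)
      # the object carries its own mapping from here on
      obj.instance_variable_set(:@mapping, @mappings[class_name])
      obj
    end
  end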

Restore would accept either a scalar, which would be interpreted as the 
special OID field (included on all tables), or a hash of field name, 
operator, and value, expressed like "foo=" => 5, "bar<" => 3, 
"baz=" => "alliwantforchristmasismytwofrontteeth".

Without too much trouble, the persistence objects could probably also 
allow for persistable collections, so long as the objects in those 
collections are homogeneous and are themselves mapped in the XML mapping 
file.  So an attribute of object Foo could be a list of objects of type 
Baz, so long as Baz itself is mapped.
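
In the mapping file that might read something like (again, invented 
syntax):

  <class name="Foo" table="foos">
    <collection attr="bazzes" of="Baz" join_column="foo_oid"/>
  </class>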

The connection between the OPS client and the OPS server would take the 
form of a message stating the required action, followed by a 
content-length, and then the serialized object, where applicable.
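
From the client side a save might then look like this, with Marshal as 
one obvious choice of serializer (the host name is made up):

  require 'socket'

  def send_save(obj)
    payload = Marshal.dump(obj)
    sock = TCPSocket.new('ops-server', 4242)
    sock.write("SAVE\n")
    sock.write("Content-Length: #{payload.bytesize}\n\n")
    sock.write(payload)
    sock.close
  end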

Having said all of this: has anyone built something similar, are there 
existing technologies I should use while implementing this, or have I 
come up with a completely silly idea?

Additional ideas include adding a web-accessible monitor for the cache 
and object creation, as well as perhaps a separate naming service.

--Gabriel Emerson