On Fri, 8 Oct 2004 01:36:37 +0900, Brian Candler wrote
> 
> That's another O<->R mapping engine? Looks like there's a big 
> ecosystem to choose from :-)

In alphabetical order:

Active Record, Criteria, Kansas, Lafcadio, and ndb are the five that come to 
mind immediately.
 
> My main worry is users A and B both pulling up the same record onto a
> screen, making changes, and then writing back both; the one who gets 
> there first risks having their changes overwritten. Checking the old 
> values have not been changed as part of an atomic update is simple 
> and robust, and doesn't require record locking.

(*nod*)  This makes a great deal of sense to me.
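
For what it's worth, generating that atomic check-and-update is pretty 
simple.  A minimal sketch (table and column names are invented, and the 
quoting is naive; don't use it on real input):

    # Include the values you originally read in the WHERE clause, so the
    # UPDATE only succeeds if nobody has changed the row in the meantime.
    def update_sql(table, id, old, new)
      q     = lambda {|v| "'" + v.to_s.gsub("'", "''") + "'"}
      set   = new.map {|col, val| "#{col} = #{q.call(val)}"}.join(', ')
      guard = old.map {|col, val| "#{col} = #{q.call(val)}"}.join(' AND ')
      "UPDATE #{table} SET #{set} WHERE id = #{id} AND #{guard}"
    end

    old = {'email' => 'brian@old.example'}
    new = {'email' => 'brian@new.example'}
    puts update_sql('users', 7, old, new)
    # => UPDATE users SET email = 'brian@new.example'
    #    WHERE id = 7 AND email = 'brian@old.example'

If the UPDATE reports zero affected rows, the other writer won, and you can 
re-read and retry, or just report the conflict.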
 
> OK, I think perhaps we're talking about something else. With AR 
> (which I've only looked at briefly), you can make changes to an 
> object within a transaction, and if it fails, the objects themselves 
> are rolled back to their state at the start of the transaction. This 
> is done using Austin Ziegler's Transaction::Simple library, which 
> just keeps a copy of the object in marshalled form in an instance 
> variable. It rolls it back using, in outline:
> 
>     r = Marshal.restore(@__foo__)
>     self.replace(r) if respond_to?(:replace)
>     r.instance_variables.each do |i|
>       instance_variable_set(i, r.instance_variable_get(i))
>     end
> 
> I was just thinking that if you're keeping the properties in a hash, 
> and have a separate hash for their snapshot values, then you get 
> this capability for free:
> 
>     @props = @oldprops.dup
> 
> But it's not clear to me whether the best approach is to have 'obj'
> containing both old and new values, or whether you should just have two
> separate objects representing then and now:
> 
>     obj1 = $db.get(n)
>     obj2 = obj1.dup
>     ... make changes to obj2
>     obj2.save(obj1)   # => generates SQL to update obj1 to obj2 atomically

Ah.  Yeah, what I was talking about is the case where one wants to query 
data from a db and then do things with that data, possibly changing the data 
in the objects, but without serializing the changes back to the db, or at 
least not serializing them back immediately, and without having to do 
everything within a transaction.
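
Copying the rows into plain value objects gets you most of the way there.  
A hypothetical sketch (none of this is Kansas' actual interface):

    require 'ostruct'

    # Copy query results into plain OpenStructs, so mutating them
    # afterwards can't touch the database.
    def detach(rows)
      rows.map {|row| OpenStruct.new(row)}
    end

    people = detach([{'name' => 'Kirk', 'list' => 'ruby-talk'}])
    people.first.name = 'Brian'   # lives only in memory; nothing hits the db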

You are just talking about the rollback implementation.  Kansas' rollback 
implementation needs some work, actually, but right now it keeps on hand 
every value that every field has held between the start of the transaction 
and the commit.  There's just no user-level interface for accessing those 
values.

In the case of an object rollback, whether done explicitly by one's code or 
because an exception was thrown during the update, Kansas rolls each of the 
fields back to its original value.  So it's similar to your first example: 
simple, but in my experience it works well in practice.
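
In code, the idea looks about like this (a toy sketch of the snapshot 
approach, not Kansas' actual internals):

    # Keep the live properties in one hash and their original values in
    # another; rollback becomes a one-line dup, just as you described.
    class Record
      def initialize(props)
        @props    = props.dup
        @oldprops = props.dup   # snapshot taken when the row is read
      end

      def [](key)
        @props[key]
      end

      def []=(key, val)
        @props[key] = val
      end

      def dirty?
        @props != @oldprops
      end

      def rollback
        @props = @oldprops.dup
      end

      def commit
        @oldprops = @props.dup
      end
    end

    r = Record.new('name' => 'Kirk')
    r['name'] = 'Brian'
    r.rollback
    r['name']   # => "Kirk"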

> Dunno if anyone is interested in this... I was going to add a 
> mechanism to convert XPATH queries into SQL queries, and was 
> starting to think about how different element types could be mapped 
> to different tables, at which point it starts to look like an OR 
> mapping solution. At the moment there are just global 'elements' and 
> 'attributes' tables. I will look at Kansas, AR, NDB and others and 
> see what good ideas I can steal from them :-)
> 
> But in principle an object with attributes should map quite nicely to
>    <class attr1=val1 attr2=val2...>
> Having a hierarchy can be useful too, e.g. for access control, where 
> a user can only "see" objects which are below them in the tree. And 
> XML is still useful as an export/import tool, even if most people on 
> this list use YAML anyway :-)

I'm interested.  It's always good to see what other ideas are floating 
around that one can learn from or piggyback off of!
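
To make the <class attr1=val1 attr2=val2...> mapping concrete, here's a 
rough sketch using REXML from the standard library (purely illustrative; 
it's not taken from any of the libraries above):

    require 'rexml/document'

    # Serialize a properties hash as attributes on an element named
    # after the class.
    def to_element(class_name, props)
      el = REXML::Element.new(class_name)
      props.each {|k, v| el.add_attribute(k.to_s, v.to_s)}
      el
    end

    puts to_element('user', 'name' => 'Brian', 'role' => 'admin')
    # => <user name='Brian' role='admin'/>  (attribute order may vary)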


Kirk Haines