On Oct 27, 2007, at 4:31 PM, Tom Machinski wrote:

> Hi group,
>
> I'm running a very high-load website done in Rails.
>
> The number and duration of queries per-page is killing us. So we're
> thinking of using a caching layer like memcached. Except we'd like
> something more sophisticated than memcached.
>
> Allow me to explain.
>
> memcached is like an object, with a very limited API: basically
> #get_value_by_key and #set_value_by_key.
>
> One thing we need, that isn't supported by memcached, is to be able to
> store a large set of very large objects, and then retrieve only a few
> of them by certain parameters. For example, we may want to store 100K
> Foo instances, and retrieve only the first 20 - sorted by their
> #created_on attribute - whose #bar attribute equals 23.
>
> We could store all those 100K Foo instances normally on the memcached
> server, and let the Rails process retrieve them on each request. Then
> the process could perform the filtering itself. Problem is that it's
> very suboptimal, because we'd have to transfer a lot of data to each
> process on each request, and very little of that data is actually
> needed after the processing. I.e. we would pass 100K large objects,
> while the process only really needs 20 of them.
<snip>
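just to make the cost concrete: what's being described is pulling the
whole set over the wire and filtering client-side.  a minimal sketch in
plain ruby (Foo, #bar and #created_on are the stand-ins from the example
above; the data here is made up):

```ruby
# hypothetical sketch of the in-process filtering the poster describes.
# in the real setup all 100K objects would first be fetched from
# memcached, which is where the waste is.
Foo = Struct.new(:bar, :created_on)

# stand-in for 100K instances transferred whole to the rails process
foos = (1..100_000).map { |i| Foo.new(i % 50, Time.now - i) }

# the 20 objects actually needed: #bar == 23, sorted by #created_on
wanted = foos.select  { |f| f.bar == 23 }.
              sort_by { |f| f.created_on }.
              first(20)
```

so 100K objects cross the wire per request, and 20 survive the filter.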

i'm reading this as

   - need query
   - need readonly
   - need sorting
   - need fast
   - need server

and thinking: how is this not a readonly slave database?  i think that
mysql can either do this with a readonly slave *or* it cannot be done
with modest resources.
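the slave approach boils down to letting the database do the filter,
sort, and limit, so only the 20 needed rows ever leave the server.  a
sketch of the query (table and column names assumed from the example
above; an index on (bar, created_on) would let mysql avoid a filesort):

```ruby
# hypothetical sketch: the equivalent query pushed down to a mysql
# readonly slave.  only 20 rows come back, not 100K objects.
sql = <<-SQL
  SELECT * FROM foos
   WHERE bar = 23
   ORDER BY created_on
   LIMIT 20
SQL

# in rails this would run against a connection pointed at the slave,
# e.g. something like:  Foo.find_by_sql(sql)
```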

my 2cts.



a @ http://codeforpeople.com/
--
it is not enough to be compassionate.  you must act.
h.h. the 14th dalai lama