"M. Edward (Ed) Borasky" <znmeb / cesmail.net> writes:

> ara.t.howard wrote:
>>
>> On Oct 18, 2007, at 9:32 AM, Yohanes Santoso wrote:
>>
>>> I don't favour the long-running process model for servers. I prefer
>>> to fork() for each request, so I'm rarely bothered by whatever Ruby
>>> GC quirks I may have triggered. I understand that this approach is
>>> not trendy anymore and RoR does not support this model, but I'm just
>>> throwing it out in the open as an alternative work-around where
>>> possible.
>>>
>>
>> that's quite interesting because, while i'm not the memory expert
>> you are, i've settled on exactly that model for the many, many server
>> processes i've written for 24x7 systems: the robustness simply cannot
>> be beaten.
>>

Ara, my knowledge is limited to the few ad-hoc experiments I've done.
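
For concreteness, the fork-per-request model I have in mind is roughly
the following (an untested sketch; the port number and the one-line
handler are made up):

  require 'socket'

  server = TCPServer.new(9000)      # port is arbitrary for this sketch

  loop do
    client = server.accept
    pid = fork do
      # Child: handle exactly one request, then exit. Whatever garbage
      # the handler creates dies with the process.
      client.puts "hello from #{Process.pid}"
      client.close
      exit!                         # skip at_exit handlers in the child
    end
    client.close                    # parent no longer needs this socket
    Process.detach(pid)             # reap the child without blocking
  end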

Ed, 

> fork() (or clone() in Linux) is cheap ... it's actually
> *instantiating* the thread or process that costs! 

What do you mean by 'instantiating'? When you fork(), a new process is
created and scheduled. That seems instantiated enough for me.
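
For what it's worth, the raw cost of a fork() is easy enough to measure
on a given box; a rough sketch (the iteration count is arbitrary and
the child does nothing):

  require 'benchmark'

  n = 1_000
  secs = Benchmark.realtime do
    n.times do
      pid = fork { exit! }          # exit! skips at_exit handlers/finalizers
      Process.wait(pid)
    end
  end
  printf "%.3f ms per fork+wait\n", (secs / n) * 1000.0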

> Depending how smart your kernel is, you could be doing it one page
> fault at a time. 
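
If by that you mean copy-on-write, then as I understand it the child
only pays for the pages it actually writes to. A rough, Linux-only way
to watch that happen (the 50 MB size and the /proc parsing are just
for illustration):

  def private_dirty_kb
    File.readlines("/proc/self/smaps").grep(/^Private_Dirty:/).
         inject(0) { |sum, line| sum + line[/\d+/].to_i }
  end

  big = "x" * (50 * 1024 * 1024)    # ~50 MB resident in the parent

  pid = fork do
    before = private_dirty_kb       # small: pages still shared with parent
    big.upcase!                     # each write faults in and copies a page
    puts "child dirtied ~#{private_dirty_kb - before} kB"
    exit!
  end
  Process.wait(pid)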



> And no matter *how* smart your kernel is, above a certain ratio of
> virtual process size over real process size, it's going to start
> thrashing.

Do you have an example? I don't quite follow. I'm not sure why the
ratio of VSZ to 'real process size' (I assume you don't mean RSZ, the
resident size) matters. Can an approximate value for that ratio be
determined?

My understanding is that a process thrashes because its working set
cannot be kept resident in its entirety during the thrashing period.
This could be because of limited resources (memory pressure from
other processes, etc.) or a bug in the kernel.


> That's what's so attractive about lightweight communicating
> processes -- emphasis on *lightweight*. It doesn't cost much to
> start them up, move them around, kill them, etc.



Regards,
YS.