> -----Original Message-----
> From: Tobias Reif [mailto:tobiasreif / pinkjuice.com]
> Sent: Saturday, November 03, 2001 8:02 PM
> To: ruby-talk ML
> Subject: [ruby-talk:24273] multiple serverside processes
>
>
> Hi,
>
> with Ruby serverside (not mod_ruby):
> An http request is made requesting a Ruby program. This starts Ruby, and
> the Ruby program is executed. Let's say it creates, deletes, and updates
> files, among other things.
> Now a second http request is made to the same Ruby program.
>
> 1. Will a new Ruby interpreter be started?
It totally depends on the server software. If it's plain Apache with mod_cgi
(or anything else using a CGI-style interface), then the answer is yes - a
fresh interpreter is started for each request. With mod_ruby the interpreter
is embedded in Apache and persists between requests, though your program is
still re-executed each time.
> 2. Can the two running Ruby programs interfere with each other?
>     For example: could the one instance try to update a file, while the
>     other instance reads it, and the whole shebang goes wrong?
Yeah, this is the case.

> 3. If so: how to solve it? (mutex is inside one program, isn't it?)
If concurrent file access is your only problem, use File#flock (see the man
page for flock(2)) or fcntl (flock is often implemented as a call to fcntl).
Simple file locking still won't fix the problem of one application reading a
file while another one unlinks it (though the descriptor should remain valid
until it's closed).
Also, opening files for writing truncates them, which may introduce a subtle
problem: you cannot lock the file before it's truncated. So you need to try
opening it in read-write mode first, then lock it, then truncate it; only if
the file is not found should you open it for writing.

A piece of sample code (from a real piece of software):

    # Open a file, locking it.
    # Handles truncating open modes ('w...') specially to prevent
    # EOFErrors when multiple processes read and write the file.
    def File.open_locked(name, mode)
      new_mode = mode.dup
      if new_mode.sub!(/\Aw\+?/, 'r+')
        begin
          # Open without truncating, take the lock, then truncate.
          (file = File.open(name, new_mode)).flock File::LOCK_EX
          file.truncate 0
        rescue Errno::ENOENT
          # File doesn't exist yet, so the original 'w' mode is safe.
          (file = File.open(name, mode)).flock File::LOCK_EX
        end
      else
        (file = File.open(name, mode)).flock File::LOCK_EX
      end
      if block_given?
        begin
          yield file
        ensure
          file.close
        end
      else
        file
      end
    end

Also, to be safe, I recommend unlinking files this way:
File.open_locked('file', 'r') { |file| File.unlink file.path }
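
A minimal, self-contained sketch of the same lock-then-truncate idea (the
file name and counter format here are just assumptions for the demo): an
integer counter in a file gets incremented safely even if several processes
do it at once.

```ruby
require 'tmpdir'

# Increment an integer counter stored in a file, safely across processes.
def increment_counter(path)
  # Open read-write so the existing contents survive until we hold the lock.
  File.open(path, 'r+') do |f|
    f.flock(File::LOCK_EX)   # blocks until we own the exclusive lock
    value = f.read.to_i + 1
    f.rewind
    f.truncate(0)            # truncate only after the lock is held
    f.write(value.to_s)
  end                        # closing the file releases the lock
end

path = File.join(Dir.tmpdir, "counter_demo_#{Process.pid}.txt")
File.write(path, '0')
3.times { increment_counter(path) }
puts File.read(path)   # => 3
File.unlink(path)
```

The key detail is the same as in open_locked above: the file is opened in a
non-truncating mode, and truncation happens only after the exclusive lock is
acquired.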

On the other hand, any locking may cause a race condition, so data integrity
is at risk, especially if an integral set of data structures is spread across
different files.
Also, locking isn't even close to being fast, needs additional OS resources,
isn't implemented the same way everywhere (e.g. the cooperative use of shared
vs. exclusive locks differs between Linux and FreeBSD), or may not be
implemented at all.
You can also use semaphores and other IPC mechanisms, but I don't think it's
a viable solution if you want something really good for the future.
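
If all you need is to serialize a few critical sections across processes, a
dedicated lock file used as a cross-process mutex is often simpler than SysV
semaphores. A sketch (the lock file path is just an assumption for the demo):

```ruby
require 'tmpdir'

# Run a block while holding an exclusive lock on a dedicated lock file,
# serializing the critical section across all cooperating processes.
def with_global_lock(lock_path)
  File.open(lock_path, File::RDWR | File::CREAT, 0644) do |lock|
    lock.flock(File::LOCK_EX)
    yield
  end   # the lock is released when the file is closed
end

lock_path = File.join(Dir.tmpdir, 'app_demo.lock')   # example path
with_global_lock(lock_path) do
  # Update several related files here; no other process holding the
  # same lock file can interleave with this block.
  puts 'inside critical section'
end
```

Because the lock file itself is never read or truncated, this avoids the
open-mode subtleties above - it exists only to be locked.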

If you're trying to make something more complex than a CGI script, I
recommend using some sort of single-process application server technology -
it holds up well under heavy load, is easier to maintain, can use caching
effectively, is usually less sensitive to disk performance, and is just
plain faster and better. The only real drawback is limited scalability -
unless you use multithreading. For a simple example of this, look at
FastCGI (www.fastcgi.com if I'm not too dumb).
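
To make the single-process idea concrete, here's a toy sketch (not FastCGI
itself - just plain TCP, with a hypothetical hit counter) of why one
long-lived process that owns all its state needs no file locking at all:

```ruby
require 'socket'

# One process handles every request, so shared state like this counter is
# never touched concurrently from separate interpreters.
counter = 0
server = TCPServer.new('127.0.0.1', 0)   # port 0 = pick any free port

# Serve exactly two requests, one at a time, then stop.
handler = Thread.new do
  2.times do
    client = server.accept
    client.gets                 # read (and ignore) the request line
    counter += 1                # no lock needed: single process, serial access
    client.write "HTTP/1.0 200 OK\r\n\r\nhits=#{counter}"
    client.close
  end
end

port = server.addr[1]
responses = 2.times.map do
  s = TCPSocket.new('127.0.0.1', port)
  s.write "GET / HTTP/1.0\r\n\r\n"
  body = s.read                 # read until the server closes the socket
  s.close
  body
end
handler.join
puts responses.last[/hits=\d+/]   # => hits=2
```

A CGI-style setup would need a locked counter file for the same effect; here
the counter survives between requests simply because the process does.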

_____________________________________________________________
Zagorodnikov Aristarkh | Lead Programmer | xm / bolotov-team.ru
bolotov.ru creative group
http://www.bolotov.ru/