"Robert Klemme" <bob.news / gmx.net> writes:

> "Mikael Brockman" <mikael / phubuh.org> schrieb im Newsbeitrag
> news:87pt1zav3o.fsf / igloo.phubuh.org...
> 
> >> Your example is quite special.  Usually, when writing servers that
> >> serve huge chunks of data (like HTTP servers that also serve binary
> >> content, e.g. for download) then the usual (and proper) approach is to
> >> copy the file in chunks.  Nobody writes a server that reads a 1GB file
> >> into memory first before sending it over the line.
> >
> > True.  The files I'm sending are only a couple of megabytes.  Still
> > takes a long time to send to, say, someone on 56K.
> 
> I'm sorry, what do you mean by this?  Typical buffer sizes are usually
> much smaller than "a couple of megabytes" so these files would be sent
> in chunks, too.

Yeah, you're right.  But sending any data to a high-latency or timed-out
connection can still block for a long time.
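For concreteness, the chunked copy you describe looks roughly like this
(a sketch; the method name and chunk size are made up):

```ruby
# Sketch: serve a file in fixed-size chunks instead of reading the
# whole thing into memory first.  CHUNK and send_file are illustrative.
CHUNK = 16 * 1024

def send_file(sock, path)
  File.open(path, "rb") do |f|
    while (chunk = f.read(CHUNK))
      sock.write(chunk)  # each write may still block on a slow (56K) peer
    end
  end
end
```

Memory stays bounded, but each write() can still stall the serving
thread on a slow connection, which is the problem I was getting at.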

> Btw, if the only problem is blocking while sending huge chunks of
> data, then what *I* would do is this: I'd override send() (and others
> that might be necessary) to do just that.  Then one can still use the
> simple threaded approach and does not have to care about thread
> blocking. (Maybe this should even be part of the std lib's socket
> implementation?)
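If I follow, your override idea would be something like this (an
untested sketch; the class name and chunk size are my own, not anything
in the std lib):

```ruby
require 'socket'

# Sketch of overriding the write path so large payloads go out in
# chunks.  A thread-per-client server can then use it unchanged.
class ChunkedSocket < TCPSocket
  CHUNK = 16 * 1024

  def write(data)
    sent = 0
    while sent < data.bytesize
      # IO#write returns the number of bytes written for this chunk
      sent += super(data.byteslice(sent, CHUNK))
    end
    sent
  end
end
```

That keeps the simple threaded approach, though each chunked write can
still block the thread serving that one client.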

If blocking on read() is a problem too, then Multiplexer is essentially
what you get when you solve that problem and refactor away the (in my
case) redundant multi-threading.
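The core of that refactoring is a single IO.select loop in place of the
per-client threads.  Roughly (an illustrative echo server, not
Multiplexer's actual API; names and the exit condition are made up):

```ruby
require 'socket'

# Sketch: one select loop handles accepts and reads for all clients,
# so no thread ever blocks on a single slow connection.  For the sake
# of a runnable example it exits once every client has disconnected.
def echo_multiplexer(server)
  clients = []
  served = false
  until served && clients.empty?
    readable, = IO.select([server, *clients], nil, nil, 1)
    next unless readable
    readable.each do |io|
      if io.equal?(server)
        clients << server.accept
        served = true
      else
        case (data = io.read_nonblock(4096, exception: false))
        when nil             # EOF: peer closed the connection
          clients.delete(io)
          io.close
        when :wait_readable  # nothing to read after all; skip
        else
          io.write(data)     # echo the chunk back
        end
      end
    end
  end
end
```

One thread, many sockets, and a read on a stalled connection never
holds anything else up.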