On 19/10/2007, M. Edward (Ed) Borasky <znmeb / cesmail.net> wrote:

> In the high-level view, most "modern" operating systems -- Solaris,
> Windows, Linux and BSD/MacOS -- work the same way. There are minor
> variations on what things are called and various tuning knobs, but
> essentially you have pages on disk, page frames in RAM,
> page-fault-driven on-demand movement of code and data into RAM and some
> background processes/daemons/kernel threads that try to maintain a
> balance of all the many demands for page frames.
>
> When it works, it works well, and when it doesn't work, it fails
> spectacularly -- disk thrashing, out-of-memory process killers, response
> times on the order of minutes for one-second tasks, freezing screens,
> etc. And the solution is to add more RAM or have the software use less RAM.

Well, the memory subsystem is quite underdeveloped on the "general
purpose" OSes. You normally do not get resource accounting unless you
run a realtime or otherwise specialized OS, but you do at least get
priorities for CPU time. There is nothing like that for memory. It is
all just best effort, distributed more or less proportionally to the
number of pages each process has touched recently, and when memory
runs out something breaks more or less at random.
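
To make the contrast concrete: about the only knobs POSIX gives you
are nice() for CPU priority and setrlimit() for a hard cap on address
space. A minimal sketch in C (my own illustration, standard POSIX
calls only):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void)
    {
        /* CPU time: the scheduler has a real priority knob. */
        if (nice(10) == -1)
            perror("nice");

        /* Memory: the closest equivalent is a hard address-space cap.
           Past it, allocations simply fail; nothing is shared out by
           priority the way CPU time is. */
        struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &rl) == -1)
            perror("setrlimit");

        void *p = malloc(128 * 1024 * 1024);   /* bigger than the cap */
        printf("big allocation %s\n", p ? "succeeded" : "failed (NULL)");
        free(p);
        return 0;
    }

Exceed the cap and malloc() just returns NULL; there is no notion of
one process deserving memory more than another.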

>
> Now the killer is this: the platform (hardware and OS) designers make a
> bunch of compromises so that you can get "acceptable" performance for a
> lot of different languages -- compiled or interpreted, static memory
> allocation or dynamic memory allocation, explicit memory
> allocation/deallocation or garbage collection, etc. And the language
> designers make a bunch of compromises so that you can get "acceptable"
> performance on modern operating systems. It's almost as if the two types
> of designers communicate with each other only every fifteen years or so.

I cannot imagine what else you could do when you want an OS that runs
pretty much any language. All the OS can do is hand out pages, and
only the language runtime can manage the data inside those pages.
Unless you tailor the OS to one specific language or virtual machine,
you cannot get anything more.
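
For illustration, this is roughly what that division of labour looks
like in C: the OS hands the runtime whole pages (here via mmap()), and
how objects are laid out inside them is entirely the runtime's
business. The toy bump allocator below and its names are my own:

    #include <stdio.h>
    #include <stddef.h>
    #include <sys/mman.h>

    static char  *arena;      /* start of the mmap'd region             */
    static size_t arena_size; /* total bytes obtained from the OS       */
    static size_t arena_used; /* bytes handed out to the program so far */

    static void *runtime_alloc(size_t n)
    {
        n = (n + 15) & ~(size_t)15;      /* keep 16-byte alignment */
        if (arena_used + n > arena_size)
            return NULL;                 /* would need more pages  */
        void *p = arena + arena_used;
        arena_used += n;
        return p;
    }

    int main(void)
    {
        arena_size = 16 * 4096;          /* ask the OS for 16 pages */
        arena = mmap(NULL, arena_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (arena == MAP_FAILED)
            return 1;

        int *a = runtime_alloc(sizeof *a);   /* layout inside the pages */
        int *b = runtime_alloc(sizeof *b);   /* is invisible to the OS  */
        *a = 1; *b = 2;
        printf("%d %d\n", *a, *b);

        munmap(arena, arena_size);
        return 0;
    }

The OS never sees the individual objects, only the pages, which is why
it cannot do anything smarter than paging them in and out wholesale.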

The POSIX interface may make it easier to allocate by growing the
heap rather than by mapping individual pages. But even mapping
individual pages only helps in the situation where you have one huge
hole (which can be swapped out anyway) and data at the end of the
heap; that is just a special case of fragmentation. Clever allocators
can make fragmentation less likely and less severe, but in the end you
cannot eliminate it unless you have a means of compacting the data on
your heap. And that you must do yourself; the OS cannot do it for you.
A VM may do it for you if you use an interpreted language. You could
even modify your C compiler and runtime to use indirect pointers, but
then you would lose the single benefit of C: binary compatibility.
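
To sketch the indirect-pointer idea (a made-up toy, all names mine,
not a real allocator): the program holds handles into a table instead
of raw pointers, so the allocator is free to slide live objects
together and just fix up the table.

    #include <stdio.h>
    #include <string.h>

    #define HEAP_SIZE   1024
    #define MAX_HANDLES 32

    static char   heap[HEAP_SIZE];
    static size_t heap_used;

    /* the indirection table: handle -> current offset and size */
    static struct { size_t off, size; int live; } table[MAX_HANDLES];

    typedef int obj_handle;

    static obj_handle h_alloc(size_t n)
    {
        if (heap_used + n > HEAP_SIZE)
            return -1;
        for (obj_handle h = 0; h < MAX_HANDLES; h++)
            if (!table[h].live) {
                table[h].off  = heap_used;
                table[h].size = n;
                table[h].live = 1;
                heap_used += n;
                return h;
            }
        return -1;
    }

    static void *h_deref(obj_handle h) { return heap + table[h].off; }
    static void  h_free(obj_handle h)  { table[h].live = 0; }

    /* compaction: slide live objects down and update the table; this
       is only possible because nobody holds a raw pointer across it */
    static void compact(void)
    {
        size_t dst = 0;
        for (obj_handle h = 0; h < MAX_HANDLES; h++) {
            if (!table[h].live)
                continue;
            memmove(heap + dst, heap + table[h].off, table[h].size);
            table[h].off = dst;
            dst += table[h].size;
        }
        heap_used = dst;
    }

    int main(void)
    {
        obj_handle a = h_alloc(100), b = h_alloc(100), c = h_alloc(100);
        strcpy(h_deref(c), "survives compaction");
        h_free(b);                   /* leaves a hole in the middle */
        compact();                   /* hole gone, c moved down     */
        printf("%s, heap_used=%zu\n", (char *)h_deref(c), heap_used);
        (void)a;
        return 0;
    }

The price is exactly what the paragraph above says: every access goes
through the table, and any existing binary that expects raw C pointers
stops working.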

>
> What's even more interesting is that proposals to change this -- to
> integrate language design and platform design -- almost always fall back
> to an experiment that was tried and failed (commercially, not
> technically): Lisp machines. :)

Well, that's where you end up if you manage the language objects in
the OS (assuming that a Lisp machine is the thing where you basically
run the Lisp runtime on the bare metal). It is perfectly integrated,
but you lose the ability to run other languages easily because you
have to map them somehow onto your chosen language. For some languages
that are similar enough this might be easy, for others difficult, and
for some (near) impossible.

It's been done for several languages already. You get a nice toy and
perhaps an environment for embedded or specialized systems, but not a
general purpose desktop system, because there you want the ability to
run any language in which a piece of software happens to be written.

Thanks

Michal