Sorry in advance for coming across as a raving lunatic :|

> (windowmanagers
> conforming to the ICCCM, not applications themselves, should decide
> how they are drawn.  but the applications convey to the windowmanager
> *what* needs to be drawn.)

Kind of like X(HT)ML + Cascading Style Sheets. You get the abstract UI 
object from the network and pipe it through an abstract-UI-to-specific-UI 
converter, which looks through the UI's style metadata, makes some hard 
decisions based on user preferences and system configuration, and 
creates a UI object conforming to the GUI toolkit of the day.
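A minimal sketch of that converter, with every name made up for 
illustration: an abstract widget description plus user preferences get 
resolved into a concrete toolkit widget.

```ruby
# Hypothetical sketch: resolve an abstract UI description against user
# preferences into a concrete toolkit widget. All names are invented.
ABSTRACT_UI = { type: :choice, options: ["Yes", "No"],
                style: { hint: :buttons } }

USER_PREFS = { prefer_keyboard: false }

# The abstract-UI -> specific-UI converter: one hard decision based on
# the style metadata and the user's configuration.
def to_concrete(widget, prefs)
  case widget[:type]
  when :choice
    if prefs[:prefer_keyboard]
      { toolkit: :menu, items: widget[:options] }      # keyboard-driven UI
    else
      { toolkit: widget[:style][:hint], items: widget[:options] }
    end
  else
    { toolkit: :label, text: widget.inspect }          # fallback rendering
  end
end

p to_concrete(ABSTRACT_UI, USER_PREFS)
# => {:toolkit=>:buttons, :items=>["Yes", "No"]}
```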


> though again, if there's an elegant way for the code to
> know when something won't connect properly, that's better for the
> programmer / user i think.

Yes. If we have the types of a pipe's input and output and a bunch of 
info about how the pipe command works, it's perfectly feasible to build 
an RDF ontology for describing pipe commands, describe each command 
using that ontology (thus creating a conversion graph), and then use 
that graph to build programs from wanted input and output types.
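For the graph part, a toy sketch (the commands and types are invented 
for illustration; the real descriptions would live in the RDF 
ontology): a breadth-first search over command descriptions that finds 
a pipeline from a wanted input type to a wanted output type.

```ruby
# Hypothetical sketch: commands described by their input and output types
# form a conversion graph; a breadth-first search finds the shortest
# pipeline from one type to another.
COMMANDS = [
  { name: "ps2pdf",    input: :postscript, output: :pdf },
  { name: "pdftotext", input: :pdf,        output: :text },
  { name: "wc",        input: :text,       output: :count }
]

def find_pipeline(from, to, commands)
  queue = [[from, []]]     # frontier: [current type, commands so far]
  seen  = [from]
  until queue.empty?
    type, path = queue.shift
    return path if type == to
    commands.each do |cmd|
      next unless cmd[:input] == type && !seen.include?(cmd[:output])
      seen << cmd[:output]
      queue << [cmd[:output], path + [cmd[:name]]]
    end
  end
  nil    # no conversion path exists
end

p find_pipeline(:postscript, :count, COMMANDS)
# => ["ps2pdf", "pdftotext", "wc"]
```

A real weighting function would replace the plain BFS with a shortest-
path search over edge costs (lossiness of the conversion, runtime, how 
semantically sensible the step is).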

The hard part is making a weighting function for the graph edges to get 
semantically correct results. And a type detection function to deduce a 
type for a file. The concrete file format is quite easy with extension 
globs and magic matching (using e.g. freedesktop.org's 
shared-mime-info database), but the semantic type is a good deal more 
work.
In the beginning it would likely be mostly human-generated metadata, 
but it is quite likely that some combination of bayesian classification 
and old-fashioned heuristics would yield good enough accuracy for 
classifying (text) documents, as seen with spam filters. And I suppose 
there's a wealth of data that can be scraped off all the little nooks 
and crannies of many files (EXIF data, ID3 tags, creation date, place 
in the directory structure, files that are often open at the same time, 
usual location on the screen, network traffic patterns...)
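The concrete-format half is simple enough to sketch; here's a toy 
stand-in for shared-mime-info's magic matching (the magic table below 
is hand-rolled for illustration, not the real database):

```ruby
# Toy magic-byte sniffer: map leading byte sequences to MIME types.
# A real implementation would load freedesktop.org's shared-mime-info
# database (globs, priorities, offsets) instead of this tiny table.
MAGIC = {
  "\x89PNG".b => "image/png",
  "%PDF".b    => "application/pdf",
  "GIF8".b    => "image/gif"
}

def sniff(bytes)
  MAGIC.each do |magic, mime|
    return mime if bytes.b.start_with?(magic)
  end
  "application/octet-stream"   # the semantic type needs far more work
end

p sniff("%PDF-1.4 ...")   # => "application/pdf"
```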

> i need to look over this code in more detail, but as an aside i would
> like to avoid ruby-threads (and threads in general) as a matter of the
> problems with shared-state-concurrency.

Agreed, state is a pain. As are loops. And flow control in general.

> a longer-term goal i am
> seeking is to extend ruby from pure-OO to pure-actors (as in carl
> hewitt's actors model, 'concepts techniques and models of computer
> programming' elaborates on this where the SICP does not).

The actor language paradigm is very interesting. Especially the parts 
about doing things with async message passing, assigning successors and 
passing computations around.
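A toy version of the async message passing in plain Ruby: one mailbox 
per actor, drained by a single thread, so sends return immediately and 
the actor's state is never shared between threads (names and design 
invented for illustration, not a real actors runtime):

```ruby
# Toy actor: a mailbox (Queue) drained by one dedicated thread, so
# sends are asynchronous and the actor handles one message at a time --
# whatever state the behavior touches is owned by that single thread.
class Actor
  def initialize(&behavior)
    @mailbox = Queue.new
    @thread = Thread.new do
      while (msg = @mailbox.pop) != :stop
        behavior.call(msg)
      end
    end
  end

  def send_async(msg)   # returns immediately; the actor runs later
    @mailbox << msg
  end

  def stop              # drain remaining messages, then join
    @mailbox << :stop
    @thread.join
  end
end

results = Queue.new
doubler = Actor.new { |n| results << n * 2 }
[1, 2, 3].each { |n| doubler.send_async(n) }
doubler.stop
out = 3.times.map { results.pop }
p out   # => [2, 4, 6]
```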

It seems I'm writing a sort of domain language with actor influences. 
Centered around events, search, list processing and automatic data 
conversions. Probably using Gnome Beagle for search and Ruby + friends 
for the rest.

e = EventHandler.new("Email")        # register a handler for Email events
def e.new(headers, body)             # called for each incoming email
  if People["work"].matches? headers.sender
    Devices[Audio].play body         # read work mail aloud
  end
end
MessageDaemon.Email.new(some_header, some_body)   # the daemon fires the event

Data conversion and events are somewhat done, Enumerable is good for 
list processing, no search yet. Who knows if that ever gets off the 
ground.


> and the interpreter (VM, compiler, whatever...) should already
> know and automate the optimal solutions to problems like "should this
> be threaded?"

And whether to run it on the local computer, send it to somewhere else 
on the local cluster, or even use remote services to achieve the goal.

Ideal programming language: think out loud "I wish I had some pizza 
here.." and the interpreter weighs the strength of your wish against 
the negative effects that getting a pizza would incur (cost in money, 
health effects, delivery time, psychological effects), and also takes 
into account outcomes of possible historical incidents regarding 
surprise pizza.
And then you'd just get a pizza. Or not.


> to quote from the TAoUP however, "computer time
> is cheap.  human time is expensive."

Hence you must tax both the computer and human component to the 
extremes to get to the global time usage minimum. Tax in a way that is 
most natural and effective for both. Human prowling the savannah of 
info looking for edible fruits, predators, prey and other humans, 
computer living in a world of high-speed serial computation. Computer 
communicating to human with 50MBps audio-video feed of as much data 
visible at once as the computer can display (show it all, let the 
visual center sort it out), human communicating to computer by talking, 
pointing, drawing and gesturing.

> my point is, any ruby code that actually sticks around to implement
> something like late-binding / lazy-evaluation streams / dataflow
> should rely on the transparency of the interpreter / VM to decide the
> optimal utilization of the underlying hardware

Yeah. Anything that lets me do more things in the same time. :)