In article <10625.206.157.248.34.1118348101.squirrel / www.netrox.net>,
Ryan Leavengood <mrcode / netrox.net> wrote:
>Gavri Fernandez said:
>>
>> So you're saying that when the performance requirements cross a
>> certain threshold, interpreted languages should not be used.
>> The idea here is that if Ruby is faster, the threshold where a
>> switch needs to be made is shifted.
>
>Of course, and I'm all for that. I want Ruby to be as fast as possible. In
>fact I have some application domains in mind I'd like to use Ruby for that
>may run into this exact problem (sound processing.)
>
>But still, at this point in the state of computing, I would not use Ruby
>in certain applications:
>
>- operating system level code.
>- heavy duty 3D rendering.
>- device drivers.
>- any major number crunching (math, video processing, low-level image
>manipulation.)
>
>But hey, maybe with some special hardware (a Ruby Chip?), 

RubyChip: Introduced in 2010.  Apple adopts it in 2012, moves away from 
Intel. ;-)

>all the above
>would be possible and fast with Ruby. That may be the next level of
>computing: hardware accelerated high level languages.

Well, maybe not so new.  There were Lisp machines back in the '80s as I 
recall.  There was also a Forth chip made by Chuck Moore back then.
Of course there are also HW implementations of the JVM (PicoJava).

Now that we've got fairly inexpensive FPGAs (like the Spartan 3 family 
from Xilinx) it's possible that you can do this sort of thing in FPGAs.
It's certainly an intriguing idea. 
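For concreteness, the "number crunching" threshold being discussed is easy to feel with a tight numeric loop. This is just an illustrative sketch (the workload is made up, not from anyone's application): in a pure-Ruby loop like this, per-operation interpreter overhead dominates, which is exactly the kind of code people end up rewriting in C or, in the thread's speculation, pushing into hardware.

```ruby
require 'benchmark'

# Illustrative micro-benchmark (hypothetical workload): sum the squares
# of the first n integers in a plain Ruby while-loop. Tight loops like
# this are where an interpreter's per-operation cost shows up most.
n = 100_000
sum = nil
elapsed = Benchmark.realtime do
  sum = 0
  i = 0
  while i < n
    sum += i * i
    i += 1
  end
end
puts "sum of squares below #{n}: #{sum} (#{format('%.3f', elapsed)}s)"
```

The same loop in C typically runs orders of magnitude faster, which is the gap the "Ruby Chip" joke is really about.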

Phil