On 2010-03-21, David Masover <ninja / slaphack.com> wrote:
> In this case, that's an assumption based on the device, and it's the reason we 
> have all this stupidity about tethering. If the bandwidth is an issue, charge 
> per bit or raise the monthly fee. (There's also the issue that not all 
> smartphone users want or need Internet on their phone.)

> Instead, we get potentially a lower monthly fee, plus some arbitrary and 
> asinine restrictions on how we use it, in the _hope_ that we use less 
> bandwidth. Why not simply have a lower-bandwidth plan?

Probably because you can't cap bandwidth -- the phone is effectively broken
if you do -- and people freak out if you charge them huge amounts of money
for bandwidth.

> I've done this sort of thing before. It's not as hard as it's made out to be, 
> and even a Ruby without everything it'd "benefit from having" would likely be 
> better than raw C.

Maybe, but not enough to justify the effort, especially since most of the
stuff I want is already written and working in C or shell.

> Think about it -- Ruby without readline support (so IRB sucks), or raw C? For 
> the cases where I'd be considering Ruby at all, the lack of readline wouldn't 
> stop me.

The point is, it's an example of a case where I can use an existing
program written in C, but I can't use an existing one written in Ruby.

I'm talking about *using* programs -- I write programs so that people can
use them.

So I tend to write things in C for portability if I want people to have
access to those things.

>> My guess would be virtually none.  Certainly, virtually none could be done
>> with acceptable efficiency.

> Configuration, at the very least.

I don't see any configuration to do.

> Also, I'm not sure I see efficiency as a 
> burden here, for the most part -- it looks like the intended use here is 
> compiling, which is already slow anyway,

Yeah, that's the thing.  Right now, a build of the whole shebang that this
is being used with takes over an hour on an 8-core machine with 48GB of
memory and fast disks.  Running stuff inside my current library adds
about 16% to everything, give or take -- more with lots of small files.
Running inside a larger/slower language would be crippling.
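
To put rough numbers on that (the commands here are hypothetical, just to
illustrate the comparison -- the real build isn't invoked this way):

    # Baseline vs. wrapped build; "wrapper" stands in for whatever
    # actually runs the build inside the library.
    time make -j8 image             # over an hour on its own
    time wrapper make -j8 image     # ~16% more: roughly another ten minutes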

> and is a domain where tools like make 
> (or even rake) are used also. The part which actually builds an image/package 
> is going to be IO-bound anyway.

You'd think, but no.  Running the same tar command can take SUBSTANTIALLY
longer under a wrapper like this.  Adding more processing complexity
noticeably affects that.

Yes, it's I/O bound, but I'm adding a lot of I/O to many I/O operations...
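
Counting syscalls makes the point; something like this (again, "wrapper" is
a stand-in, and the real thing may not be invoked as a command prefix):

    # strace skews the timing badly, but the per-syscall counts show how
    # much extra I/O the wrapped run does for the same tar invocation.
    strace -cf tar cf image.tar rootfs/
    strace -cf wrapper tar cf image.tar rootfs/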

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nospam / seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!