> Anyway, below is the code. I ran it through the profiler, but the top
> two most costly ops were Dir.foreach, which I don't see any way to
> optimize*, and the loop that gathers environment information, which I
> again see no way to optimize.

Could you post your profiling output? If you run it under "time", how 
much user CPU versus system CPU are you using?

Have you tried using Dir.open("/foo").each instead of Dir["/foo/*"]? 
Maybe the globbing is expensive.
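A quick sanity check before swapping: the two spellings walk the same
entries (modulo "." and ".." and dotfiles), so the change is safe to try.
A sketch, using a throwaway temp directory as a stand-in for your real one:

```ruby
require 'tmpdir'

# Sketch: confirm Dir.foreach and Dir[] (glob) see the same entries,
# so switching away from the glob doesn't change behaviour.
Dir.mktmpdir do |dir|
  %w[a b c].each { |name| File.write(File.join(dir, name), "") }

  globbed = Dir["#{dir}/*"].map { |path| File.basename(path) }.sort

  walked = []
  Dir.foreach(dir) do |entry|
    next if entry.start_with?(".")   # glob skips dotfiles; match that
    walked << entry
  end

  puts walked.sort == globbed   # => true
end
```

If the glob really is the hot spot, Dir.foreach avoids building the
pattern-matching machinery entirely.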

Your environment loop does a fresh sysread(1024) for each var=val pair, 
even if only (say) 7 bytes of the previous read were consumed. You 
would make far fewer system calls if you read one big chunk and chopped 
it up afterwards. It may also avoid unaligned reads.
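Something along these lines (a sketch with my own names, not your code;
it assumes the data is a NUL-separated run of var=val pairs, as in a
/proc environment file):

```ruby
# Sketch: slurp the whole environment block in large chunks up front,
# then split it in memory, instead of one small sysread per pair.
def read_environment(io)
  buffer = +""
  begin
    loop { buffer << io.sysread(65_536) }
  rescue EOFError
    # reached end of the data
  end

  env = {}
  buffer.split("\0").each do |pair|
    key, value = pair.split("=", 2)
    env[key] = value if key && value
  end
  env
end
```

One big read plus an in-memory split replaces dozens of small system
calls, which is exactly where your system-CPU time would be going.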

I would also be tempted to write one long unpack instead of lots of 
string slicing and repeated unpacking. The saving may be negligible, 
but the code should end up smaller and simpler. e.g.

  struct = ProcTableStruct.new(*psinfo.unpack(<<PATTERN))
i i i i
i i i i
i i L L
L x4i ss
...etc
PATTERN
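(The heredoc layout is safe because whitespace, including newlines, is
ignored in pack/unpack templates. A quick demonstration with made-up
data:)

```ruby
# Demonstration: whitespace in an unpack template is ignored, so a
# multi-line heredoc pattern unpacks identically to a compact one.
data = [1, 2, 3, 4].pack("iiii")

compact   = data.unpack("iiii")
multiline = data.unpack(<<PATTERN)
i i
i i
PATTERN

puts compact == multiline   # => true
```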

Perhaps you could combine it with your struct building, e.g.

      FIELDS = [
         [:flag,"i"],      # process flags (deprecated)
         [:nlwp,"i"],      # number of active lwp's in the process
         ...
         [:size,"s"],      # size of process in kbytes
         [:rssize,"s"],    # resident set size in kbytes
         [nil,"x4"],       # skip pr_pad1 (lowercase x skips forward)
         ... etc
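To finish the thought: the one table can drive both the unpack pattern
and the struct, since "x" directives produce no value and so line up
with the nil names. A sketch with made-up fields and a fake record, not
the real psinfo layout:

```ruby
# Sketch: one table drives both the unpack pattern and the struct.
# Field names, formats, and the packed record are illustrative only.
FIELDS = [
  [:flag,   "i"],   # process flags
  [:nlwp,   "i"],   # number of active lwps
  [nil,     "x4"],  # skip 4 bytes of padding (yields no value)
  [:rssize, "l"],   # resident set size in kbytes
]

ProcTableStruct = Struct.new(*FIELDS.map(&:first).compact)

pattern = FIELDS.map { |_, fmt| fmt }.join(" ")
psinfo  = [7, 2, 0, 1234].pack("i i i l")  # fake binary record

struct = ProcTableStruct.new(*psinfo.unpack(pattern))
puts struct.rssize   # => 1234
```

That way the field name, format, and comment for each slot live on one
line, and the pattern can never drift out of sync with the struct.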

HTH,

Brian.
-- 
Posted via http://www.ruby-forum.com/.