On 8/28/05, Timothy Hunter <cyclists / nc.rr.com> wrote:
> Joe Van Dyk wrote:
> > On 8/28/05, Timothy Hunter <cyclists / nc.rr.com> wrote:
> >
> >>Joe Van Dyk wrote:
> >>
> >>>On 8/26/05, Ara.T.Howard <Ara.T.Howard / noaa.gov> wrote:
> >>>
> >>>
> >>>>On Sat, 27 Aug 2005, Joe Van Dyk wrote:
> >>>>
> >>>>
> >>>>
> >>>>>Hm, I'm starting to think there's something wrong with my magick
> >>>>>installation.  :(
> >>>>
> >>>>i've found that the only way to go with imagemagick is to compile from
> >>>>source.  the redhat fedora and enterprise rpms are totally hosed: for
> >>>>example, lossless jpeg2000 compression isn't actually lossless - unless
> >>>>you build from source, which takes hours due to all the bloody
> >>>>dependencies.  in any case i thought i'd let you know, in case your
> >>>>installation is a redhat package.
> >>>>
> >>>>note:  this info is about 4 months old - it may have been fixed by now...
> >>>
> >>>
> >>>Thanks... this is on an ImageMagick installation compiled from source
> >>>last night, though (version 6.2.4), so I'm not sure what's going on.
> >>>
> >>>If I have a bunch of RGB values from 0-255, do you know of a way to
> >>>create an image from them?  Apparently the MaxRGB on my installation
> >>>is 65535.  The image is around 10k x 10k pixels, so speed is fairly
> >>>important.
> >>>
> >>>
> >>
> >>If speed is important then the best thing to do is to build a new
> >>ImageMagick using the --with-quantum-depth=8 option. Using 8-bit depth
> >>images considerably reduces IM's memory and CPU requirements and it
> >>makes the channel intensities range from 0-255 instead of 0-65535, more
> >>in line with your expectations. For 100-million-pixel images I think it
> >>would be worth the trouble.
> >>
> >>However, if you don't want to re-install IM and you don't mind paying
> >>for a couple extra bit operations per channel, you can convert 8-bit
> >>channels to 16-bit channels like this:
> >>
> >>red16 = (red8 << 8) | red8
> >>
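> >>Spelled out in Ruby (a minimal, untested sketch; the sample value is
> >>arbitrary):
> >>
> >>    red8  = 200                  # any 8-bit channel value, 0-255
> >>    red16 = (red8 << 8) | red8   # => 51400, i.e. 200 * 65535 / 255
> >>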
> >>Lastly, take a look at the #store_pixels method. This method lets you
> >>replace pixels in an image a section at a time, where a section can be a
> >>single row or column of pixels, or for that matter any rectangle. This
> >>might be a good compromise between pixel_color and constitute.
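> >>
> >>A sketch of the row-at-a-time variant (untested, and assuming your
> >>data is an array of rows, each row an array of [r, g, b] triples in
> >>the 0-255 range; width, height, and rgb_rows are placeholders):
> >>
> >>    require 'RMagick'
> >>
> >>    img = Magick::Image.new(width, height)
> >>    rgb_rows.each_with_index do |row, y|
> >>      pixels = row.map do |r, g, b|
> >>        Magick::Pixel.new((r << 8) | r, (g << 8) | g, (b << 8) | b)
> >>      end
> >>      img.store_pixels(0, y, width, 1, pixels)  # replace one full row
> >>    end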
> >
> >
> > I'll try the bitshifting approach, thanks.
> >
> > Previously, I had built up a lookup table that looked like (I think):
> >
> > color_lookup_table = Array.new
> > 256.times { |i| color_lookup_table << i * Magick::MaxRGB / 255 }
> >
> > And then did a lookup on that table for each color.  Do you think the
> > bitshifting approach will be faster than an array lookup?
> >
> >
> I don't know. My gut feeling is yes, but if you're going to be working
> on 100,000,000-pixel images then it would be worth the trouble to
> actually compare the two approaches. A little difference per pixel
> would add up quickly :-)
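> 
> Something like this would settle it (a sketch using the standard
> benchmark library; n and the sample value are arbitrary):
> 
>     require 'benchmark'
> 
>     lut = (0..255).map { |i| (i << 8) | i }
>     n = 5_000_000
> 
>     Benchmark.bm(8) do |bm|
>       bm.report('lookup')   { n.times { lut[200] } }
>       bm.report('bitshift') { n.times { (200 << 8) | 200 } }
>     end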
> 
> Of course 100,000,000-pixel images are going to have resource
> constraints besides CPU. At 16 bits per channel each pixel will
> require 8 bytes plus some per-image overhead, so each image will
> occupy a bit over 800MB of memory. Using quantum depth=8 cuts the
> memory requirement in half.
> 
> No matter which approach you take let me know how it goes so I'll be
> able to make recommendations to other RMagick users who are working with
> very large images. Thanks!

Yes, my Ruby program was taking up about 500 MB of memory (for an
8700x6000-pixel image).  Memory's not a problem, though; all of our
machines have more than 2 gigabytes.

I'll report back tomorrow after I try the bit-shifting and 8-bit
ImageMagick approaches in place of the current array lookup.

Unit tests are really coming in handy on this type of application,
especially with the benchmark library.  It's awesome to make a change,
build up some sample data, and run automated tests and benchmarks
against it.
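
For what it's worth, the harness is tiny (a sketch; widen8to16 is
just a stand-in name for the conversion under test):

    require 'test/unit'

    def widen8to16(v)
      (v << 8) | v
    end

    class TestWiden8To16 < Test::Unit::TestCase
      def test_endpoints
        assert_equal 0,     widen8to16(0)
        assert_equal 65535, widen8to16(255)
      end
    end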