On Wed, 1 Nov 2006 ara.t.howard / noaa.gov wrote:

> i'm not quite clear why you wouldn't just set up a rails site and crawl it to
> generate the static site?  since speed wouldn't be an issue on the rails end
> you could just use webrick and an sqlite db to keep things simple and
> dependency free.  this gives you a huge toolset to work from and a mailing
> list to ask questions on - the only custom work you'd need to do is write a
> dedicated crawler script, and even that might turn out to be nothing but a
> single wget command if you planned your site carefully.
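
For what it's worth, that crawl really can come down to a one-liner.  Something
roughly like this would do it (the port is just a placeholder for wherever
webrick is listening):

   wget --mirror --convert-links --page-requisites http://localhost:3000/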

You can do that with a relatively new IOWA feature, too (introduced in 
the 0.99.2.x development series).

Set docroot_caching=true in the config and IOWA will write the generated 
content into the docroot.  On subsequent requests, depending on your 
webserver config, that file will either be served as a static file by the 
webserver itself or served statically by the IOWA handler.  If you need to 
regenerate the static files, just delete them; the next request for them 
will be processed dynamically and the result saved again.
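
Roughly, the whole cycle looks like this (the path here is just an 
illustration; where a given page lands under the docroot depends on its url):

   # in the config:
   docroot_caching=true

   # the first request for /some/page renders dynamically and writes the
   # result under the docroot; later requests get the static copy

   # to force a regenerate, remove the cached file:
   rm docroot/some/page.html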

Right now this is an all-or-nothing feature, but I have plans to let you 
specify that only certain urls or regions of a site are cached in this way.


Kirk Haines