On Aug 11, 2006, at 7:00 PM, Francis Cianfrocca wrote:

> I was so interested in this idea that I cobbled up a quick test on a
> workstation machine with a medium-bandwidth IO channel. 100,000 1K files
> brought the shared-memory filesystem to its knees almost immediately. On
> an oxide-based journalling filesystem, it seemed to do 10 to 20 thousand
> files a second pretty consistently. That's a decent but not huge
> fraction of the available channel bandwidth. Haven't tried a million
> files. I'm still interested though.

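(For what it's worth, a quick test along those lines is only a few lines
of Java -- purely an illustrative sketch, the "bench" directory, the class
name and the counts are made up here, not Francis's actual harness:)

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Illustrative sketch: write n 1K files into a single directory and
// report the sustained rate in files per second.
public class SmallFileBench {
    public static void main(String[] args) throws IOException {
        int n = 100000;                       // number of files to write
        byte[] block = new byte[1024];        // 1K payload per file
        new File("bench").mkdirs();           // hypothetical target directory
        long start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            FileOutputStream out = new FileOutputStream("bench/f" + i);
            out.write(block);
            out.close();
        }
        long elapsed = Math.max(System.currentTimeMillis() - start, 1);
        System.out.println((n * 1000L / elapsed) + " files/sec");
    }
}

Pointing it at tmpfs versus an ordinary journalled filesystem would give
the kind of comparison Francis describes.
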
In the Java thing we wrote we had hundreds of thousands of files in a
directory, nothing shared though. No problems until you did an 'ls' by
mistake :-) Backups are also a problem, and external fragmentation can
be too. I know we experimented with the kind of thing that I think Kirk
was suggesting. By building a hierarchy you can keep the number of
files and directories at any one level down, and this avoids the backup
(and 'ls') problems.
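
As a rough illustration of that kind of hierarchy (not the code from our
Java thing -- the class name and the two-level 256x256 split are just
assumptions for the sketch), hashing each key into a couple of directory
levels looks something like this:

import java.io.File;

// Illustrative sketch: hash each key into two directory levels of 256
// buckets each, so even a million files works out to roughly 15 entries
// per leaf directory.
public class HashedStore {
    private final File root;

    public HashedStore(File root) {
        this.root = root;
    }

    // Map a key like "order-123456" to something like <root>/a7/3f/order-123456
    public File locate(String key) {
        int h = key.hashCode();
        String level1 = String.format("%02x", (h >>> 8) & 0xff);
        String level2 = String.format("%02x", h & 0xff);
        File dir = new File(new File(root, level1), level2);
        dir.mkdirs();   // create the intermediate directories on demand
        return new File(dir, key);
    }

    public static void main(String[] args) {
        HashedStore store = new HashedStore(new File("/tmp/store"));  // hypothetical root
        System.out.println(store.locate("order-123456"));
    }
}

With 65,536 leaf buckets a million files averages out to about 15 entries
per directory, so an accidental 'ls' (or a backup pass through any one
directory) stays harmless.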

Cheers,
Bob

----
Bob Hutchison                  -- blogs at <http://www.recursive.ca/hutch/>
Recursive Design Inc.          -- <http://www.recursive.ca/>
Raconteur                      -- <http://www.raconteur.info/>
xampl for Ruby                 -- <http://rubyforge.org/projects/xampl/>