If you guys are talking about the same CDB format Bernstein created,
then I don't know that it would be a good match for this kind of task.

CDB has one major weakness: you can make updates in memory quickly, but
persisting them to disk requires time proportional to the total dataset
size. Note that's dataset size, not number of keys -- CDB requires you
to copy the entire old DB to a new file to persist changes.
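To make that cost concrete, here's a toy Python sketch of the update pattern (this is not the real cdb library or its API, just an illustration of the rebuild-everything behavior):

```python
import os
import tempfile

class CdbLikeStore:
    """Toy model of CDB's update pattern: edits are cheap in memory,
    but persisting rewrites the entire dataset to a new file."""

    def __init__(self, path):
        self.path = path
        self.data = {}  # in-memory working copy

    def set(self, key, value):
        # O(1): just an in-memory update
        self.data[key] = value

    def persist(self):
        # O(total dataset size): every record gets rewritten,
        # even if only one key changed -- analogous to rebuilding
        # the whole CDB file and renaming it into place.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            for k, v in self.data.items():
                f.write(f"{k}\t{v}\n")
        os.replace(tmp, self.path)  # atomic swap over the old file
```

The point is the persist() loop: no matter how small the change, the write cost scales with everything you're storing.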

It *is* ridiculously fast for lookups of mostly constant data (hence
the initial 'C'), but probably isn't a good match for any sort of
regularly-updated dataset.

I'd second the recommendation for BDB -- it's reasonably fast, very
scalable, and gets hammered on by all kinds of applications day in and
day out.