On Monday 13 July 2009 15:15:16 Pito Salas wrote:
> Here are three representations. Which one will be faster? Which one will
> be smaller?

Thousands of records, each with two fields is a very small amount of data.

With "Name" and "Age" as an example: a name like "Apu Nahasapeemapetilon" 
is 22 bytes, and if you set aside 4 bytes for the age, 100,000 records would 
take up roughly (22 + 4) * 100,000 = 2.6 million bytes, or about 2.6 
megabytes.  There will be some overhead as well, but even if you double that 
to around 5 megabytes, it's tiny by the standards of modern memory.
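The back-of-envelope estimate above can be checked in a couple of lines of
Ruby (the sample name and the 4-byte age figure are just the assumptions
from this thread, not measurements of real Ruby object sizes):

```ruby
# Rough size estimate: bytes per record times record count.
name_bytes = "Apu Nahasapeemapetilon".bytesize  # 22 bytes of ASCII
age_bytes  = 4                                  # assumed fixed-width age
records    = 100_000

total = (name_bytes + age_bytes) * records
puts "~#{total / 1_000_000.0} MB"  # => ~2.6 MB, before per-object overhead
```

Real Ruby objects carry more overhead than this, but the conclusion is the
same: the data set is small.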

Speed will probably also not be a major issue.  All you're doing is looking up 
a value stored in memory.

Rather than worry too much about whether to use structures, objects or hashes 
for speed, why not choose the data type that makes the rest of the program 
easiest to write and to read?  If you later decide it needs to be faster or 
smaller, you can adjust it, but unless you're running on an embedded system, 
it's unlikely you'll have to.
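For instance, a Struct keeps the code readable with almost no ceremony.  A 
minimal sketch (the Person name and sample records are made up for 
illustration):

```ruby
# Struct gives named, readable field access; swapping it for a Hash or a
# full class later is a small, local change.
Person = Struct.new(:name, :age)

people = [
  Person.new("Apu Nahasapeemapetilon", 42),  # hypothetical sample data
  Person.new("Pito Salas", 30),
]

# Index by name for constant-time lookup.
by_name = people.each_with_object({}) { |p, h| h[p.name] = p }
puts by_name["Pito Salas"].age  # => 30
```

Whichever you pick, the lookup code stays a one-liner, so you can defer the 
performance question until profiling says otherwise.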

Ben