Martin DeMello wrote:
> James Britt <jamesUNDERBARb / neurogami.com> wrote:
> 
>>2.  If a predictable hashing system is used that assures each source URL 
>>maps to only one Ruby URL, and that algorithm is published, then people 
>>can manually decode Ruby URLs if need be (should, say, the site go 
>>away).  For example, if you see this:
>>   http://rubyurl.com/2OJCU
>>
>>you should be able to reverse-engineer it to this
>>   http://www.ruby-doc.org/
> 
> 
> This comes down to an issue of how compressible URLs are. I think most
> (if not all) of the URL-shortening sites use the fact that the URLs
> people submit are sparse in the space of all URLs, and just keep
> assigning them arbitrary generated symbols; I don't think an
> algorithmically reversible scheme would be possible when you consider
> some of the huge server-state-carrying URLs that webapps generate.

Quite true; this occurred to me after I posed that question.  As nice as 
reversibility would be, it doesn't seem feasible: a deterministic, 
reversible encoding can't match the size reduction you get by handing 
out arbitrary short keys to URLs as they come in.