James Britt <jamesUNDERBARb / neurogami.com> wrote:
> 
> 2.  If a predictable hashing system is used that assures each source URL 
> maps to only one Ruby URL, and that algorithm is published, then people 
> can manually decode Ruby URLs if need be (should, say, the site go 
> away).  For example, if you see this:
>    http://rubyurl.com/2OJCU
> 
> you should be able to reverse-engineer it to this:
>    http://www.ruby-doc.org/

This comes down to how compressible URLs are. I think most (if not
all) of the URL-shortening sites rely on the fact that the URLs people
submit are sparse in the space of all possible URLs, and just keep
assigning them arbitrary generated symbols, so the mapping only exists
in the site's own lookup table; I don't think an algorithmically
reversible scheme is feasible once you consider some of the huge
server-state-carrying URLs that webapps generate.
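
To make that concrete, here's a minimal sketch of the kind of
table-based scheme I mean (not rubyurl's actual code, and the host
name is made up): the short code is just a counter rendered in base
36, so nothing about the long URL can be recovered without the table.

class Shortener
  def initialize
    @table   = {}   # short code => long URL
    @next_id = 0
  end

  # Hand out the next code: a counter in base 36 keeps codes short,
  # but they carry no information about the URL itself.
  def shorten(url)
    code = @next_id.to_s(36)
    @next_id += 1
    @table[code] = url
    "http://short.example/#{code}"
  end

  def expand(code)
    @table[code]
  end
end

s = Shortener.new
puts s.shorten("http://www.ruby-doc.org/")   # => http://short.example/0
puts s.expand("0")                           # => http://www.ruby-doc.org/
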
As a quick experiment, I catted this example from upthread:

http://www.mapquest.com/maps/map.adp?ovi=1&mqmap.x=300&mqmap.y=75&map...
z8OOUkZWYe7NRH6ldDN96YFTIUmSH3Q6OzE5XVqcuc5zb%252fY5wy1MZwTnT2pu%252bNMj
OjsHjvNlygTRMzqazPStrN%252f1YzA0oWEWLwkHdhVHeG9sG6cMrfXNJKHY6fML4o6Nb0Se
Qm75ET9jAjKelrmqBCNta%252bsKC9n8jslz%252fo188N4g3BvAJYuzx8J8r%252f1fPFWk
PYg%252bT9Su5KoQ9YpNSj%252bmo0h0aEK%252bofj3f6vCP 

into a file and tried running some standard compression routines on
it; they didn't shrink it by much.
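
If anyone wants to repeat that from inside Ruby rather than with
command-line tools, something along these lines (the filename is just
wherever you saved the URL) shows the same thing with zlib:

require 'zlib'

url        = File.read("long_url.txt")   # the pasted mapquest URL
compressed = Zlib::Deflate.deflate(url, Zlib::BEST_COMPRESSION)

printf "original:   %d bytes\n", url.length
printf "compressed: %d bytes\n", compressed.length

And even if deflate did win you something, the output would then have
to be re-encoded into URL-safe characters, which gives back a good
chunk of whatever was saved.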

martin