First, I want to say there is no such thing as perfect security.

You always have to balance security with usability.

> -----Original Message-----
> From: Dan Sugalski [mailto:dan / sidhe.org]
> Sent: Sunday, January 06, 2002 11:23 PM
> To: ruby-talk ML
> Subject: [ruby-talk:30426] Re: snippet exchange (was: Re: Re: chomp for
> arrays?)
>
> [SNIP]
>
> That does the end-user no good, though. One of the ways to attack
> this sort
> of setup is to co-opt things such that the user never contacts your host.
> Another is to steal the key either from your system or from the developer
> and to upload a properly signed kit that's dangerous.

OK.  So if the user has an application on their PC that downloads a Gem from
a server and checks the integrity of that Gem (automatically, using SHA/PK),
the check will either succeed or fail...period.  DNS has nothing to do with
it.  It has to do with verifying that the files were not changed between
point A (the developer) and point B (the user).  If someone spoofs the server
and loads nasty Gems onto it, the user will get Gems that do not validate.
If they do validate because the private key(s) of valid developers were
stolen, then that is no different than hypothesizing what would happen if
some dink in your local CompUSA replaced the Red Hat CDs with Trojan-horse-
infested versions and you picked one up and installed it.  If physical
security is thwarted, you are screwed.
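The check described above can be sketched in a few lines of Ruby using the
stdlib OpenSSL bindings.  The method name, file paths, and use of SHA-256
are illustrative assumptions, not an existing Gem API:

```ruby
require 'openssl'

# Sketch: verify a downloaded Gem against a detached signature using
# a developer public key the user already trusts.  Succeeds or fails,
# period -- no dependence on DNS or on which server delivered the bytes.
def gem_valid?(gem_path, sig_path, pubkey_pem)
  pubkey = OpenSSL::PKey::RSA.new(pubkey_pem)
  data   = File.binread(gem_path)
  sig    = File.binread(sig_path)
  # true only if sig matches the SHA-256 digest of data under pubkey
  pubkey.verify(OpenSSL::Digest.new('SHA256'), sig, data)
end
```

Any tampering between developer and user flips the result to false, which
is the whole point: the trust decision rests on the key, not the transport.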

>
> >   Now, the method by which I download the
> >public keys of developers I "trust" is definately an issue but there are
> >emerging systems that are being developed to [help] solve this
> problem in a
> >distributed (rather than centralized) fashion that fall under the name
> >"reputation networks".
>
> While there might ultimately be some way to do this, as the
> network stands
> now the methods are insufficient. Unverified DNS is a huge danger here.

Why are you fixated on DNS?  I realize that DNS is not secure.  I am
speaking of creating cryptographically signed pieces of content.  Where the
content resides is not an issue.  The content is self-validating if the
public key is known to the receiver.  Now, this does not prevent a denial-
of-service attack (and you cannot prevent that on today's Internet), but it
does prevent content from being changed and nasty code from being introduced.
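The "self-validating content" idea boils down to a sign/verify pair over the
raw bytes.  A minimal Ruby sketch (method names are illustrative, and
SHA-256 is an assumed digest choice):

```ruby
require 'openssl'

# Developer side: sign the bytes once.  The signature travels with the
# content, so any mirror -- or a spoofed host -- can serve it; tampering
# just makes verification fail at the receiver.
def sign_content(data, private_key_pem)
  OpenSSL::PKey::RSA.new(private_key_pem)
         .sign(OpenSSL::Digest.new('SHA256'), data)
end

# Receiver side: all that matters is possession of the right public key.
def content_authentic?(data, signature, public_key_pem)
  OpenSSL::PKey::RSA.new(public_key_pem)
         .verify(OpenSSL::Digest.new('SHA256'), signature, data)
end
```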

>
> > > being trustworthy (they aren't), DNS being trustworthy (it
> > > isn't), that the
> > > signing entity is trustworthy (they aren't), and that the
> source you're
> > > fetching is safe to use sight unseen (it isn't).
> > >
> > > Someone could poison your DNS cache. The remote repository can be
> > > compromised.   The keyserver can be compromised. A proxy in
> the middle of
> > > the transaction can be compromised or poisoned. The person
> providing the
> > > code can be less trustworthy than you think they are.
> > >
> > > Yeah, these are all potential issues when installing any chunk of
> > > code from
> > > the net, but at least with a manual install you have a chance to check
> > > things out even if you choose not to. With automagic loading, you
> > > take all
> >
> >So, for every file you download, source or binary do you check
> it line for
> >line to verify that it does nothing wrong and has not been compromised?
>
> On production machines I manage? Generally yes, I do. It does limit the
> number of kits that get installed. If I've reason to believe that the
> remote host hasn't been compromised and

Whoa.  What OS do you run?  You check EVERY source line of EVERY OS,
library, and package you use?  As for believing in the integrity of a remote
host, that gets back to your very own argument about co-opting
IP/DNS/TCP/whatever.

>
> >Security is based more on perception than reality.
>
> Nope, that turns out not to be the case. Security's based on trust and
> trustworthiness. Anything outside reasonable physical control needs to be
> held to a higher level of trust, and there's a lot in the loop that's
> inherently untrustworthy.

Trust is based on perception ;-)  We trust what we perceive to be acceptable
to trust, from things we perceive to have earned our trust, but our
perceptions can be compromised.  What I am saying is that people feel secure
when they are convinced (through their perceptions) that they are secure.
But that does not mean, in reality, that they are secure.

>
> >But hey, if we want to go secure how about this:
> >
> >Start a central site/group to issue (physical) hardware
> cryptographic tokens
> >(for a fee $$) like the Dallas Semiconductor iButton
> >(http://www.ibutton.com/ibuttons/java.html) and have those
> hardware tokens
> >sign the Gems.  That way each would contain a x.509 certificate that was
> >signed by the central (known) authority (public key).  So, unless the
> >physical device was stolen (and the PIN known to the person who
> stole it) it
> >could not be used to sign code.
>
> Nope, not secure. Yes, the central server could have reasonable
> guarantees
> that the archives it has came from who it's said to come from.

The server guarantees nothing except that some data is transferred.  The
content (if digitally signed) is what validates where something came from.
The hardware token prevents the theft of a private key over the network
(because the key never leaves the token).  PIN activation on the hardware
token prevents physical theft and use without knowledge of the PIN.  That
would create a securely verifiable identity for whoever signed a piece of
content.
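The token-backed scheme above is a small X.509 chain check followed by a
signature check.  Here is a Ruby sketch with software keys standing in for
the iButton (an assumption purely for illustration; names are hypothetical):

```ruby
require 'openssl'

# Sketch: accept content only if (1) the developer's certificate was
# signed by the central authority, and (2) the content signature checks
# out under the certificate's public key.
def trusted_signature?(data, sig, dev_cert_pem, ca_cert_pem)
  dev_cert = OpenSSL::X509::Certificate.new(dev_cert_pem)
  ca_cert  = OpenSSL::X509::Certificate.new(ca_cert_pem)
  store = OpenSSL::X509::Store.new
  store.add_cert(ca_cert)
  # does the developer cert chain to the known central authority?
  return false unless store.verify(dev_cert)
  dev_cert.public_key.verify(OpenSSL::Digest.new('SHA256'), sig, data)
end
```

With the private key locked inside a PIN-protected token, step (2) can only
succeed for content the token's holder actually signed.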

>
> Also, for this to work the code potentially needs to be downloaded every
> time it's used. (Yes, there are cache options here, but you'd want
> per-user, or potentially per-process caches) That is a much larger window
> of vulnerability--if an attacker knew you did this, it'd be reasonably
> simple to watch and intercept attempts. One-shot installs have a much
> smaller window.

Why does code have to be downloaded every time?  I am really lost by this
statement.