At 02:34 PM 1/7/2002 +0900, Rich Kilmer wrote:
>First, I want to say there is no such thing as perfect security.
>
>You always have to balance security with usability.

And the more usable, or at least convenient, something is, the less secure 
it generally is.

> > From: Dan Sugalski [mailto:dan / sidhe.org]
> >
> > [SNIP]
> >
> > That does the end-user no good, though. One of the ways to attack
> > this sort
> > of setup is to co-opt things such that the user never contacts your host.
> > Another is to steal the key either from your system or from the developer
> > and to upload a properly signed kit that's dangerous.
>
>OK.  So if the user has an application on their PC that downloads a Gem from
>a server and checks for the integrity of that Gem (automatically using
>SHA/PK) that will either succeed or fail...period.

Right. Success here means nothing, unfortunately. It guarantees that the 
archive was successfully received, which is good for other reasons (IP 
packet checksums are vulnerable in several ways, and TCP does no end-to-end 
stream checksumming), but it says nothing about the origins of the file 
you receive.
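To make the distinction concrete, here's a minimal Ruby sketch (filenames 
and contents hypothetical): a digest check confirms the bytes survived 
transit, but it validates whatever the serving host advertised, whether or 
not that host was ever the developer's.

```ruby
require 'digest/sha1'

# Whatever host we actually reached -- legitimate or spoofed -- serves
# both the archive and its advertised checksum.
served_archive = "contents of somegem-1.0.gem"   # hypothetical
served_digest  = Digest::SHA1.hexdigest(served_archive)

# The client downloads both and checks integrity.
downloaded = served_archive.dup
if Digest::SHA1.hexdigest(downloaded) == served_digest
  puts "checksum OK"   # passes even if the host was never the developer's
end
```

The check can only ever tie the downloaded bytes to the host you reached, 
not to the developer who originally built the archive.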

>DNS has nothing to do
>with it.

DNS is an easy point of attack.

>It has to do with verifying that the files were not changed from
>point A (developer) to point B (user).

But it doesn't. It guarantees that the file wasn't changed from point A 
(the machine you got it from) to point B (the user). There aren't any 
guarantees that point A was the original developer.

>If someone spoofs the server and
>loads nasty Gems on it, they (the user) will get Gems that do not validate.

That's not guaranteed. Where does the user's program get the keys to 
validate the Gems from?

>If they do validate because the private key(s) of valid developers are
>stolen, then that is no different than hypothesizing what would happen if
>some dink in your local CompUSA replaced the Red Hat CDs with Trojan horse
>infested versions and you go pick one up and install.  If physical security
>is thwarted, you are screwed.

Electronic security is a lot easier to breach than physical security. The 
developer's machine, the server, and the connection between them are all 
vulnerable to compromise in some form or other.

> > >   Now, the method by which I download the
> > >public keys of developers I "trust" is definitely an issue but there are
> > >emerging systems that are being developed to [help] solve this
> > problem in a
> > >distributed (rather than centralized) fashion that fall under the name
> > >"reputation networks".
> >
> > While there might ultimately be some way to do this, as the
> > network stands
> > now the methods are insufficient. Unverified DNS is a huge danger here.
>
>Why are you fixated on DNS?  I realize that DNS is not secure.  I am
>speaking of creating cryptographically signed pieces of content.  Where the
>content resides is not an issue.  The content is self-validating if the
>public key is known by the receiver.

That's the point. The user has to fetch the key from somewhere. That 
somewhere is off-machine, and thus as vulnerable to compromise as the 
archive itself. DNS's insecurity makes this sort of attack really easy, but 
there are plenty of other ways to mount it as well.
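A sketch of the problem using Ruby's OpenSSL bindings (all names and 
contents hypothetical): if the attacker controls the channel the public key 
arrives over, a forged gem verifies perfectly.

```ruby
require 'openssl'

# An attacker generates their own keypair and signs a malicious gem.
attacker_key = OpenSSL::PKey::RSA.new(2048)
evil_gem     = "malicious gem contents"            # hypothetical
signature    = attacker_key.sign(OpenSSL::Digest.new("SHA256"), evil_gem)

# If the same attacker also controls wherever the user fetches the
# "developer's" public key from (spoofed DNS, compromised keyserver,
# poisoned proxy), the user receives the attacker's public key instead:
fetched_key = OpenSSL::PKey::RSA.new(attacker_key.public_key.to_pem)

# ...and the forged gem validates flawlessly.
puts fetched_key.verify(OpenSSL::Digest.new("SHA256"), signature, evil_gem)  # true
```

The signature math is fine; the trust bootstrapping is the hole. The 
content is only self-validating if the receiver already holds the right 
public key, and getting it to them is exactly the problem being discussed.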

>Now, this does not prevent a denial of
>service attack (and you cannot prevent that in today's internet) but it does
>prevent content from being changed and nasty code introduced.

No, it doesn't, that's the point. Public key cryptography isn't as secure 
as might be hoped, and doesn't guarantee the sorts of things we'd like it to.

> > > > being trustworthy (they aren't), DNS being trustworthy (it
> > > > isn't), that the
> > > > signing entity is trustworthy (they aren't), and that the
> > source you're
> > > > fetching is safe to use sight unseen (it isn't).
> > > >
> > > > Someone could poison your DNS cache. The remote repository can be
> > > > compromised.   The keyserver can be compromised. A proxy in
> > the middle of
> > > > the transaction can be compromised or poisoned. The person
> > providing the
> > > > code can be less trustworthy than you think they are.
> > > >
> > > > Yeah, these are all potential issues when installing any chunk of
> > > > code from
> > > > the net, but at least with a manual install you have a chance to check
> > > > things out even if you choose not to. With automagic loading, you
> > > > take all
> > >
> > >So, for every file you download, source or binary do you check
> > it line for
> > >line to verify that it does nothing wrong and has not been compromised?
> >
> > On production machines I manage? Generally yes, I do. It does limit the
> > number of kits that get installed. If I've reason to believe that the
> > remote host hasn't been compromised and
>
>Whoa.  What OS do you run?

For production, VMS and Solaris mostly. There's an assumption of trust 
there, certainly--I don't look at all the source for them. (Just some, but 
only for fun)

>You check EVERY source line of EVERY OS, library
>and package you use? As for believing in the integrity of a remote host,
>that gets back to your very own argument about co-opting
>IP/DNS/TCP/whatever.

For packages I install off the net onto production machines, yes I do 
check. I read the install scripts, I scan the source, and I look at the 
data files. That's part of the job of administrating production systems. 
(One which I luckily don't do much at the moment)

And yes, the potential for co-opted DNS, IP snooping, and whatnot is 
something I keep in mind while doing it. As I said, the risks are 
significantly lower when downloading things once: I can generally validate 
the IP address of the remote host, the snoop/intercept likelihood is lower, 
and I can watch what the code does as it runs through its test suite on the 
test system.

> > >Security is based more on perception than reality.
> >
> > Nope, that turns out not to be the case. Security's based on trust and
> > trustworthiness. Anything outside reasonable physical control needs to be
> > held to a higher level of trust, and there's a lot in the loop that's
> > inherently untrustworthy.
>
>Trust is based on perception ;-)

No, it isn't. It's based on an assessment of risk, and that's an assessment 
that can be made without necessarily having to trust the other end. There 
is always risk, and there is never complete security.

>We trust what we perceive is acceptable to
>trust, from things that we perceive earn our trust but our perceptions can
>be compromised.  What I am saying is that people feel secure when they are
>convinced (through their perceptions) that they are secure.  But that does
>not, in reality, mean they are secure.

People's feelings of security have nothing to do with whether they are 
secure or not. This is definitely true. Perception isn't reality. (Neither 
is truth fiction) I'm not discussing how people feel. I'm discussing system 
security. That's a much more concrete thing.

> > >But hey, if we want to go secure how about this:
> > >
> > >Start a central site/group to issue (physical) hardware
> > cryptographic tokens
> > >(for a fee $$) like the Dallas Semiconductor iButton
> > >(http://www.ibutton.com/ibuttons/java.html) and have those
> > hardware tokens
> > >sign the Gems.  That way each would contain a x.509 certificate that was
> > >signed by the central (known) authority (public key).  So, unless the
> > >physical device was stolen (and the PIN known to the person who
> > stole it) it
> > >could not be used to sign code.
> >
> > Nope, not secure. Yes, the central server could have reasonable
> > guarantees
> > that the archives it has came from who it's said to come from.
>
>The server guarantees nothing but that some data is transferred.  The
>content (if digitally signed) is what validates who something came from.
>The hardware token prevents the theft of a private key over the network
>(because it never leaves the token).  PIN activation on the hardware token
>prevents physical theft and use without knowledge of the PIN.  That would
>create a secure identity of who a piece of content came from.

That still doesn't do the end user any good, since they have to fetch the 
validation key from somewhere. It's only good for a point-to-point system 
where both endpoints take part in the secure transaction. The end user 
doesn't have that.

> > Also, for this to work the code potentially needs to be downloaded every
> > time it's used. (Yes, there are cache options here, but you'd want
> > per-user, or potentially per-process caches) That is a much larger window
> > of vulnerability--if an attacker knew you did this, it'd be reasonably
> > simple to watch and intercept attempts. One-shot installs have a much
> > smaller window.
>
>Why does code have to be downloaded every time?  I am really lost in this
>statement.

I said 'potentially'. Where are you going to put it once you download it? 
Some local cache area. Caches are volatile--they get cleaned out for a 
variety of reasons. You do *not* want to install it in your ruby install 
tree, that's really unsafe. (And not just for malicious reasons) You have 
to throw it in some private sandbox somewhere, and sandboxes should get 
cleaned out at regular intervals. (Otherwise you open yourself up to other 
problems)
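As a sketch of that volatility (path, filename, and expiry policy all 
hypothetical): any routine sweep of the sandbox forces the next run back 
onto the network.

```ruby
require 'fileutils'
require 'tmpdir'

MAX_AGE = 7 * 24 * 3600   # hypothetical policy: expire cached gems weekly

cache = File.join(Dir.tmpdir, "gem_sandbox")      # hypothetical location
FileUtils.mkdir_p(cache)

gem_path = File.join(cache, "somegem-1.0.gem")
File.write(gem_path, "cached archive contents")

# Routine housekeeping: sweep anything older than the policy allows.
Dir.glob(File.join(cache, "*.gem")).each do |f|
  File.delete(f) if Time.now - File.mtime(f) > MAX_AGE
end

# If the sweep removed the gem (or anything else did -- tmp cleaners,
# disk pressure, a cautious admin), the next run must re-download it.
must_refetch = !File.exist?(gem_path)
```

So "download once" quietly becomes "download whenever the cache doesn't 
happen to have it", and each of those fetches is another window for the 
attacks described above.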

Besides the security issues, you're trusting that the code:

1) Works the way it used to every time you fetch it
2) Is available when you need it

The remote servers may not be up, making the code that uses remote modules 
fail. (And you'll need more than one server--if this scheme is used widely 
it will place a pretty heavy load on things. Talk to the people who run 
perl's CPAN archive if you want, and that's all one-shot access for things.)

The archives you get may be corrupt (developers will sign and upload 
corrupt archives--it happens).

Releases will break things. Bugs happen, and you can't guarantee a stable 
code base to test against this way.

*Upgrades* will break things. This has happened in the past--for example, 
the GD module for perl used to generate GIF images. (Hence the G part) One 
release they switched over to generating PNG images. (Courtesy of lawyers 
waving patents, but this isn't the place for that) If you fetched code 
dynamically, you'd find yourself with a program that's suddenly doing 
something much different from what you wanted.

People are sometimes obnoxious, and sometimes get rather aggressively odd 
ideas about what's good or not. This scheme basically gives other people, 
whom you can't verify, license to do what they want within very broad 
limits, at a time when you can't monitor what's happening. For a personal 
system that might be OK (though given the number of root/administrator 
hacks that need only unpriv'd user access I'd be wary even of that--heck, a 
quick "rm -rf ~" is bad enough), but for something in production use it 
adds a lot of risk. To judge whether it's an acceptable thing to do you 
*must* evaluate those risks--dismissing them won't help.

					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
dan / sidhe.org                         have teddy bears and even
                                      teddy bears get drunk