Hugh Sasse Staff Elec Eng [mailto:hgs@dmu.ac.uk] wrote:

> On Thu, 7 Aug 2003, Nathaniel Talbott wrote:
>
> > Really? Can you tell me a bit more about that? Perhaps I 
> > can avoid SSL altogether.
> 
> It doesn't encrypt the message, but does a checksum with data 
> that is never transmitted.  Thus you can only forge the 
> checksum if you have that data, so you can trust it.

Ah... that isn't enough for me. I want information hiding as well.


> From the comments I wrote:
> 
> # A nonce is a word that is used only once (according to the Concise
> # Oxford Dictionary).  The purpose is that it is generated, and a
> # password is added to it, and the hash of the whole string is
> # generated.  Thus a hash is passed across the network so that the
> # password can be checked against this hash without having to send
> # the password across the network.  This is used in CRAM-MD5, see
> # RFC2195 and RFC2104.  CRAM == Challenge Response Authentication
> # Mechanism, MD5 is the message digest format. An Alternative to MD5
> # is SHA1.

I still don't quite understand... is the nonce generated somehow? If so, how
do both sides use the same nonce?
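
Answering my own question a bit: here's my guess at the shape of it, as a
toy Ruby sketch of my reading of RFC 2195 (not Hugh's code, and quite
possibly wrong). The trick seems to be that the server invents the nonce
and sends it across in the clear, so both sides end up holding the same
one:

  require 'openssl'

  # Server: make up a fresh, unpredictable challenge (the nonce) and
  # send it to the client in the clear - that's how both sides get it.
  nonce = "<#{rand(1_000_000)}.#{Time.now.to_i}@server.example>"

  # Client: compute HMAC-MD5 of the nonce, keyed with the shared
  # password (RFC 2104), and send back only the hex digest.
  password = 'secret'
  reply = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('MD5'),
                                  password, nonce)

  # Server: recompute with its own copy of the password and compare.
  # The password itself never crosses the network.
  expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('MD5'),
                                     password, nonce)
  puts(reply == expected ? 'authenticated' : 'rejected')

If that's right, then the nonce doesn't need to be secret at all - it just
needs to never be reused, or an eavesdropper could replay an old digest.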


> I'd rather not post my code, because of exposing weaknesses 
> in it. These will exist because I find cryptographic systems 
> full of subtleties, one of the reasons I have not got to 
> grips with writing SSH code.  This is slightly better, I 
> suppose, than thinking I can write such things and have them secure!

Security through obscurity, eh? ;-)

I can understand your sentiments. Actually, I was thinking about putting
together a 'locked-down' version of DRb, and submitting it for peer review.
As Bruce Schneier said (pardon the long quote), 

  "Security engineering is not like any other type of
  engineering. An engineer who's building something
  will spend all night to make it work.
  That's quintessentially what a good hack is. It
  works, it's functional. In a normal product, it's
  what it does that's impressive.

  "But security products are not useful because of
  what they do; they're useful precisely because of
  what they don't allow to happen. Security has
  nothing to do with functionality.

  "If you were to build a word processor and
  wanted to know if it printed, you could plug a
  printer in, push the print button, and see if a
  printed document came out. If you're building an
  encryption product, you can put a file in, watch
  it encrypt and decrypt. You know it works, but
  you have no idea if it's secure or not. And that's
  a big deal. What it means is that you can't tell if
  a product's secure simply by examining it, simply
  by running it through functional tests.

  "No amount of beta testing will find a security
  flaw. In many ways, security engineering is similar
  to safety engineering. But there is a difference.
  Safety engineering has to do with making
  something work in the presence of random or
  transient faults (i.e., Murphy's Law). Security
  programming involves making sure something
  works even in the presence of a malicious adversary
  who will make exactly the wrong thing
  fail at exactly the wrong time and do it again,
  and again, and again, and again to break the security.
  That's why I call it programming Satan's
  computer. You program a computer with the assumption
  that a malicious adversary intent on
  defeating the system is living inside the system.
  Security is supposed to provide some way to encapsulate
  him."

  from "Security in the Real World: How to Evaluate Security Technology"
  by Bruce Schneier
  http://www.counterpane.com/real-world-security.pdf

Which scares me a bit, since it means the software I write can work great
for my users and yet be totally insecure - and that insecurity won't be
discovered until either it's compromised or I find it myself.

So my basic strategy at this point is to assume that users' passwords are
insecure, and thus to carefully lock down my server-side interface so that
remote users can't do anything unsafe on the server. SSL is basically just
for information hiding, so that the data being passed can't be trivially
sniffed off the network. The data is the kind that shouldn't be shared,
but if it were somehow decrypted, it wouldn't expose any corporate secrets
or anything like that.
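
For that much, the drb/ssl module in the standard library looks like it
may be enough. An untested sketch (the names are made up); as I understand
it, drb/ssl will generate a self-signed certificate if you hand it just a
:SSLCertName:

  require 'drb'
  require 'drb/ssl'

  class Clock
    def time
      Time.now
    end
  end

  # No certificate supplied, so drb/ssl generates a self-signed one
  # from :SSLCertName. That keeps the traffic from being trivially
  # sniffed, though clients can't verify who they're talking to.
  config = { :SSLCertName => [['CN', 'my-drb-server']] }

  DRb.start_service('drbssl://localhost:9000', Clock.new, config)
  DRb.thread.join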

As for locking down the server-side interface, I've done a couple of things.
First of all, I'm running at $SAFE = 1, meaning that tainted strings can't
be used for insecure operations. Second, I've locked down DRb such that only
methods that I specifically allow may be called, as opposed to the normal
strategy of allowing any method except those you specifically deny (i.e.
making them private). I plan to keep an eye on it as I continue, and see if there's
anything else I need to do.
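
In case it's useful to anyone, the allow-list part boils down to a thin
front object. The names here are invented, but the shape is what I mean:
only the methods I define on the front exist remotely, so everything else
fails with NoMethodError before it can touch the real server object:

  require 'drb'

  $SAFE = 1  # strings arriving over the wire are tainted, and tainted
             # strings can't be used for insecure operations

  # Stand-in for the real server object (invented for this sketch):
  class RealServer
    def lookup(name)
      "record for #{name}"
    end
  end

  # The front defines only the calls I've decided to allow.
  class Front
    def initialize(server)
      @server = server
    end

    def lookup(name)
      raise SecurityError, 'bad name' unless name =~ /\A\w+\z/
      @server.lookup(name.untaint)  # validated, so safe to untaint
    end
  end

  DRb.start_service('druby://localhost:9000', Front.new(RealServer.new))
  DRb.thread.join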

One thing I'd love to see happen is an easy-to-use, easy-to-understand,
well-documented suite of Ruby security libraries and tools built around the
OpenSSL library, so that security is easier to set up and use. Currently
it's tempting to do something less than secure because it's quite complex to
get something secure going. For instance, I toyed with setting up a
certificate authority and distributing signed certificates to each client of
my app, but the documentation and tools for doing that are, at least to this
idiot, obscure, convoluted and complex. It'd be nice if Ruby emerged as a
solution for doing this simply and well. I know there are those who fear
making these things too easy, since some will be lulled into a false sense
of security, but I can't see it being worse than it is now, with the issue
all too often ignored.
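
Just to show the scale of the problem, here's roughly what the 'easy' end
of it looks like: a single self-signed certificate via Ruby's openssl
bindings, as best I can piece it together from the docs (so take it as a
sketch, not gospel):

  require 'openssl'

  key  = OpenSSL::PKey::RSA.new(2048)
  name = OpenSSL::X509::Name.parse('/CN=my-app-client')

  cert = OpenSSL::X509::Certificate.new
  cert.version    = 2                    # means X.509v3
  cert.serial     = 1
  cert.subject    = name
  cert.issuer     = name                 # self-signed: issuer == subject
  cert.public_key = key.public_key
  cert.not_before = Time.now
  cert.not_after  = Time.now + 365 * 24 * 60 * 60
  cert.sign(key, OpenSSL::Digest.new('SHA1'))

  File.open('client.pem', 'w') { |f| f << cert.to_pem }

And that's before a real CA enters the picture, signing per-client
certificates and having every client verify the chain - which is exactly
the part I gave up on.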

Anyhow, sorry for the long email. If anyone has any further ideas for how to
secure things, I'm all ears.


Nathaniel

<:((><