Martin.Bosslet / gmail.com wrote:
>  
> B Kelly wrote:
>>  
>>  The Debian maintainer _removed lines of code_ from the OpenSSL PRNG
>>  implementation. [1]
>>  
>>  This is hardly in the same category as tightening the defaults to exclude
>>  specific ciphers or protocol features already known to be weak or exploitable.
> 
> And it is. It doesn't matter if you remove something or if you think (!)
> you are improving the situation. The final patch we all agree on might be
> perfect. It might also be broken. The problem is that it is our custom patch.
> Things like this need to be dealt with in one spot and one spot only. It's
> taken for granted in every other aspect of software development that DRY is
> the way to go. Yet somehow, when it comes to security, it's suddenly supposed
> to be better for everyone to do their own thing?

I think we're talking at cross-purposes.  Your arguments focus on the ideal
outcome: an upstream fix in OpenSSL.  Nobody disagrees that would be ideal, and
presumably most of us are familiar with the downsides of maintaining downstream
patches.


>>  > It hurts even more that in such cases everyone will start pointing fingers,
>>  > asking: "Why didn't you stick to the library defaults???"
>>  
>>  As opposed to asking: "Why didn't you remove known weak ciphers and exploitable
>>  protocol features from the defaults when you were warned about them???"
>>  
> 
> Because it is a very bad idea to try to fix OpenSSL inside of Ruby!

The phrasing seems dramatic.  Are we fixing OpenSSL inside of Ruby?  Or are we
adopting a policy for Ruby that stipulates our defaults should favor security over
maximum compatibility?
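
To make the distinction concrete, here's a rough sketch of the kind of change I
have in mind on the Ruby side: tightening the parameters our own openssl bindings
apply to a context, without touching OpenSSL's code at all.  The particular
options and cipher string are purely illustrative (and assume an OpenSSL build
recent enough to define OP_NO_COMPRESSION), not a concrete proposal:

  require 'openssl'

  ctx = OpenSSL::SSL::SSLContext.new
  ctx.set_params(
    # Switch off protocol features already known to be problematic
    # (SSLv2, TLS compression); nothing in OpenSSL itself is modified.
    options: OpenSSL::SSL::OP_ALL |
             OpenSSL::SSL::OP_NO_SSLv2 |
             OpenSSL::SSL::OP_NO_COMPRESSION,
    # Exclude cipher classes already known to be weak; the exact
    # exclusion list is exactly what we'd be debating.
    ciphers: "DEFAULT:!aNULL:!eNULL:!EXPORT:!LOW:!MD5"
  )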


> We need to ask OpenSSL developers to fix it centrally, so that everyone can
> benefit from the change.

Ideally, yes.  Though one can imagine that OpenSSL will always prefer maximum
compatibility by default.  In that case, the Ruby policy might simply differ.


>>  > I would prefer a whitelisting approach instead of blacklisting as in the
>>  > patch that was proposed. Blacklisting is never airtight, as it doesn't protect
>>  > us from future shitty algorithms creeping in.
>>  
>>  I wonder.  In the blacklisting case, we're not required to make guesses about
>>  the future.  We're merely switching off already-known weak or exploitable
>>  features.
>>
>>  Whitelisting goes a step further, gambling that what we know today about the
>>  subset of defaults considered superior will continue to hold true down the road.
>>  
>>  It's not clear to me that's better than the more conservative step of simply
>>  blacklisting specific defaults already known to be problematic.
> 
> Whitelisting is the preferred approach. Because at every point, you know what
> you're dealing with [1].
[...]
> [1] http://www.testingsecurity.com/whitelists_vs_blacklists

Sorry, I didn't explain myself properly here.

I get whitelisting vs. blacklisting in principle.

The thrust of your argument seemed to be that we should lean toward trusting
upstream (OpenSSL) to get things right in general, and my reasoning was that a
blacklist is the approach most conservatively aligned with that stance.

However, if our position is instead that we don't fully trust upstream, and that
we will actively maintain our own whitelist, then sure: a whitelist sounds good.
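
For concreteness, here's how I picture the two approaches on the Ruby side.  The
cipher selections below are invented purely for illustration, not a recommendation:

  require 'openssl'

  # Blacklist style: start from OpenSSL's defaults and subtract what we
  # already know to be weak; anything upstream adds later is trusted.
  blacklist_ctx = OpenSSL::SSL::SSLContext.new
  blacklist_ctx.ciphers = "DEFAULT:!aNULL:!eNULL:!EXPORT:!LOW:!MD5"

  # Whitelist style: enumerate exactly the suites we accept; anything
  # upstream adds later is ignored until we explicitly list it.
  whitelist_ctx = OpenSSL::SSL::SSLContext.new
  whitelist_ctx.ciphers = %w[
    ECDHE-RSA-AES128-GCM-SHA256
    ECDHE-RSA-AES256-GCM-SHA384
    DHE-RSA-AES128-GCM-SHA256
  ].join(":")

Either way, the maintenance burden lands with whoever owns that list, which I
take to be your point about doing this upstream.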


Regards,

Bill