Issue #8468 has been updated by headius (Charles Nutter).


Student (Nathan Zook) wrote:
> headius (Charles Nutter) wrote:
> Part of the issue here is the balance of pain.  You have the pain of the core team versus the pain of platform providers versus the pain of end developers versus the pain of the organizations that use the code in question.  We now have a Rails server botnet thanks to $SAFE = 0.  How much pain is that going to be in the world?

I'm not arguing that there shouldn't be any security system at all...I'm just arguing that the current coarse-grained, blacklisting system is too flawed to be the model we follow.

> Do you have specifics about the performance costs?

I have attempted to remove tainting checks in the past, and in some cases the performance of small operations improved by 10-15%. Of course, most operations that are not creating new objects or mutating existing ones don't hit this check, but it's a significant cost when it is required.

JRuby has been systematically removing our broken implementation of $SAFE level checks as well, and it's always an improvement in performance.

> > * It provides a very coarse-grained security, where many secured features are only secured at levels that prevent most applications from working at all (due to other secured features being needed).
> 
> I have a real hard time with your vagueness here.  All SAFE=1 does is prevent tainted data from being executed, for a broad definition of executed.  Unless you are doing something like tryruby.org, there is no reason at all that this should ever be a problem.  If you are, then of course, you are going to need a more robust security model than SAFE=1 can provide.

The problem is that you have to really, really trust that strings are not getting untainted incorrectly, and that all possible paths for user strings are tainting those strings. It's a bad model, and one bad apple spoils the whole thing.

> I don't know how the implementation is done, but it seems to me that internally, it is all about keeping up with the taint bit, and that it is only at the edges (I/O & eval) that $SAFE=1 comes into play.

Yes. Keeping up with the taint bit in some thousands of lines of code that must properly propagate it, with very sparse tests. Bad model.
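To make the fragility concrete, here is a toy simulation of a taint bit (this is not Ruby's real, since-removed taint API; the `TString` wrapper and its methods are invented for illustration). The point is that every operation deriving a new value must remember to copy the bit, and forgetting it in just one place silently "launders" untrusted data:

```ruby
# Toy taint-bit simulation. Each derived value must propagate the bit;
# one forgotten propagation path and the data looks trusted again.
class TString
  attr_reader :str
  attr_accessor :tainted

  def initialize(str, tainted: false)
    @str = str
    @tainted = tainted
  end

  # Correctly propagating operation: result stays tainted.
  def +(other)
    TString.new(@str + other.str, tainted: @tainted || other.tainted)
  end

  # Buggy operation: the author forgot to propagate the taint bit.
  def upcase
    TString.new(@str.upcase) # tainted: defaults to false -- bug!
  end
end

user_input = TString.new("rm -rf /", tainted: true)
combined   = user_input + TString.new(" --force")
laundered  = user_input.upcase

combined.tainted   # => true  (propagated)
laundered.tainted  # => false (silently lost -- now looks trusted)
```

In MRI this propagation logic is spread across thousands of C-level string and array operations, which is exactly why one missed path undermines the whole scheme.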

Security should not be a characteristic of objects in the system but of execution contexts. A given thread should either be able to evaluate new code or not. Allowing applications to get around that hard limitation by gaming tainting is just asking for trouble. Enable or disable specific privileges, and be done with it. NEVER trust any object and allow it to skip security, ever.
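A minimal sketch of what context-based security could look like (all of these names -- `Sandbox`, `with_permissions`, `guarded_eval` -- are hypothetical, not any existing Ruby API): permissions belong to the executing thread, everything is denied unless explicitly granted, and a guarded operation checks the context rather than the provenance of its arguments.

```ruby
# Hypothetical context-based security: a thread-local whitelist of
# permissions. Objects carry no security state at all.
class PermissionDenied < StandardError; end

module Sandbox
  def self.with_permissions(*perms)
    old = Thread.current[:perms]
    Thread.current[:perms] = perms
    yield
  ensure
    Thread.current[:perms] = old
  end

  def self.check!(perm)
    perms = Thread.current[:perms] || []
    raise PermissionDenied, "#{perm} not granted" unless perms.include?(perm)
  end
end

# The guarded operation asks "may this context eval?", never
# "is this string tainted?".
def guarded_eval(code)
  Sandbox.check!(:eval)
  eval(code)
end

Sandbox.with_permissions(:eval) { guarded_eval("1 + 1") }  # => 2

begin
  Sandbox.with_permissions(:filesystem) { guarded_eval("1 + 1") }
rescue PermissionDenied => e
  e.message  # => "eval not granted"
end
```

Note there is no way to "bless" an individual string past the check: the only way to evaluate code is to run in a context that was explicitly granted `:eval`.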

> > The security model provided on the JVM or on operating systems with access control lists are both better options. If you run with security on, everything is forbidden; you must explicitly turn *on* the permissions you want and whitelist those capabilities. Those permissions are fine-grained, allowing you to disable only code evaluation or filesystem access or dynamic library loading, rather than having to choose from four pre-determined blacklists.
> 
> NOW we have an alternative suggestion.  But I'm not thrilled with what you are suggesting.  Perl's safe mode is very close to our $SAFE=1, and they make use of it.  We don't.  Now you suggest requiring end developers, who can't be bothered to worry about which of 5 options to use, to make a dozen or so correct fine-grained choices on things they might not understand at all?
> 
> In real life, this looks something like the following:  Write code.  See code fail.  See the security mechanism.  Disable the security mechanism.  Repeat.  This mentality regarding SAFE is already here.  Given the response to the January bug, I don't expect things to get better by making devs work more to achieve the result.

Although I didn't write it up here, I think it would be simple to provide the existing SAFE levels as pre-defined sets of permissions on e.g. the JVM. The awkward part would be mapping the safe levels' blacklists onto a security policy's whitelists, but it's doable. So we could ship a set of predefined, whitelist-based SAFE-level policies (with explicit configurations), and users could mix and match as desired. You get your big red buttons, but many (most?) users will customize appropriately to their needs.
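As a rough illustration of that mapping (the permission names and the level-to-permission assignments below are invented for the sketch, not taken from MRI's actual SAFE semantics): each legacy level becomes a predefined whitelist, and a custom whitelist is just another set.

```ruby
# Hypothetical re-expression of blacklist-style SAFE levels as
# predefined *whitelists* of fine-grained permissions.
ALL_PERMS = %i[eval require filesystem network exec untrusted_input].freeze

# Roughly: higher SAFE level => fewer remaining permissions.
# The exact mapping here is illustrative only.
SAFE_PRESETS = {
  0 => ALL_PERMS,                            # everything allowed
  1 => ALL_PERMS - %i[untrusted_input],      # no acting on user data
  2 => ALL_PERMS - %i[untrusted_input exec], # ...plus no process exec
  3 => %i[eval require].freeze               # nearly locked down
}.freeze

def permissions_for(level: nil, allow: nil)
  allow || SAFE_PRESETS.fetch(level)
end

permissions_for(level: 1)                       # big red button: a preset
permissions_for(allow: %i[filesystem network])  # custom whitelist
```

The presets preserve the "one flag and done" experience, while the `allow:` path gives fine-grained control to users who want it.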

> > Regarding the Rails exploit...SAFE=1 may or may not have helped, but the real problem was allowing arbitrary code to be embedded and executed from a *data* format in the first place.
> 
> I would say that this is *exactly* what SAFE=1 is supposed to prevent.  Those strings are tainted, and under SAFE=1, you can't eval tainted strings.
> 
> The fact that Psych calls []= on objects whenever the user-supplied data tells it to is a problem in its own right, but even that would not have been a problem under SAFE=1.

Only if tainting is propagating correctly everywhere. Again, implicitly trusting "blessed" user-mutable objects/data is always wrong.

> The symbol exploit probably would still be there.  My Symbol[] proposal is designed to allow an easy fix for that one.

The symbol exploit just needs symbols to be GCable. I've wanted to do this for a while in JRuby, so I think we'll just fix it...and I believe MRI is probably going to fix it soon too.
----------------------------------------
Feature #8468: Remove $SAFE
https://bugs.ruby-lang.org/issues/8468#change-39652

Author: shugo (Shugo Maeda)
Status: Feedback
Priority: Normal
Assignee: shugo (Shugo Maeda)
Category: core
Target version: current: 2.1.0


Yesterday, at GitHub Tokyo drinkup (thanks, GitHub!), Matz agreed to remove the $SAFE == 4 feature from Ruby 2.1.
Shibata-san, a developer of tDiary, which is the only application using $SAFE == 4, also agreed to remove it, so today is a good day to say goodbye to $SAFE (at least level 4).

Furthermore, I'm wondering whether $SAFE should be removed entirely, or not.
Is there anyone using $SAFE?


-- 
http://bugs.ruby-lang.org/