(Apologies if the quotes don't come out right in plain text -- I'm using
both Apple Mail and GMail and they're playing crazy HTML games with my
draft.)


>> I guess this comes down to idempotency. I expect that if I do
>> something twice in a row I will get the same result. Randomizing by
>> default breaks this expectation. It's astonishing, therefore bad, no
>> matter how good from a theoretical standpoint, and especially
>> astonishing when people have 10+ years of xUnit and its heirs building
>> these expectations.
>
> Idempotency is a red-herring. There is nothing about the xunit
> family/philosophy of tools (or any other test tool that I have used or
> studied--except rspec) that suggests that test order must be run in the
> order defined (or must be run in any order at all). Just look at the new
> tools coming out that distribute and multithread/process your tests and you
> can see that right there, the notion has to be thrown out the window by
> design.

That's a fair point. The idempotency I was referring to was that running
"rake test" twice on a failing suite gets different results -- if not
different failures, then the same failures in a different order.

> As for your astonishment, I thought it was pretty well addressed in the
> first line of my reply: "Really? I think preventing test order dependency
> has a very practical effect". If you're still astonished after that, then
> you're probably misusing the word.


I'm using it in its technical sense:
http://en.wikipedia.org/wiki/Principle_of_least_astonishment

And I stand by what I wrote: if your tests are all passing, and they're well
isolated, then randomizing them has no practical effect. It's just shuffling
a deck full of aces.

> At this point I'm going to cut much of your reply and everything I've
> written so far in response and cut to the chase:


And I'm not even disagreeing with your observation! I agree that there
should be a randomizing mode, and that people should run it fairly
often. Just not all the time and not without a config or command-line
option to turn it off.


Apparently this is the crux of our disagreement:


> I __do__ think that people should randomize their tests a MAJORITY of the
> time and turn it off TEMPORARILY when they need to sort out an issue. If it
> wasn't random by default, it wouldn't happen at all.


This is a noble position, as I said before. You're the self-appointed
isolation vigilante, crusading against a problem you abhor. But I've rarely
encountered it. I feel that my tests don't need randomization, and the extra
output clutters my console (*), and the shuffling cramps my debugging style,
so I want it off unless I ask for it. If you're Batman, I feel like I'm the
Lorax. I speak for the trees whose pristine consoles are being polluted, but
who haven't spoken out. (I haven't really heard a chorus of protestors in
favor of randomization either, fwiw.)

You're the library author, so you have the privilege of deciding what mode
is the default. I'm hoping to convince you of a few things, but if I don't,
I won't take it personally.

Sounds like we're approaching a compromise, though: an option for me,
defaulting to off for you. (And an option != monkey patch -- it's a clear
API, like a named switch on the command line and/or a value on some
Minitest object, e.g. "Minitest::Config.randomize = false".)
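
Something like this is all I'm picturing, shape-wise -- a hypothetical
sketch, not a claim about real Minitest API or names:

    # hypothetical sketch -- just the shape of a switch, not real Minitest code
    module Minitest
      module Config
        class << self
          attr_accessor :randomize
        end
        self.randomize = true   # or whatever default the library author picks
      end
    end

    # and then wherever the runner shuffles, something like:
    #   test_methods.shuffle! if Minitest::Config.randomize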

I'd also be happy with just a verbosity setting, maybe with several levels
like you suggest.

> 2) use --seed when you want the order to be fixed, via TESTOPTS if you're
> using rake. And that'd be your command line option...

TESTOPTS. Roger that. Never used it before. Maybe the Minitest README should
say something about that when it talks about --seed. (Oh, looks like it
doesn't talk about --seed either.)
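
For my own notes, the usage would look roughly like this (standard
Rake::TestTask; the seed value and test file name are made up):

    # Rakefile -- plain Rake::TestTask; it passes TESTOPTS through to the test run
    require 'rake/testtask'

    Rake::TestTask.new(:test) do |t|
      t.pattern = 'test/**/*_test.rb'
    end

    # then, to pin the order from the command line:
    #   rake test TESTOPTS="--seed 1234"
    # or, skipping rake entirely:
    #   ruby test/some_test.rb --seed 1234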


> If you want a third option available, feel free to propose it and I'll
> gladly consider it.


Had a weird thought while doing the dishes... what if you write the seed
out somewhere persistent, like .minitest_seed, then erase it after the
run, but only if the run was successful? Then when a run starts, if
.minitest_seed exists, it uses that seed (and says so) instead of rolling
a new one. That way you don't have to print anything for successful runs,
the user doesn't have to remember anything, and idempotency is preserved:
if it fails once, it'll fail the next time in exactly the same way, and
it'll keep failing consistently until you fix the problem. It also works
for C hackers, since a crash means the cached seed won't get erased.
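
In code, I'm picturing roughly this -- just a sketch of the idea, wrapping
the suite from the outside since I don't know Minitest's internals; the
test file path is made up:

    # sketch of the .minitest_seed idea (outer wrapper, not Minitest internals)
    SEED_FILE = '.minitest_seed'

    seed = if File.exist?(SEED_FILE)
             File.read(SEED_FILE).to_i   # reuse the seed from the last failing run
           else
             rand(0xFFFF)                # otherwise roll a fresh one
           end
    File.write(SEED_FILE, seed.to_s)     # write it up front, so a crash leaves it behind

    passed = system('ruby', '-Itest', 'test/some_test.rb', '--seed', seed.to_s)

    File.delete(SEED_FILE) if passed     # only a green run clears the cached seed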

(And hey, also, for Aaron's sake, can't you trap SIGSEGV and print the seed
then? Not a rhetorical question, since I haven't done any C+Ruby stuff and I
know signals are sometimes flaky.)

Since --seed also freezes any randomization inside the tests themselves, I
think there should be separate options for all three (--seed, --randomize,
and --verbose).
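
To sketch what I mean -- hypothetical, with --randomize being the made-up
flag (--seed and --verbose already exist):

    # hypothetical option parsing, keeping the three concerns separate
    require 'optparse'

    options = { :randomize => true }
    OptionParser.new do |opts|
      opts.on('--seed SEED', Integer, 'Run in the fixed order given by SEED') do |s|
        options[:seed] = s
      end
      opts.on('--[no-]randomize', 'Shuffle test order (on by default)') do |r|
        options[:randomize] = r
      end
      opts.on('-v', '--verbose', 'Print the seed and extra per-test output') do
        options[:verbose] = true
      end
    end.parse!(ARGV)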

> P.S. I'm still mulling over Steve Klabnik's suggestion that the output be
> sorted. I think it could be very confusing when you do have test dependency
> errors but that there might be some way to mitigate the confusion. I'd like
> to hear what you think about his suggestion.


I like it. It's pretty weird, though; it's a very pleasant dream, and I'm
not sure it will survive in the cold light of day.

 - A

(*) My console is already way cluttered even with the minimum verbosity --
my collaborator Steve wrote some code that runs each of our tests in its
own VM process, to ensure isolation of dependencies and other stuff, so I
get a big long scroll of test runs, each of which is now 2 lines longer
because of "test run output" cruft. git clone the "wrong" project and run
"rake rvm:test" to see what I mean. Every line of output I save is
multiplied by (N tests) x (M Ruby versions). Since it's slow, I only run
it before checkin.
