James Edward Gray II <james / grayproductions.net> writes:

> On Feb 15, 2007, at 1:45 PM, Daniel Berger wrote:
>
>> On Feb 15, 12:32 pm, Alex Young <a... / blackkettle.org> wrote:
>>> Daniel Berger wrote:
>>>> Hi all,
>>>
>>>> What's the general approach folks use for skipping tests?
>>>> Sometimes I have tests that I want to skip based on platform
>>>> (usually MS Windows). I saw the 'flunk' method, but that's
>>>> considered a failed test. I'm looking for something that treats a
>>>> skipped test as neither a success nor a failure.
>>>
>>> If you were to factor the platform-dependent tests out into their
>>> own module, which you could conditionally include into the test
>>> case, I think you'd get what you were after.
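
A minimal sketch of that idea (the module name, the test, and the
platform check here are made up for illustration) might look like:

  require 'test/unit'

  # Hypothetical module holding the Windows-only tests.
  module WindowsTests
    def test_alt_separator
      assert_equal("\\", File::ALT_SEPARATOR)
    end
  end

  class TC_Example < Test::Unit::TestCase
    # Mix the platform-specific tests in only on Windows; elsewhere
    # they are never defined, so they never run (or get counted).
    include WindowsTests if RUBY_PLATFORM =~ /mswin|mingw/

    def test_runs_everywhere
      assert(true)
    end
  end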
>>
>> It's not a bad idea, but that still wouldn't explicitly indicate to a
>> user that tests had been skipped - they would merely see fewer tests
>> run. Plus, it's more work and I'm lazy. :)
>
> I think it's a much better design though.
>
> As an example of my concern: what happens if your proposed skip()
> is called after a few assertions have already run in a test?
>
> James Edward Gray II
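
To make that concern concrete: a naive skip() that simply raises (a
sketch, not a real Test::Unit API) leaves any earlier assertions
already counted, and the raise itself gets reported as an error
rather than as a clean skip:

  require 'test/unit'

  class SkippedTest < StandardError; end

  class TC_Skipping < Test::Unit::TestCase
    # Naive skip: abort the rest of the test body by raising.
    def skip(msg = nil)
      raise SkippedTest, msg
    end

    def test_partial
      assert_equal(4, 2 + 2)  # already tallied as an assertion
      skip("stopping here")   # reported as an error (E), not a skip
      assert(false)           # never reached
    end
  end

A framework with real skip support would have to account for that.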

Output like this seems nice, and I've seen similar in other frameworks:

Started
.S.
Finished in 0.00001 seconds.
3 tests, 3 assertions, 0 failures, 0 errors, 1 skipped

Steve