Eric Schwartz wrote:

> I know I should have written my tests first, but I didn't, so now I'm
> trying to do them after the fact.

The best way to do this (in my exalted opinion) is to point the tests at an
empty project, use the testless project as a reference, and write tests that
force you to copy tiny bits of code from the testless project into the empty
project.

> I'm running into a few style
> questions, though, not being used to the Test::Unit style of testing:
>
> 1) I have several test cases that do multiple assert_* calls.  Is that
>    a Good Thing, a Bad Thing, or not something that's even worth
>    worrying about?  Usually it's things like "Was the file created?
>    Does it contain data of the right format?  Is the data correct?"

Dave Astels (who I suspect wrote a book about TDD) and I pretended to have a
vicious fight over this subject.

He claimed to only do one (1) assert per test case. I made him admit this is
a goal, not a rule.

If you try to approach that goal, you will maximize re-use and encapsulation
of your tests' setup code. It grows into resplendent fixtures and resources
in the test case.
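To make that concrete, here's a minimal sketch (mine, not from the thread) of what approaching one-assert-per-test looks like in Test::Unit: the shared construction migrates into setup, and each test makes exactly one claim. The ReportTest class and its data are invented for illustration.

```ruby
require 'test/unit'

# Shared, expensive construction lives in setup; each test then
# needs only a single assertion about the object it was handed.
class ReportTest < Test::Unit::TestCase
  def setup
    # Imagine this is the costly part you'd otherwise repeat per test.
    @report = { 'format' => 'csv', 'rows' => [[1, 2], [3, 4]] }
  end

  def test_format_is_csv
    assert_equal 'csv', @report['format']
  end

  def test_row_count
    assert_equal 2, @report['rows'].size
  end
end
```

Each test name now documents one behavior, and a failure points at exactly one claim.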

I test GUIs, whose toolkits optimize for displaying a window I never give a
chance to display. So the toolkit wastes a lot of cycles getting ready to
display, and I throw all those cycles away. Hence, after every setup and
test call, I pack in as many assertions as I can think of.

I defer to Astels' style as (potentially) peer-reviewed, but I still think
this topic awaits a more subtle metric or rule-of-thumb. I suggested 3
assertions, to provide triangulation within R3 space.

>    In one case, however, the test case is "run a command remotely that
>    logs to yet another remote database, and verify the command was
>    run, output was generated, logged to the right database with the
>    right timestamp, from the right machine, etc., etc., etc."

Yup. Don't run that one over and over again at test time.

If your tests get slower than the traditional time a UI event should take,
split the folder they run in, and keep going. Put another way, the time
tests take to run is a good metric for the number of source files you
should keep in a folder.
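One hedged way to do that split (my sketch; the test/fast and test/slow folder names and the SLOW variable are invented): keep the slow remote-command tests in their own folder, load them only on demand, and run the fast folder constantly.

```ruby
# Collect test files to load: the fast suite always, the slow
# suite only when SLOW=1 is set in the environment.
def test_files(slow = ENV['SLOW'])
  files = Dir.glob('test/fast/**/*_test.rb')
  files += Dir.glob('test/slow/**/*_test.rb') if slow
  files
end

test_files.each { |f| require File.expand_path(f) }
```

Run `ruby runner.rb` on every change, and `SLOW=1 ruby runner.rb` before checking in.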

> 2) How do people generally feel about interactivity in test cases?  I
>    have a method that should send email to a random address.  Since
>    I'd like everybody on my team to be able to run my tests, I can't
>    use a specific address, as that address might not (probably won't)
>    exist on their machine, and depending on which OS they're running,
>    the mail spool dirs will be in a different location.

I hope nobody's as tired of the following anecdote as I am of telling it:

The first project I ran full-bore test-first, it sent faxes. So I got the
numbers of all the fax machines in our offices, and put them in my test
resources. So, as I developed, every 30-90 seconds I'd hit the test button,
and 20 to 40 bogus faxes would come out of a machine somewhere in the
offices.

I came in one day and found they'd installed a new fax machine, in my cube.

I then signed up for an online free fax service, and sent it faxes. They
came back as e-mails, each with an ad attached, of course. Then one night I
left my tests running over a huge number of records, to test a new
high-volume phone card they put on the server. I flooded the fax service
with ~45,000 faxes in a couple hours, and they cancelled my account.

Uh... what was your question again?

>    My idea is to have the test ask for the address by stdin, and also
>    ask the user to verify the contents manually.  The user's response
>    will determine if the test passes or fails.

Nix. Send to a common office account, or a Yahoo account, or something. Or
register each tester, and let them put their favorite addy in a
configuration file. Tests >must< run unattended. Everyone needs the minimum
possible excuses not to run all the tests, all the time.
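Here's a rough sketch of the configuration-file version (all names are invented: the test_config.yml file, the mail_to key, and the fallback team address are assumptions, not anything from the thread). Each tester drops their own address in the file, and the test runs unattended everywhere:

```ruby
require 'test/unit'
require 'yaml'

# Each tester's local test_config.yml overrides the team default,
# so nobody has to answer a prompt for the test to pass.
class MailTest < Test::Unit::TestCase
  CONFIG = File.exist?('test_config.yml') ?
           YAML.load_file('test_config.yml') : {}

  def test_address_looks_like_an_address
    address = CONFIG.fetch('mail_to', 'team-tests@example.com')
    assert_match(/@/, address)
  end
end
```

The real mail-sending test would use the same `CONFIG.fetch` lookup; the point is that the human input happens once, in a file, not on every run.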

> Other than that, I'm enjoying Test::Unit quite a lot.

It's a trip!

--
  Phlip
    http://www.c2.com/cgi/wiki?TestFirstUserInterfaces