FYI--This topic has come up a number of times in the past, and it usually 
seems that (most of) the strongest pro-dynamic arguments are of the 
"based on my (considerable) experience" variety, which many people 
nevertheless find less than satisfying (in terms of understanding why). So 
I thought some people might find these remarks interesting. 

Subject: Re: Who's minister of propaganda this week?
Date: Thu, 15 Mar 2001 12:37:18 +0100
From: "Alex Martelli"
Newsgroups: comp.lang.python

Alex Martelli wrote:
> 
> "Michael Chermside" <mcherm_python / yahoo.com> wrote in message
> news:mailman.984619039.7856.python-list / python.org...
> > Alex Martelli wrote:
> >          ... [snip]...
> >   > So, all the compile-time checking is buying is catching (a small
> >   > subset of) the errors that would be caught in testing anyway, a
> >   > little bit earlier (thus, a little bit cheaper) -- it's never the
> >   > case that one has to write a test which would not be needed at
> >   > all if type-checking was static, since "the object has the right
> >   > type" is a small subcase of "the object _behaves_ per the specs".
> >   >
> > I'm really not sure I see it this way. If the method foo(x) is known
> > to take a FancyDateObject
> 
> Assume that is an abstract interface (no gain in terms of
> functionality assurance if it's concrete) and (without loss
> of generality) that it has two methods First and Second that
> foo uses (it may have others that foo is ignoring, but they
> don't affect the following argument).  OK so far?  Good.
> 
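
For concreteness, a minimal Python sketch of the setup Alex describes
(FancyDateObject, First, Second, and foo come from the thread; the bodies
and the 'mode' parameter are invented for illustration):

    class FancyDateObject:
        """Abstract interface: anything offering First() and Second()
        with the agreed semantics is acceptable to foo."""
        def First(self):
            raise NotImplementedError
        def Second(self):
            raise NotImplementedError

    def foo(x, mode):
        # the only call patterns foo uses: First alone, Second alone,
        # or First followed by Second -- never Second then First
        if mode == "first":
            return x.First()
        if mode == "second":
            return x.Second()
        return x.First(), x.Second()
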
> > there are three kinds of errors we could make. One is that foo() is
> > written badly so it
> > doesn't do what it's supposed to. The unit tests of foo() need to
> > guard against this.
> 
> Right.  Specifically, they'll test the combinations of calls
> to First and Second methods of the x argument that foo needs
> to actually perform, say in certain cases First only, in
> others Second only, in others yet, First then Second (if foo
> never needs to call Second before, and First after, on its
> argument, then its unit-tests will not exercise that path).
> 
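
In concrete terms, foo's unit tests might exercise exactly those call
patterns with a recording stub (everything below is an invented sketch
building on the one above):

    import unittest

    class _RecordingStub:
        """Records the order of calls so the tests can check which
        paths foo actually took."""
        def __init__(self):
            self.calls = []
        def First(self):
            self.calls.append("First")
            return 1
        def Second(self):
            self.calls.append("Second")
            return 2

    class FooCallPathTests(unittest.TestCase):
        def test_first_only(self):
            stub = _RecordingStub()
            foo(stub, "first")
            self.assertEqual(stub.calls, ["First"])
        def test_second_only(self):
            stub = _RecordingStub()
            foo(stub, "second")
            self.assertEqual(stub.calls, ["Second"])
        def test_first_then_second(self):
            stub = _RecordingStub()
            foo(stub, "both")
            self.assertEqual(stub.calls, ["First", "Second"])
        # deliberately no test for Second-then-First: foo never uses it
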
> > Another possible error is that FancyDateObject isn't written properly.
> 
> As I assumed this is an interface, let's say you're talking
> about some specific implementation thereof -- FDO_impl1, say.
> 
> > The unit tests
> > of FancyDateObject need to guard against this.
> 
> Right again -- specifically, they'll test that FDO_impl1's
> implementations of First and Second support the call patterns
> that the specification demands.  For example, if that is what
> the specs say, the implementation will work fine if only First
> is called, or if only Second is called, or if Second is called
> before and _then_ First is called -- 'First before, Second after'
> may not be in the specs and thus doesn't get exercised by unit
> tests.
> 
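
FDO_impl1's own tests, by contrast, would check only the call patterns its
written spec promises (again an invented sketch; FDO_impl1 is the name used
in the thread):

    class FDOImpl1ConformanceTests(unittest.TestCase):
        """Exercise what the spec guarantees: First alone, Second alone,
        or Second followed by First."""
        def test_first_alone(self):
            FDO_impl1().First()
        def test_second_alone(self):
            FDO_impl1().Second()
        def test_second_then_first(self):
            d = FDO_impl1()
            d.Second()
            d.First()
        # 'First before, Second after' is not in the spec, so no test
        # here exercises it -- exactly the gap Alex points out
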
> > And the third type of
> > error is that somewhere where we CALL foo(), we might pass it a
> > DateObject instead... or even a String... which will cause it to
> > perform wrong. To guard against
> 
> More generally: we might erroneously pass to foo some object that
> does not even _implement_ First and Second methods with acceptable
> signatures, or some that _does_ implement the methods with signatures
> that appear good BUT not with the semantics that foo needs -- e.g.,
> it does not let First be called earlier and Second later (because
> there is a mismatch in semantics specs between the specs that foo
> requires and the specs that x actually ensures -- just as in the
> other cases, regarding existence or signature of the methods).
> 
> No compiler warns you at compile-time against all, or even _most_,
> errors of this very common kind.  (The existence of more than one
> method is not needed to have this behavior -- just a single method
> suffices to exhibit this error, e.g. it could take an argument i
> with a prereq of i>23 and be called with i==23 by foo; or foo might
> call it 7 times when the semantics specify it must be called no
> more than 6 times; etc, etc).
> 
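
Neither failure is visible to a compile-time type check; only a runtime
check, or a test that actually hits the case, catches it. An invented
single-method example of the kind Alex mentions:

    class Counter:
        """Contract (not signature): the argument must exceed 23, and
        the method may be called at most six times per instance."""
        MAX_CALLS = 6

        def __init__(self):
            self._calls = 0

        def bump(self, i):
            assert i > 23, "precondition violated: i must be > 23"
            self._calls += 1
            assert self._calls <= self.MAX_CALLS, "called more than six times"
            return i - 23

    c = Counter()
    # c.bump(23) type-checks in any language, yet violates the contract;
    # calling c.bump(30) seven times would likewise fail only at run-time.
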
> Another way to express it: an interface is not just, not even
> _mostly_, about existence and signature of methods -- it's mostly
> about prereq's, post-conditions, and invariants; and nobody can
> check those at compile-time in enough cases to make a difference
> to your software's reliability.
> 
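
Those clauses can at least be written down and spot-checked at runtime.
Here is a small invented helper (not a standard-library facility) that
wraps a function with a precondition and a postcondition:

    import functools

    def contract(pre=None, post=None):
        """Check a precondition on the arguments and a postcondition on
        the result, raising AssertionError on violation."""
        def decorate(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                if pre is not None:
                    assert pre(*args, **kwargs), "precondition failed"
                result = fn(*args, **kwargs)
                if post is not None:
                    assert post(result), "postcondition failed"
                return result
            return wrapper
        return decorate

    @contract(pre=lambda i: i > 23, post=lambda r: r >= 1)
    def shrink(i):
        return i - 23
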
> So, statically-typed languages make a LOT of the not-very-important
> issues of method-existence and signature -- because those issues
> are THE ones they can check statically, not because their importance
> (wrt the importance of real semantics issues, the programming-by-
> contract parts of the interface) warrants special attention.  It's
> like the drunkard who was looking for his housekeys at one end of
> the street, opposite to the end where he had dropped them, because
> the end he was searching at was the one that a streetlamp lit...
> 
> > this in a
> > dynamically typed language, we have to write unit tests for *every
> > single place*
> > that we call foo(). Of course, we'd be writing unit tests for those
> > functions anyway,
> > but we won't be able to assume that foo() works properly, and will
> > need extra tests to ensure this.
> 
> You can never 'assume that foo(x) works properly' for a given x
> in an untested case -- and syntactic-level compatibility between
> the methods x offers (& their signatures), and the ones foo
> requires of its arguments, doesn't buy you much, since the likely
> and troublesome errors are with contract-expectations mismatches.
> 
> If the implementation of foo and its call are inside the same
> component, then unit-tests should exercise the relevant paths
> (or else the component is being released in a state of incomplete
> testing -- thus, dubious reliability, whether with or without
> static typechecks).
> 
> If the implementation of foo and its call are in different
> components, then you have a system-integration problem (again,
> one that remains independently of static type-checking) and
> thus more 'strategic' kinds of troubles.  You *STILL* need
> 'extra' tests to ensure the actually-implemented semantics
> of a given actual argument and the ones foo requires of its
> formal argument match -- for all distinct cases that occur
> in the (system-level) acceptance criteria tests.  If then, in
> later system operation, you meet a failing case that was not
> tested (makes no difference whether the mismatch is in
> method existence, signature, or semantics), then your acceptance
> tests were insufficient -- and static checking would not have
> made them sufficient.
> 
> > In a statically typed language, we still need the unit tests for
> > cases 1 and 2, but the third type of error is caught by the compiler.
> 
> Not in most cases of interest, no.
> 
> > And in my mind, ANY TIME that
> > I can have a machine do my work for me it's better... I can be lazy,
> > and the machine
> > never gets tired after a long day and forgets to test sometimes. Of
> 
> Extra checks (even if theoretically redundant) are good insurance EXCEPT
> where they breed exactly this kind of 'complacency'.  Release procedures
> MUST 'never .. forget to test' either -- and it's not that hard a
> problem to set up your build/release environment so that a machine
> ensures this.
> 
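
A build script can enforce this mechanically -- for instance something
like the following invented sketch, which refuses to go on to the
packaging step unless the whole test suite passes:

    import subprocess
    import sys

    def release():
        # run the full unit-test suite; a nonzero exit status means failure
        status = subprocess.call([sys.executable, "-m", "unittest", "discover"])
        if status != 0:
            sys.exit("tests failed -- refusing to build a release")
        # the actual packaging command is project-specific; this one is
        # only a placeholder
        subprocess.check_call([sys.executable, "setup.py", "sdist"])

    if __name__ == "__main__":
        release()
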
> > course, I can
> > put an assert at the top of foo() which asserts that x is of type
> > FancyDateObject,
> > but if I always assert the types of my arguments then I'm basically
> > using a statically typed language.
> 
> If your 'assert ... is of type' actually runs a sensible albeit small
> 'type'-testing procedure, which exercises the whole contract that the
> interface implements, then you have gone statically typed languages
> one better -- unfortunately, this is most often impractical (as such
> infinitely-repeated tests are far too slow, can't be made non-invasive,
> etc, etc).  And it doesn't buy you all that much either (a bit more
> than just statically checking types, but not all that much more) --
> you still need to have *exercised* the call-cases of interest.
> 
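
Such an assert would have to look something like the following invented
sketch -- note that it actually exercises the contract against a scratch
copy of the argument, which is exactly why repeating it on every call is
usually too slow and too invasive to be practical:

    def satisfies_fancy_date_contract(x):
        """Probe the call patterns foo relies on.  Assumes (an invented
        assumption) that the interface offers a cheap copy() method, so
        the probe does not disturb x itself."""
        try:
            probe = x.copy()
            probe.First()            # First alone must work
            probe = x.copy()
            probe.First()
            probe.Second()           # and so must First-then-Second
            return True
        except Exception:
            return False

    def foo_with_contract_check(x, mode):
        assert satisfies_fancy_date_contract(x)
        return foo(x, mode)
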
> Asserting a single, specific, invariable concrete type would be on
> a different plane -- perhaps feasible (at not-inconsiderable cost)
> for some within-the-component cases (where you can commit to never
> needing ANY polymorphism EVER), definitely unfeasible across any
> component-boundary (cfr. Lakos' "Large Scale" book, again -- it's
> still the best treatment I know of dependency management in large
> scale software development, and, in particular, of the inevitability
> of purely abstract interface-classes across component boundaries
> to manage those dependencies; Martin's articles, which can be found
> on his objectmentor site, are more readable, although not quite as
> deep and extensive as Lakos' big book of course).
> 
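
Asserting one concrete type, by contrast, is a one-liner -- and that is
precisely its problem (an invented sketch; it rules out test doubles and
every alternative implementation):

    def foo_concrete_only(x, mode):
        # commits this call site to FDO_impl1 forever: no polymorphism
        assert type(x) is FDO_impl1, "only the one concrete type accepted"
        return foo(x, mode)
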
> > There are times when a dynamically typed language is more flexible,
> > and it's certainly
> > nice not to have to declare everything just to specify its type, but
> > there ARE
> > advantages to static typing, and this, I believe, is the biggest one.
> 
> We agree that the (redundant, but earlier) checks performed by the
> statically-checking compiler are the 'biggest' (least small?-)
> advantage of said compiler (well, apart from speed issues, which
> may at times be paramount for certain well-identified components).
> 
> We disagree on how big that 'biggest' is.  I estimate that (speed
> apart) this specific advantage may buy me about a 5% productivity
> improvement -- so I agree it's an advantage, and I agree it's
> larger than any other advantages of static checks (none of which,
> it seems to me, may make even a 1% further improvement even when
> all taken together -- again, speed of resulting code apart), but
> I don't think it's worth anywhere near the _bother_ (productivity
> impact, _negative_ improvement) of the contortions I have to
> perform to satisfy the checks (which cost me _at least_ 10% of
> my sw-lifetime-coding-productivity, even in the cases where I'm
> least interested in specifically taking advantage of dynamism).
> 
> I come to these (tentative) conclusions after a lifetime spent
> working _mostly_ in statically-typed languages -- because the
> performance characteristics of the machines I was targeting just
> didn't afford me the luxury of doing otherwise (when they DID,
> I repeatedly tried out dynamically typed languages, and, over
> and over again, I was tickled pink at how well they worked --
> most people are surprised when they learn how much of the
> 'background processing' of the programs I was doing in the
> mid-80's [on IBM mainframes] was done in Rexx, but then, they
> are equally surprised at how fast and reliably I delivered:-).
> 
> Today, I'm still relying mostly on C++ to earn my daily bread
> (3-D modeling &c being pretty compute-intensive even by today's
> standards -- and 3D for mechanical engineering is what
> we're mostly doing here at my current employer), but more and
> more dynamically typed code 'sneaks in' (thanks be!-)...
> 
> Alex


-- 
Conrad Schneiker
(This note is unofficial and subject to improvement without notice.)
