On Wednesday 19 November 2003 11:41 am, Weirich, James wrote:
> > But as to the issue of partially-implemented interfaces, I
> > think they're fine.  One IO interface is all you need.
> > If you have an object that is input-only and sequential,
> > and the base IO interface allows for more, then your class is
> > simply not going to implement them.
>
> Ah, the solution to an over constrained system is to ignore the
> constraints.

Sometimes you have to; that's programming for you.

> [... From another message ...]
>
> > now you know it responds to open, close, read, write, select,
> > etc. and you know what parameters they take.
>
> Actually, you don't know that because it may be a partially implemented
> interface.

No, you assume all are present.  The developer decides where to pass a 
partially-implemented interface.  If they pass an object that doesn't 
implement output to a method that requires output, KA-BOOM.  The human 
involved there needs to be careful.
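Something like this sketch (the class and method names are all made up) shows the failure mode I mean:

```ruby
# An input-only object handed to a method that requires output.
class ReadOnlySource
  def read
    "data"
  end
  # no #write -- this object implements only the input half of IO
end

def log_to(io)
  io.write("hello")  # KA-BOOM (NoMethodError) if io has no output methods
end

begin
  log_to(ReadOnlySource.new)
rescue NoMethodError => e
  puts "caught: #{e.message}"
end
```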

Interfaces can't guarantee anything; no type checking ever can.  Every 
language has a way to circumvent type declarations.  If anyone is hoping to 
achieve perfect type checking, that's nuts.  There is no perfect type 
checking.  All you can do is make a declaration that informs the developer of 
obvious mistakes.  If they choose to develop against a partially-implemented 
interface, then they should know the dangers.

This is how Ruby is now.  The onus always rests on the human involved.  The 
human decides if an object is suitable for passing to a method.  All I'm 
advocating is a way to inform them of obvious mistakes.  I do not advocate a 
perfect, strict type checking system.  Just some information about whether or 
not an interface is present, or at least should be.
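By "inform," I mean something as simple as this sketch (the helper name is my own invention, not a proposal for core):

```ruby
# Report which methods of an expected interface an object is missing.
# Informs the developer; guarantees nothing.
def check_interface(obj, *methods)
  methods.reject { |m| obj.respond_to?(m) }
end

puts check_interface(STDOUT, :write, :close).inspect  # []
puts check_interface("a string", :write).inspect      # [:write]
```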

> If the presence of the interface tag in the inheritance graph of an object
> doesn't guarantee behavior, then why bother checking for it?  If partially
> implemented interfaces are fine, then you still need to deal with the issue
> of figuring out what portion of the interface constitutes "fine".

Because they're useful to a point, and flexible.  You can combine many 
interfaces into a single implementation, you can partially implement them, 
you can override them, etc.  They fit nicely with class design.  They're 
essentially virtual class designs.
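In Ruby terms, modules already behave this way; a rough illustration (module and class names are mine):

```ruby
# Two "virtual class designs" that one class combines, with one piece
# selectively overridden.
module Readable
  def read; "default read"; end
end

module Writable
  def write(s); s.length; end
end

class Pipe
  include Readable
  include Writable
  def read; "pipe read"; end  # override part of the combined interface
end

p = Pipe.new
puts p.read         # the override wins
puts p.write("ok")  # inherited from Writable
```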

> What I suspect is that when you write a function, you would like to
> communicate things like ...
>
>   * function foo expects a parameter that conforms to the "socket"
> protocol.

Well, of course I like that.  Methods expect certain things of the objects 
they are given.

> And I actually tend to agree with the desire to communicate.  However,
> making an explicit test for inheriting from a particular module or class is
> a poor way to communicate this.  Although you get /slightly/ better
> diagnostics when you goof up, those slightly better diagnostics come at a
> cost.  The cost is excessive, unneeded runtime checking, needlessly limited
> options on what kind of parameters a method accepts, and extra design
> overhead (i.e. the need to explicitly identify abstract protocols that may

You can make this type checking something that's toggled: the checks run 
during development, but in production they're ignored.
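As a sketch, assuming a simple global flag (in practice it might come from an environment variable or command-line switch):

```ruby
CHECKS_ENABLED = true  # flip to false for production: checks become no-ops

def assert_interface(obj, *methods)
  return unless CHECKS_ENABLED  # zero runtime cost when disabled
  missing = methods.reject { |m| obj.respond_to?(m) }
  raise TypeError, "#{obj.class} lacks: #{missing.join(', ')}" unless missing.empty?
end

def copy_stream(src, dst)
  assert_interface(src, :read)
  assert_interface(dst, :write)
  dst.write(src.read)
end
```

With the flag off, `assert_interface` returns immediately, so production code pays almost nothing for the development-time diagnostics.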

> or may not be needed).  And in the end, the goal of communication is not
> served because the test is buried in the /code/ of the method, not in the
> interface itself.

Which is better: a type mismatch reported at the point of calling, where the 
code is owned by the developer, or an obscure parameter mismatch deep inside 
the code of someone else's library?  I prefer it at the point of calling, in 
my own code.
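The contrast looks like this (the "library" method here is a stand-in for someone else's code, several layers deep):

```ruby
def deep_library_call(io)
  io.syswrite("x")  # failure surfaces here, in foreign code, with an obscure trace
end

def checked_call(io)
  unless io.respond_to?(:syswrite)
    raise TypeError, "expected an IO-like object responding to #syswrite, got #{io.class}"
  end
  deep_library_call(io)
end
```

Both fail when given a String, but the checked version fails at the call boundary with a message naming the actual problem.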

> A well named parameter or an RDOC comment both serve to
> put the information in the interface (and in the browsable documentation)
> where the buried kind_of? test will not.

Type checking never replaces documentation; I'm not implying that.  But 
developers make mistakes.  If documentation could prevent all mistakes, then 
why do developers spend so much time debugging?  It's better to have 
informative error messages regarding type mismatches than obscure ones buried 
deep in someone else's code.

	Sean O'Dell