On Wed, 2004-10-13 at 02:04, Robert Klemme wrote:
> "Markus" <markus / reality.com> schrieb im Newsbeitrag
> news:1097596015.20674.61.camel / lapdog.reality.com...
> > On Tue, 2004-10-12 at 00:34, Robert Klemme wrote:
> > > "Markus" <markus / reality.com> schrieb im Newsbeitrag
> > > news:1097553071.15571.670.camel / lapdog.reality.com...
> > >
> > > >      But what about duck typing?  This was asked (IIRC) earlier
> > > > on this thread but (again IIRC) not addressed.  The duck typing
> > > > way would be to not depend on the class of the objects in the
> > > > first place.  In the given example it would be something like:
> > > >
> > > >     puts x.class.to_s
> > > >
> > > > which works for all classes, old, new, borrowed, blue.
> > >
> > > I tried to cover this issue but maybe it hasn't become clear enough.
> > > Although duck typing is appropriate in many cases, there are cases
> > > where it's not and in fact degenerates a design.
> >
> >      Actually, what you are talking about is more "call it and hope it
> > works" than duck typing.  The first part of duck typing is asking "does
> > it quack like a duck?"; before calling a method, you check to see if the
> > object responds to it (and perhaps check other properties that may be
> > needed to get the results you want).
> >      It isn't a matter of eliminating conditional code completely, but
> > rather of basing the condition on what you care about (the methods)
> > rather than on something only marginally related (the class).
> 
> I see we have a quite different understanding of Duck Typing.  IMHO with
> Duck Typing you just invoke a method and get bitten by the NoMethodError
> if it's not there.  After all, what do you want to do if you discover that
> an instance does not support a method you expect it to?  Through testing
> you ensure that the program is well behaved.  And I think my
> understanding is close to what Dave and others say.

     Sure, if you had only one trick in your trick bag, you would just
call it and hope/expect it to work.  That's the old "never test for an
error condition you don't know how to handle" rule.  But if (as in the
case in question) you have multiple ways to do what you are trying to
do, some of them appropriate to some kinds of values, some appropriate
to others, you'd be nuts to just use the one and hope the other cases
never arose.

     Instead, you'd either 1) test for method signatures with
respond_to? or 2) implement the most common case and then rescue
NoMethodError and try the rare cases.  The first is (IMHO) the cleaner
of the two, but I have seen cases where the second was used effectively.
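     To make the two alternatives concrete, here is a small sketch (the
describe methods and the Named struct are made up for illustration, not
taken from the example under discussion):

```ruby
# Hypothetical example: a value may or may not support #name.
Named = Struct.new(:name)

# 1) Ask first: check for the method with respond_to?
def describe(x)
  x.respond_to?(:name) ? x.name : x.to_s
end

# 2) Leap first: implement the common case, rescue NoMethodError
#    and fall back for the rare cases.
def describe_rescuing(x)
  x.name
rescue NoMethodError
  x.to_s
end

describe(Named.new("duck"))   # => "duck"
describe_rescuing(42)         # => "42"
```

Either way, the condition is based on the method, not the class.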

> > > Now, while this is certainly a good idea for methods like to_s,
> > > to_i etc. it's a bad idea for functionality that is special to a
> > > part of your application.  For example, if you had a class Foo and
> > > defined methods String#to_foo and Fixnum#to_foo etc. that might
> > > come in handy for a certain lib or module and a lot of libraries
> > > do this, class interfaces of standard classes get cluttered with
> > > methods totally specific to certain parts of the application.
> > > Also the likelihood of name clashes increases.
> >
> >      This is another thing that is not quite duck typing.
> 
> Clearly, as I was not talking about Duck Typing here but instead about
> certain design rationales independent of the typing model.

     So here is a language that gives you many workable ways to
accomplish a reasonable goal, but none of them is usable because every
one of them violates some design principle or another?  Is it possible
that the set of design principles you are trying to employ is a tad too
restrictive?

     If I want to write a polymorphic routine, but am not supposed to
test the class (since it provides no guarantee of behaviour) or the
method signature, and am not allowed to extend the core classes or
subclass them, how, pray tell, is it to be done?

> >  Yes, it is
> > possible to extend classes to simplify duck typing (sometimes called
> > things like masquerading or class spoofing), but it isn't even needed
> > (though it often isn't as bad as you make it sound).
> 
> Even worse, there are cases where it can do harm.

     Agreed.  I was thinking about taking 'rm' off my system for that
very reason, but then I thought--what if it somehow gets put back on? 
How will I get rid of it then?

> > > Note also, that you introduce dependencies from standard lib classes
> > > to application classes which can be dealt with in Ruby because of its
> > > dynamic nature, but which generally point in the wrong direction: it's 
> > > usually cleaner to have an acyclic dependency graph (certain languages 
> > > need this because otherwise you'll have compilation problems,...)
> >
> >      I don't see the relevance of this point.  In some languages you
> > need to declare all identifiers before you use them.  In some languages
> > you can not use negative numbers.  But generalizing from these to ruby
> > is something deserving of more thought than just "it is true in some
> > languages so..."
> 
> As I said, Ruby can cope with these kinds of cyclic dependencies.  But
> that does not make it good design if you have a standard class rely on
> some kind of application part.  Maybe I should have been more clear as to
> the reasons behind this.  Reasons, why one generally does not want to
> modify standard classes with *specific* code:
> 
>  - possible name clashes
> 
>  - interface bloat, i.e., too many methods
> 
>  - documentation (Where do you document these methods?
>    The documentation belongs to your app's documentation,
>    but the methods sit in std lib classes.)
> 
>  - possible interference with the std class's behavior,
>    that might break other application parts
> 
>  - introduction of bugs into parts that are assumed to
>    be thoroughly tested and thus quite bug free
> 
> I don't say it's *always* bad to modify standard classes.  I say, it's
> very questionable to modify standard classes with behavior that is
> specific to a certain application.  Of course, if you write a small script
> this might not be a problem and if it's the most efficient way to arrive
> at a solution, then doing so is certainly a good thing.  But if you write
> an application consisting of several components or if you even write a
> library intended for general use, you should be very careful in modifying
> standard classes.

    If the behavior is specific to the application (and added by the
application) it can as well be encapsulated within the application.  To
a first approximation, this will be the case simply because the code
which extends them lives in the application.  If you aren't using it, it
isn't there.  (Though looking back, I see that several of your points
seem to assume that this is not the case--e.g., the documentation
point.)  If this is still worrisome, you can extend the interface only
of objects turned over to (or created by) your application.  You can
even (if you really feel the need to go this far) remove them from any
objects you export.  As for the namespace issues, you can, if you like,
package your extensions in a companion module, "under-write" them with
method_missing, use reflection to detect and report collisions before
they occur, etc.
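     For instance, the "extend only the objects turned over to your
application" idea might be sketched like this (the module and method
names here are hypothetical, echoing the to_foo example above):

```ruby
# Keep the app-specific method in a module and extend only the
# individual objects the application takes in, so the String class
# itself is never reopened.
module FooConversion
  def to_foo
    "Foo(#{self})"   # stand-in for some app-specific conversion
  end
end

def accept(obj)
  obj.extend(FooConversion)   # only this one instance gains #to_foo
  obj
end

s = accept("hello".dup)
s.to_foo                      # => "Foo(hello)"
"other".respond_to?(:to_foo)  # => false -- the class is untouched
```

The extension lives in the object's singleton class, so no other
string, and no other part of the program, ever sees it.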

     All of these precautions can be neatly encapsulated themselves, so
you don't even need to mess with them while you're working on your code.

     It really does seem to me that this is a "rule" rooted more in the
limitations of other languages than in abstract software design
principles, and not a good match for Ruby.  There are too many good ways
to deal with the potential problems to write the whole technique off
with an "almost never" prohibition.

     For my part, I am not saying it is _always_ bad to extend standard
classes; rather, done judiciously, it can be a very useful technique,
and is no more dangerous than any other language feature of similar
power and generality.

    -- Markus