> > What about Objective-C? It has static typing (with regular pointers) and
> > a kind of dynamic typing (the "id" type). Maybe Ruby should have a similar
> > capability? Or maybe I just said something really stupid?
> 
> I always have to look up definitions...
> http://en.wikipedia.org/wiki/Dynamic_typing#Static_vs._dynamic_type_checking
> 
> Unfortunately I'm not familiar with Objective C so I can't judge on that.
> From what I've seen so far I'd assume it's similar to Java which is regarded
> as statically typed although you have access to type information at runtime
> (as seems to be the case with Objective C).  But the crucial part is that
> you declare types (of variables, of method arguments) in code and these
> types are checked by the compiler.

Well, Objective-C has the complication that it supports non-object types
(regular C integers and the like), so it has to support static typing,
yes.

It also has the id type, which means "any object", as well as typed
pointers to a specific class (or its descendants).  It's a rather hybrid
thing.

A large part of what sets ObjC apart from Java and C++ is that "id"
type (any object) together with the isa pointer -- the runtime type
information that lets you tell what the data is from a generic pointer.

C++ and Java, if you have a pointer to a Rect instance, will only let
you call methods defined in Rect or its parents, unless you cast to a
subclass -- and heaven help you if the instance wasn't actually the
subclass.

In Objective-C, you can send any message to any object, and if it's not
supported, the runtime invokes the unrecognized-selector machinery
(forwardInvocation: and, ultimately, doesNotRecognizeSelector:) instead,
which is what allows DRb-like proxy objects.  Objective-C is strongly and
dynamically typed (with a static check option), while C++ and Java are
weakly and statically typed.
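
Ruby's analogue of that forwarding is method_missing.  A rough sketch of
a DRb-style forwarding proxy (Proxy and the wrapped target are just made
up for illustration):

  class Proxy
    def initialize(target)
      @target = target
    end

    # Forward any message this object doesn't handle to the wrapped
    # target -- roughly what a DRb client-side stub does over the wire.
    def method_missing(name, *args, &block)
      @target.send(name, *args, &block)
    end
  end

  Proxy.new([1, 2, 3]).size   # => 3, answered by the wrapped Array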

Ruby is, of course, also strongly and dynamically typed. It also has no
primitive types (unlike Java and the C variants), so everything is an
object. That's one reason static type checking isn't needed: All
instances carry the header that tells what type of object they are.
There won't be any memory overrun errors from the runtime, say, trying
to access the type field of a C-style int.
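
For instance, even literals answer the same questions any object does
(class names as of Ruby 1.8; newer Rubies report Integer instead of
Fixnum):

  1.class               # => Fixnum (Integer on newer Rubies)
  nil.class             # => NilClass
  1.respond_to?(:+)     # => true -- even literals are full-fledged objects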

Which leaves the program domain.  Not checking the actual class lets one
write mocks, proxies and replacement classes.  It makes one define, or
at least think about, the interface, since it's not coupled to a class. I
think that's a good thing: If you write a program that can robustly deal
with anything that quacks like a duck, you've probably got a decent,
usable interface.
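
A trivial sketch of what I mean (Duck and RobotDuck are made-up names):

  class Duck
    def quack; "Quack!"; end
  end

  class RobotDuck            # a mock/replacement; no relation to Duck at all
    def quack; "QUACK."; end
  end

  def make_noise(animal)
    animal.quack             # only the interface matters, never the class
  end

  make_noise(Duck.new)       # => "Quack!"
  make_noise(RobotDuck.new)  # => "QUACK."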

Regarding exceptions: There's a maxim to crash early and loudly. That's
a good thing.  However, there are places where actually checking the type
(or interface, even) and raising an exception is a pretty minor detail:
Is an ArgumentError really that much more descriptive than a
NoMethodError?  Let the place that calls the code raise the exception, or
let it be raised at the critical places, so you don't scatter
exception-translating code (rescuing a NoMethodError and raising an
ArgumentError in response) all over your code, at every accessor.
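
For instance, the translating version buys you very little over just
letting it crash (width= is a hypothetical accessor):

  # The translation described above: rescue the NoMethodError only to
  # re-raise it under a different name, at every accessor.
  def width=(value)
    @width = value.to_f
  rescue NoMethodError
    raise ArgumentError, "width must respond to to_f"
  end

  # Versus simply letting the NoMethodError surface on its own,
  # which is about as informative and crashes just as early:
  def width=(value)
    @width = value.to_f
  end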

Instead, perhaps, apply the validation pattern and collect the checks in
one set of methods, testing for the purpose at hand.  Write your code in
small, generic pieces, so that it's obvious what sorts of objects will be
in each place.  Write transparent code.
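
Something along these lines, roughly (Order and validate! are invented
for the example):

  class Order
    def initialize(item, quantity)
      @item, @quantity = item, quantity
    end

    # All the checks for this object's purpose live in one place,
    # instead of type tests scattered over every accessor.
    def validate!
      errors = []
      errors << "item must have a price"    unless @item.respond_to?(:price)
      errors << "quantity must be positive" unless @quantity.to_i > 0
      raise ArgumentError, errors.join(", ") unless errors.empty?
      self
    end
  end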

Ari