On 2 Jun 2008, at 03:24, David A. Black wrote:
> On Mon, 2 Jun 2008, David Masover wrote:
>> As long as we're nitpicking, you can't necessarily measure what's  
>> happened
>> after the fact, either. The object may well swallow everything with
>> method_missing and do nothing. It may be possible to play tricks with
>> respond_to?, __send__, and so on, to achieve the same effect.
>
> I'm thinking of the effect, though. When you send a message to an
> object, something comes back, or an exception is raised. My point is
> just that none of that can be known with certainty (I can't absolutely
> know that a.b will return c, or an object of class D, or whatever)
> before it happens.


Exactly.

This is the same argument that split physics a century ago. The classical view relied on the precision of mathematics to provide a clockwork understanding of the universe, whilst the modern view used real-world experiments to show that below a certain level of granularity such certainty was an illusion. At the time many critics of the new physics claimed it couldn't possibly be right, for reasons very similar to those given by advocates of immutable and static typing: that runtime uncertainty makes a nonsense of provability and causality, and hence must be something other than it appears. That argument still rages in some corners of physics (cf. Bohm's implicate order) but for all intents and purposes uncertainty is the dominant view and the bedrock of our digital electronic technology.

How does this apply to Ruby? Because of method_missing and the open nature of classes and objects, the only way to know anything about an individual object is to make an observation: send it a message that queries its internal state. The very act of observation may change that internal state, and the observer can never be entirely certain that it hasn't. That's just the nature of the language, much as in a C program there is no way to know in advance what type of memory structure a void pointer actually references, or what damage operating on it may do to the program's integrity - yet there are still cases where a void pointer is the appropriate solution.
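
By way of illustration, here's a minimal sketch (the Heisenberg class and its message names are invented for the example) of an object that swallows every message via method_missing and mutates its own state as a side effect of being observed:

  class Heisenberg
    def initialize
      @observations = 0
    end

    # Swallow any message and quietly record that we were observed.
    def method_missing(name, *args)
      @observations += 1
      nil
    end

    # Claim to answer anything, so callers can't probe us safely.
    def respond_to?(name, include_private = false)
      true
    end
  end

  particle = Heisenberg.new
  particle.position  # looks like a harmless query...
  particle.momentum  # ...but each call has changed @observations

Nothing about the return values reveals that the object changed underneath the caller.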

If certainty is important you can apply all kinds of design conventions to support it. Unit testing performs a battery of experiments to ensure requirements are met. Behaviour-driven development encourages a minimal implementation that closely mirrors expressed requirements. Tight coding standards might forbid 'dangerous' language features such as method_missing or dynamic module inclusion, or even mandate specific design techniques such as exception-driven goal direction, runtime contracts, design patterns, or whatever else happens to work for the developers concerned.
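
For example, a unit test makes such an experiment explicit. Here's a minimal sketch using Test::Unit with a hypothetical Account class:

  require 'test/unit'

  class Account
    attr_reader :balance

    def initialize
      @balance = 0
    end

    def deposit(amount)
      raise ArgumentError, 'amount must be positive' unless amount > 0
      @balance += amount
    end
  end

  class AccountTest < Test::Unit::TestCase
    def test_deposit_increases_balance
      account = Account.new
      account.deposit(10)
      assert_equal 10, account.balance
    end

    def test_negative_deposit_is_rejected
      assert_raise(ArgumentError) { Account.new.deposit(-5) }
    end
  end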

The point of all these approaches is that they reduce the number of ways in which an object can act at runtime, so that the underlying uncertainty is managed. Effectively they shift the granularity of the system to a level at which it obeys classical expectations.
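
In the same spirit, a runtime contract can be as simple as checking up front that a collaborator answers the messages you intend to send (the duck and its quack method are, of course, hypothetical):

  # Narrow the range of behaviours we have to reason about by
  # verifying the contract before relying on it.
  def play(duck)
    raise ArgumentError, 'needs something that quacks' unless duck.respond_to?(:quack)
    duck.quack
  end

  class Duck
    def quack
      'quack!'
    end
  end

  play(Duck.new)    # => "quack!"
  play(Object.new)  # raises ArgumentError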

But the uncertainty is still there under the covers, and applied appropriately it can provide elegant solutions to problems that would otherwise be tedious or even impossible to tackle with static approaches. Once embraced for what it is, it opens up a range of new possibilities for writing reliable and robust applications.


Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net
----
raise ArgumentError unless @reality.respond_to? :reason