"Pixel" <pixel / mandrakesoft.com> wrote in message
news:lyherj7mk5.fsf / leia.mandrakesoft.com...
> "David Simmons" <david.simmons / smallscript.net> writes:
>
> [...]
>
> > Overloading is the static-binding (language) name. Multi-methods is
> > the dynamic-binding (language) name. Some authors prefer to write it
> > as "multimethods" e.g., Art of the Metaobject Protocol and various
> > other excellent lisp references.
> [...]
> > However, be they statically or dynamically bound, the principle of
> > the facility is the same: in statically typed and analyzed/compiled
> > languages the binding (based on argument types) takes place at
> > compile-time; in dynamically typed languages the binding (based on
> > argument types) takes place at runtime. That principle being the use
> > of all the argument types as predicates in the binding process,
> > rather than just using the method/function name (and possibly the
> > receiver's type).
>
> yuck, i don't like this.

Yes, but following this principle leads to a much clearer understanding
of the problems one finds in most statically typed languages w.r.t.
covariance and overloading. There is only one correct set of semantics,
and it is broken in most statically typed language implementations.
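
For a concrete illustration of that breakage, here is the standard C++
demonstration (the Shape/Circle names are just made up for the example):
overload selection uses only the static type of the argument, so the
runtime type never participates.

#include <iostream>

struct Shape  { virtual ~Shape() {} };
struct Circle : Shape {};

// Overload selection happens at compile time, from the *static* type of
// the argument; the dynamic type of the object never participates.
void draw(const Shape&)  { std::cout << "draw(Shape)\n"; }
void draw(const Circle&) { std::cout << "draw(Circle)\n"; }

int main() {
    Circle c;
    Shape& s = c;   // static type Shape&, dynamic type Circle
    draw(c);        // prints "draw(Circle)"
    draw(s);        // prints "draw(Shape)" -- the runtime type was ignored
    return 0;
}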

>
> multiple dispatch is very seldom present in dynamic languages since
> the only typed parameter is the object. dynamic languages with
> optional type annotation can have multiple dispatch (dylan, CLOS).

That statement is a bit circular. Most dynamic languages do not support
multi-method binding/dispatch. Ergo, all the arguments to their methods are
of type <Object/any>.

For those languages which do provide multi-method dispatch, the facility
often sees quite heavy use in certain problem domains, so you have to look
at particular code bases to judge. Within core language libraries, the most
obvious uses of multi-methods show up in numerics and, to a lesser degree,
in iterators and collections.
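
To make the binding rule concrete for the numerics case, here is a minimal
sketch (C++ for consistency of notation in this post, with a hand-built
table standing in for what a multi-method-capable language does
automatically; the Int/Float names are invented for the example): the
implementation of "add" is selected at call time from the runtime classes
of *both* arguments.

#include <cstdio>

// Tags stand in for runtime class identity in this sketch.
enum Tag { INT, FLOAT, NTAGS };

struct Number { Tag tag; Number(Tag t) : tag(t) {} virtual ~Number() {} };
struct Int   : Number { Int()   : Number(INT)   {} };
struct Float : Number { Float() : Number(FLOAT) {} };

typedef void (*AddImpl)(const Number&, const Number&);
static void addII(const Number&, const Number&) { std::printf("Int + Int\n");     }
static void addIF(const Number&, const Number&) { std::printf("Int + Float\n");   }
static void addFI(const Number&, const Number&) { std::printf("Float + Int\n");   }
static void addFF(const Number&, const Number&) { std::printf("Float + Float\n"); }

// The table plays the role of the multi-method: both argument types act
// as predicates in the binding, not just the first one.
static AddImpl addTable[NTAGS][NTAGS] = {
    { addII, addIF },   // left operand Int
    { addFI, addFF },   // left operand Float
};

static void add(const Number& a, const Number& b) { addTable[a.tag][b.tag](a, b); }

int main() {
    Int i; Float f;
    const Number& x = i;   // static type Number, dynamic type Int
    const Number& y = f;   // static type Number, dynamic type Float
    add(x, y);             // prints "Int + Float": bound on both runtime types
    return 0;
}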

In dynamic languages which lack multi-methods, we see "double-dispatching"
(a poor-man's workaround) for receiver+1arg cases where all the types are
known up front at design time [i.e., no 3rd party development or late
deployment time integration issues].
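
For reference, here is what that double-dispatch workaround looks like,
again sketched in C++ (the Int/Float protocol is invented for the example):
the first virtual call binds on the receiver, and the callback then binds
on the argument, which is why every argument class must be known when the
protocol is designed.

#include <cstdio>

struct Int; struct Float;

struct Number {
    virtual ~Number() {}
    virtual void addTo(const Number& other) const = 0;      // 1st dispatch: on the receiver
    virtual void addedToInt(const Int& left) const = 0;     // 2nd dispatch: on the argument
    virtual void addedToFloat(const Float& left) const = 0;
};

struct Int : Number {
    void addTo(const Number& other) const { other.addedToInt(*this); }
    void addedToInt(const Int&) const     { std::printf("Int + Int\n"); }
    void addedToFloat(const Float&) const { std::printf("Float + Int\n"); }
};

struct Float : Number {
    void addTo(const Number& other) const { other.addedToFloat(*this); }
    void addedToInt(const Int&) const     { std::printf("Int + Float\n"); }
    void addedToFloat(const Float&) const { std::printf("Float + Float\n"); }
};

int main() {
    Int i; Float f;
    const Number& a = i;
    const Number& b = f;
    a.addTo(b);    // Int receiver, Float argument -> prints "Int + Float"
    return 0;
}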

I am not sure why you mentioned these generally well understood facts [at
least they are generally well understood within the dynamic language
implementation community].

>
> >
> > This requires significantly more precise explanation to present the
> > issues properly for languages that have static typing and limited
> > dynamic typing facilities [which leads to type-case expressions that
> > are runtime checked rather than using modern adaptive-dynamic binding
> > techniques developed in the mid-80's].
> >
> > This is "closely" related to my comments on C++ lacking "Calltime
> > Dispatch Binding". I.e., C++ has static-overloading but lacks
> > calltime-dispatch-binding facilities to be able to provide
> > runtime/calltime overloading.
>
> like nearly every OO language, C++ has runtime dispatch based only on
> the type of the object, not on the argument types. Dispatch on
> argument types is decided at compile time. This is not multiple
> dispatch (also called multi-methods).
>
> cf. Castagna's nice paper:
> http://citeseer.nj.nec.com/castagna95covariance.html

Sorry, I could not access the paper on that site without an ACM online
library membership. Feel free to send me a PDF if available.

>
>
> > If it had such facilities then it could, at runtime, truly dispatch
> > on one or more parameters to a function (including <this> as a
> > parameter).
>
> Java, C#... are in the same category

I agree, they are broken. But <g> you are preaching to the choir; I am a
dynamic language advocate, and that is where I have spent my time over the
last ten years, pioneering new facilities and advancing the state of the art.
Note: I include scripting languages in the dynamic language category.

That's one reason why Microsoft asked me to get involved with the .NET
architecture and its support for dynamic languages. It is an activity on
which I will continue to interact with Microsoft over the coming years.

All the "crap" regarding genericity in Java [and C#] would never have been
an issue if they had done away with (or offered alternatives to) the VTable
dispatch mechanism -- which is something I appear to be influencing the
thinking on at Microsoft. It would also be trivial to inherently provide
proper AOP if true predicate-based calltime binding and dispatch were
performed. NOTE: I am being very careful to stay away from the potentially
dodgy term "dynamic binding" because, like the broadly used [and now
relatively meaningless] term object-oriented, it has become rather polluted.
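
Just to pin down what I mean by predicate-based calltime binding, here is a
naive illustration only -- not the technique used in the AOS VM, and the
Account/doWithdraw names are made up for the example: each candidate method
carries a guard over the actual arguments, and the first applicable guard,
evaluated at call time, selects the implementation.

#include <cstdio>
#include <vector>

struct Account { double balance; };

typedef bool (*Guard)(const Account&, double amount);
typedef void (*Impl)(Account&, double amount);
struct Candidate { Guard guard; Impl impl; };

static bool overdraws(const Account& a, double amt) { return amt > a.balance; }
static bool always(const Account&, double)          { return true; }

static void reject(Account&, double)         { std::printf("rejected\n"); }
static void withdraw(Account& a, double amt) { a.balance -= amt; std::printf("ok\n"); }

// The "generic function": candidates are tried, most specific first, on
// every call, so the binding depends on the actual arguments at call time.
static void doWithdraw(const std::vector<Candidate>& gf, Account& a, double amt) {
    for (std::size_t i = 0; i < gf.size(); ++i)
        if (gf[i].guard(a, amt)) { gf[i].impl(a, amt); return; }
}

int main() {
    std::vector<Candidate> gf;
    Candidate guarded  = { overdraws, reject };   // most specific predicate first
    Candidate fallback = { always, withdraw };
    gf.push_back(guarded);
    gf.push_back(fallback);

    Account acct = { 100.0 };
    doWithdraw(gf, acct, 30.0);    // prints "ok"
    doWithdraw(gf, acct, 500.0);   // prints "rejected"
    return 0;
}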

Similar issues occur with the static language notions of sealed and final,
as well as with interfaces. Closely related is the need for selector
namespaces, which I invented in 1996 and which, to my great satisfaction,
are now going to be a part of the ECMAScript (JavaScript 2.0) standard. The
ECMAScript selector namespace design, as far as I know, is unrelated to my
work -- which is what makes it so satisfying, because it means that
independent parties trying to solve similar problems came to the same
fundamental conclusions about its importance.

>
> http://people.mandrakesoft.com/~prigaux/overloading2.java

Sure, the example clearly illustrates the problem. Static binding of dynamic
type information does not work. I.e., languages which only have static
binding cannot provide proper semantics for method-implementation selection
based on argument types.

Dynamically typed languages with static-language optimization features and
adaptive compilation can provide these features, and do so at speeds that
are competitive with or better than C++. But it is a much harder problem if
there is little or no declared type information, because general adaptive
compilation optimizations are hard.

Ideally, one wants a common language architecture that provides full dynamic
behavior, with optional declarative typing, solid type inferencing, static
compilation capabilities, and adaptive dynamic jitting facilities (which can
feed back to augment static compilation/pre-jitting). This is the focus of
the AOS Platform, the Microsoft .NET Platform, and others.

>
> >
> > > - "Calltime Dispatch Binding": not in C++? weird
> >
> > Strictly speaking, vtable dispatch is not a binding process at all.
> > It is just a vectored indirection that is invariant with respect to
> > methods being added, removed, calling context restrictions etc.
> > Whereas a calltime dispatch binding facility actually performs the
> > binding at call-time based on various runtime alterable
> > characteristics.
>
> i don't think the implementation of C++ should distinguish it from
> Java. Many OO languages do not allow adding methods at runtime and
> have the same behaviour as C++. Of course, in Ruby you can add methods
> at runtime and the vtable trick is not possible.

It is worse than the C++-like mechanism merely being "the vtable trick":
VTables are actually (demonstrably) slower than "true" (receiver-only)
dynamic-binding dispatch mechanisms. This fact is (reasonably well)
understood today, and it will increasingly be the case as long as the gap
between processor core speeds and L2 cache and memory speeds continues to
widen.

It is fairly easy to illustrate on the Intel processor family. I've posted
(on comp.lang.smalltalk within the last 12 months) at least two detailed
explanations showing the machine instructions, cycle-times, and benchmarks.
The original technique was developed by David Ungar and published in OOPSLA
papers in the mid-80's. It has been a standard part of most jit-based
Smalltalk implementations for the last ten years or so.
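
For anyone unfamiliar with the idea, here is roughly the shape of a
send-site (inline) cache, heavily simplified and not any particular VM's
implementation: the common case is one compare against the class seen last
time followed by a direct, predictable call, versus the two dependent
memory loads (object -> vtable -> slot) plus an indirect call that vtable
dispatch requires.

#include <cstdio>

struct Object;
typedef void (*Method)(Object* self);

struct Class  { Method print; };    // a one-selector "method dictionary" for the sketch
struct Object { Class* cls; };      // every object carries its class pointer

// Stand-in for the slow path; a real VM would search the class hierarchy here.
static Method slowLookup(Class* cls) { return cls->print; }

// One cache per send site: the class seen on the last send and the method
// that was bound for it.
struct CallSite { Class* expected; Method target; };

static void sendPrint(CallSite* site, Object* receiver) {
    if (receiver->cls == site->expected) {
        site->target(receiver);                 // hit: one compare + a direct call
    } else {
        Method m = slowLookup(receiver->cls);   // miss: full lookup
        site->expected = receiver->cls;         // re-prime the cache for next time
        site->target = m;
        m(receiver);
    }
}

static void printPoint(Object*) { std::printf("a Point\n"); }

int main() {
    Class pointClass = { &printPoint };
    Object p = { &pointClass };
    CallSite site = { 0, 0 };
    sendPrint(&site, &p);    // first send misses and primes the cache
    sendPrint(&site, &p);    // later sends on the same class hit the cache
    return 0;
}

In a real jit the compare and the call target are burned into the generated
code at the send site, which is where the win over the double memory
indirection comes from; the sketch above only models the logic.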

However, I intentionally have not published information on techniques I have
developed (on the AOS Platform VM) for high-performance predicate-based
(incl. multi-method) dispatching, especially with regard to implementation
on the .NET platform, where I am still exploring the area with Microsoft
[who needs this technology as much as Sun/Java does].

NOTE: The AOS Platform VM is unrelated to the .NET technology. I have been
evolving the AOS Platform design for the last decade, and it is now in its
4th generation.

>
> >
> > > - what do you mean with "Tail Calling"? IMO it's an optimisation
> > > which depends on the implementation. AFAIK many C++ implementations
> > > handle tail recursion (gcc does it nicely since 2.96)
> >
> > I did not know gcc had this; I've never tried it. However it is not
> > part of the C++ standard. By that measure, there are many different
> > C++ extensions that exist (incompatibly) across implementations for
> > providing very useful features.
>
> do optimisations have to be in a standard ???

Of course not [but the chart was listing features of "languages", not
particular implementations]. I.e., to characterize them as a feature of the
"language", they need to be offered in all "standard" implementations.
Otherwise, they are just a feature of a given implementation or dialect.

> the use of tail-recursion or not
> does not alter the behaviour of a program until you reach stack
> overflow. But you can say the same for many optimisations.

It also has a significant impact on debugging, etc. However, that is beside
the point; what you are saying here does not address my comments back to you
on tail-recursion optimization in the "C++" language.

>
> >
> > w.r.t. tail-calls, (if you say that gcc implementation supports
> > tail-recursion calls, I certainly am interested in verifying that
> > because I could use it for some VM build work).
>
> well, it works on
>
> http://people.mandrakesoft.com/~prigaux/47.c
> http://people.mandrakesoft.com/~prigaux/48.c
> http://people.mandrakesoft.com/~prigaux/50.c
> http://people.mandrakesoft.com/~prigaux/53.c
>
> > However, I'm not really sure I see how it (c++) could be doing this
> > for anything other than calls to the current method/function. Which
> > is not general tail-recursion calling [otherwise one could loosely
> > claim that any language with a goto had tail-calling].
>
> i don't know what you call "general tail-recursion". Of course the
> poor C compiler has a hard time analysing things to prove the
> tail-recursion optimisation is safe.

I think we may be talking past each other without sharing the same
definition of terms. Can you cook up a simple example (or some pseudo-code)
to illustrate what gcc can do?
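
In the meantime, here is roughly the distinction I am drawing, just as a
sketch; I have not verified which of these cases gcc 2.96 actually handles.

#include <cstdio>

// Self tail recursion: the recursive call is the last action, so the compiler
// can turn it into a jump back to the top of the function, i.e. a plain loop.
static unsigned long factorial(unsigned long n, unsigned long acc) {
    if (n <= 1) return acc;
    return factorial(n - 1, acc * n);   // tail position: nothing left to do afterwards
}

// General tail calls: isEven and isOdd call *each other* in tail position.
// Reusing the caller's frame here is more than a goto back into the same
// function; this is what I mean by general tail calling.
static bool isOdd(unsigned long n);
static bool isEven(unsigned long n) { return n == 0 ? true  : isOdd(n - 1); }
static bool isOdd (unsigned long n) { return n == 0 ? false : isEven(n - 1); }

int main() {
    std::printf("%lu\n", factorial(10, 1));
    // With an argument this large, the next line should only run to completion
    // (rather than overflowing the stack) if the compiler turned the mutual
    // tail calls into jumps -- compare an unoptimized build with an optimized one.
    std::printf("%d\n", (int) isEven(50000000UL));
    return 0;
}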

Thanks,

-- Dave S. [www.smallscript.org]