From: "Brian Candler" <B.Candler / pobox.com>


> On Mon, Nov 04, 2002 at 10:02:12PM +0900, Gavin Sinclair wrote:
>
> (an all-encompassing statement if ever I saw one :-)
>
> > a += b is always, by definition, in any language, the same
> > *behaviour* as a = a + b.  If you disagree, you are being absurd.
>
> By whose definition? Certainly not the language designers and standards
> bodies. As a counter-example: in C++, if an object has both "+" and "+="
> methods, the language definition does not require them to have the same
> behaviour, and therefore in general they don't. QED.

By C's definition.  The original and the best.  C++ is of course a
counter-example, but C invented the "+=" notation, not C++.  You can't just
redefine things and expect your definitions to be accepted elsewhere.

I'm not being silly here (don't worry, I'm not treating the debate as a fully
serious matter either :).  "+=" wouldn't have any "meaning" (cultural meaning, I
suppose) if it weren't for C.  And its meaning stood unchallenged for long
enough to be protected against future would-be language designers.

"+=" has no meaning in mathematics.  C invented it from mid-air.  (Heck, maybe
it got it from somewhere else, but that's how it will be remembered.)

Of course, Ruby is free to ascribe any meaning it likes to "+=", but it would
then become another aberration.


> In those languages, "+=" is just a label ( / token / method name).

Naturally.


> There may be some sort of shared understanding between certain groups of
> programmers that "+" and "+=" ought to work in the way you describe, but
> it's not a "definition". Programmers in certain languages may reasonably
> assume that a += b modifies the object referred to by 'a' in place, which is
> not the behaviour you describe. It's just part of the conventions of that
> particular language.

I highlighted "behaviour" because the top-level behaviours of
  a += b       and
  a = a + b
are (or *ought* to be) the same.  That is, both have the same effect, value-wise
(not necessarily identity-wise) on "a" and "b".  The existence or otherwise of
intermediate objects is of no concern to "behaviour", IMO.  My intention for
"behaviour" was probably not clear.  I hope it is clear and agreeable now.  In
my defence, the tone was set by the original poster suggesting, if I understood
correctly, that it is perfectly fair to define "+=" to mean something entirely
different from what one would expect.  Your submission, while good and
relevant, deals at a lower level.
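
To make that concrete, a minimal sketch with Strings (equal? used only to
show object identity):

  a  = "foo"
  b  = "bar"
  a0 = a            # keep a handle on the original object
  a += b            # same top-level effect as: a = a + b
  a                 # => "foobar"  (the value both forms produce)
  a.equal?(a0)      # => false     (a now names a *new* object)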


> Actually, many languages don't even _have_ a "+=" operator, which also
> breaks your statement. However I will assume "any language" is replaced with
> "any language which has these operators" in the above :-)

That restriction of domain is fair.


> > The only possible difference is
> > in efficiency: run-time hacks that save an intermediate object.  This is
> > sometimes a valid concern, and there is a remedy: the << operator.
> >
> >   a = a + b       (old object 'a' lost)
> >   a << b          (same behaviour, *possibly* more efficient)
> >
> > This behaviour is defined for Strings and Arrays
>
> Those two examples do not exhibit the same behaviour in the presence of
> other variables containing references to the same object as the original
> 'a'. That's not a "run-time hack", that's a difference in semantics: i.e. "I
> want to create a new object and leave the old one alone", as opposed to "I
> want to modify the object itself". The difference is important and not just
> one of efficiency, unless we know for definite that there are no such
> additional references.

I'm not sure what you are getting at in the last sentence.  But I'm sure you
understand my point that implementation should not be the first concern for the
programmer who merely wants to get things right.
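
For the record, the aliasing difference you describe sketches easily with
Strings:

  a = "foo"
  c = a             # c and a refer to the same String
  a += "bar"        # rebinds a to a new object
  c                 # => "foo"     (c still sees the original)

  a = "foo"
  c = a
  a << "bar"        # mutates the shared object in place
  c                 # => "foobar"  (c sees the change too)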


> I don't think we should restrict ourselves to Strings and Arrays, since you
> made a sweeping statement about operators in general. Even if there is a
> shared understanding of what we mean by "+" or "+=" on a String, it might
> not be so clear for a Foo object. Furthermore, the "<<" example is
> meaningless where 'a' references an immutable object; it may calculate some
> value, but can't modify the object in place.

Well, of course.  "<<" does not even have the kind of widely understood meaning
in the programming community that "+=" does, as you point out below.

And my thesis is that it's perfectly clear what "+=" means on a Foo object.
I'm sure you know what that is ... ;)
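
A minimal sketch, assuming only a hypothetical Foo that defines "+" (Ruby
expands a += b into a = a + b, so "+" is all it needs):

  class Foo
    attr_reader :value
    def initialize(value)
      @value = value
    end
    def +(other)
      Foo.new(value + other.value)   # returns a new Foo; never mutates
    end
  end

  a = Foo.new(1)
  b = Foo.new(2)
  a += b            # expands to: a = a + b
  a.value           # => 3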


> Anyway, I think the point I'm trying to make is:
>
> 1. There are two valid ways to apply an operator and argument to an object:
>    create a new object, or modify in place (if it is mutable)
> 2. You need two different operators to be able to distinguish them
> 3. Whether they are called "+" and "+=" (as a C++ programmer might use),
>    or "+" and "<<" (as a Ruby programmer would use, since he doesn't have
>    the option of giving different semantics to "+="), or "foo" and "foo!",
>    is pretty irrelevant.

I disagree with (2).  If a program can work without requiring such a
distinction, and it always can, then to say "you need two different
operators ..." is an overstatement.

The good thing about "+=" being hard-coded in Ruby is that it is a sensible
default.  It has become part of the Ruby experience to expect "<<" or some
other method to perform effective in-place assignment.  (Besides, who needs
"-=", "%=", etc. on general objects?)  And the "foo!" and "bar?" method
notations are a stroke of genius, IMO.  (Borrowed from Lisp, aren't they?)
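
Ruby's core classes show the convention at work:

  a = [3, 1, 2]
  a.sort            # returns a *new* sorted Array; a is untouched
  a.sort!           # the "dangerous" variant: sorts a in place
  a                 # => [1, 2, 3]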

> It's just convention. To me, coming from a C background, it looks odd that
> "integer left shift by X" also means by convention "append value X", but you
> get used to it as part of learning the language.
>
> Regards,
>
> Brian.


My point is: when I read that Ruby allows operator overloading, I trembled.
When I read (two seconds later) that Ruby does not allow assignment-operator
overloading, I was relieved :)

Cheers,
Gavin