On Tue, Dec 03, 2002 at 06:30:22AM +0900, William Djaja Tjokroaminata wrote:
> Daniel Carrera <dcarrera / math.umd.edu> wrote:
> > True.  The whole reason why '++' was invented in C was that it would
> > reduce the number of CPU instructions:
> 
> > In C, 'a = a + 1' does this:
> 
> >    1 -> Store "1" in a memory location.
> >    + -> Add 1 to 'a' and store the result in another location.
> >    = -> take the contents from this location and put them at the
> >         location of 'a'.
> 
> > But 'a += 1' does this:
> >    1  -> Store "1" in a memory location.
> >    += -> Add it to 'a' and put it directly in the location of 'a',
> >          without the intermediate step.
> 
> > Whereas 'a++' does this:
> 
> >    ++ -> Increment 'a' by 1 and put the result directly in the
> >          location of 'a', without the intermediate location.
> 
> 
> > Surely, this reason doesn't apply to Ruby. :-)
> 
> > Daniel.
> 
> Hmmm..., I don't think the explanation above is strictly accurate.  Current
> C compilers are free to translate 'a = a + 1' into the same code as 'a++'.
> Because of this, we don't have to worry about whether to write 'a = a + 1'
> or 'a++'.  A good C compiler will translate either of these into an
> assembly/machine equivalent of 'a++'.  You can ask/search comp.lang.c if
> you like.  C is one of the languages where a lot of things are not
> strictly defined.

At the time C was first used, compilers were much dumber.
Nowadays any decent compiler will optimize all of these cases.
 

-- 
 _           _                             
| |__   __ _| |_ ___ _ __ ___   __ _ _ __  
| '_ \ / _` | __/ __| '_ ` _ \ / _` | '_ \ 
| |_) | (_| | |_\__ \ | | | | | (_| | | | |
|_.__/ \__,_|\__|___/_| |_| |_|\__,_|_| |_|
	Running Debian GNU/Linux Sid (unstable)
batsman dot geo at yahoo dot com

The bug system is not a release-specific entity.  Users of
Debian 1.3.1 use the same bug tracking system as users of hamm.
	-- James Troup <troup / debian.org>