Thomas Sawyer wrote:

> Hi James,
> 
> Honestly, James, I didn't even think about contacting you --I didn't
> even suspect I'd be getting this deep into a conversation about it.

Then I'm happy to offer a revelation in your mindset. And welcome to the 
Internet world.


> However, while your example gave me at
> least something to go on, the presentation as a whole left me more
> confused and thus less enthusiastic about the whole DCI idea.

That is in fact probably good! That is what should be happening as you 
make a paradigm shift. Expect to be confused and surprised.

I'll come back to this notion throughout the mail.

My example not only extemporized on Trygve's, but also went into some new 
territory. And I think some people got lost there.

I know this sounds like rationalization, but in fact, most of my 
teaching over the past thirty years has involved some kind of paradigm shift, 
and I am attentive to the need for this puzzlement. If you read "Zen and 
the Art of Motorcycle Maintenance," Pirsig calls it "stuckness." 
Learning new things is not always a linear process.


> I apologize for compacting this opinion into the one word "awful", I
> suppose that is too disparaging a term, and for that I apologize. So
> please accept this explanation in its place and take it for what it's
> worth.

You can call me whatever you want to my face, and in this discussion, I 
hope you do, you ignoramus :-) I am more hurt by the fact that you 
might have seized an opportunity for dialog and learning but instead 
cast aspersions mumbling in a corner somewhere. I feel hurt on behalf of 
the community. We need your criticism -- but in dialog.


>> Words mean things, and use case is not just Swedish for scenario.
> 
> The definition seems fine. I just don't see why logging-in can't be
> viewed as a goal in itself.

As most use case experts would describe it, you don't go home at the end 
of the day and tell your daughter, "Guess what I did today! I got logged 
in!" Logging in isn't essential to the value chain. The goals in 
goal-driven use cases drive towards such goals. To focus on things at 
this level is noise.

Alistair Cockburn's "Writing Effective Use Cases" offers the theory and practice 
of this perspective in a very satisfying way. Give it a read and come 
back and we can discuss it more.


>> > He's splitting hairs over words and as much as he thinks DCI is
>> > so cool, I'm not sure he actually "gets it" himself.
>>
>> Can you translate that into some delineated professional feedback?
> 
> Ok. This is my take. DCI seems to me like an idea with a lot of
> potential. But I don't think it's an all-or-nothing kind of thing.

No one is saying that it is! There is still a place for your 
grandfather's object-oriented programming. There are architectures we 
can call event-driven architectures, where the interesting operations 
are all atomic. A shapes editor is a good example. It in fact has no use 
cases. Most operations are atomic and trivial: "Change the color of this 
shape." "Resize this shape." "Create this shape." Once in a great while 
you might have operations involving multiple methods of multiple shapes, 
and that's where DCI comes in.
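
Here is a rough Ruby sketch of that contrast (the names Shape and 
AlignLeftContext are mine, purely for illustration): the everyday 
operations are atomic methods on a single object, and only the rare 
multi-object algorithm warrants a context.

  # Atomic, "grandfather's OO" operations: one method on one object.
  class Shape
    attr_accessor :x, :y, :color

    def initialize(x:, y:, color: :black)
      @x, @y, @color = x, y, color
    end

    def recolor(new_color)
      @color = new_color
    end
  end

  # The rare multi-object operation -- aligning several shapes to the
  # leftmost one -- is where a context earns its keep: it names the
  # algorithm and coordinates the objects that take part in it.
  class AlignLeftContext
    def initialize(shapes)
      @shapes = shapes
    end

    def execute
      left_edge = @shapes.map(&:x).min
      @shapes.each { |s| s.x = left_edge }
    end
  end

  shapes = [Shape.new(x: 5, y: 1), Shape.new(x: 12, y: 2), Shape.new(x: 3, y: 3)]
  shapes.first.recolor(:red)            # atomic and trivial
  AlignLeftContext.new(shapes).execute  # the multi-shape algorithm lives in a context
  p shapes.map(&:x)                     # => [3, 3, 3]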

More broadly, not everything is object-oriented, whether DCI or your 
grandfather's object-oriented design. Procedural design still has a 
place. So do state machines -- and that has nothing to do with objects 
(for example, in protocol design). So does rule-based computation. So 
does functional programming as in SASL or KRC. Anyone who attacks the 
world with one weapon alone will be defeated. I wrote a whole book on 
this several years ago called "Multi-Paradigm Design."

Why did you think I felt that DCI was the only way?


> get the feeling that you so badly want DCI to be a Major Paradigm
> Shift that you might be pushing its concepts too far.

Given that it took me seven years to learn it, I can speak strongly for 
the fact that it is a paradigm shift. That it is a paradigm shift has 
nothing to do with its payoff -- only with the mode of understanding.

You may be seeing some of my pedagogical techniques that are commonly 
used to extemporize new ideas. There are important ways of giving 
emphasis in a one-hour presentation that would not arise if we were 
pairing at a keyboard.


> For instance, to say that an account is not an object... going all
> the way back to the bad old days of COBOL, an account has always been
> treated as an object, even if not coded in OOP form.

I don't think so. Have you ever worked in finance or banking? I think 
your statement holds true only in lectures by college professors or by 
people outside of finance. In the latter case, the object isn't the 
account, but the name of the account (sometimes called its account 
number). Can you give me a single example in the software of a real 
financial institution where the balance is actually stored as a data 
member of an object in memory?

After you answer that, we can discuss the difference between 
object-oriented programming and class-oriented programming. One of the 
major features of DCI is that it supports object-oriented programming. 
Very few languages support object-oriented programming directly: they 
support class-oriented programming instead.
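
Ruby happens to be one of the few popular languages where you can see 
the difference in the mechanics. A minimal sketch, with names of my own 
choosing: behavior given to a class versus behavior given to one object 
for the duration of a role.

  # Class-oriented: behavior is declared once, on the class, for every
  # instance, forever.
  class SavingsAccount
    attr_reader :account_number

    def initialize(account_number)
      @account_number = account_number
    end
  end

  # Object-oriented in the sense DCI cares about: behavior attached to a
  # particular object for a particular interaction.  In Ruby, `extend`
  # mixes a module into one object's singleton class, leaving every other
  # instance alone.
  module TransferSource
    def transfer_out(amount)
      puts "withdrawing #{amount} from #{account_number}"
    end
  end

  a = SavingsAccount.new("O991540")
  b = SavingsAccount.new("393497654")

  a.extend(TransferSource)              # only this object plays the role
  a.transfer_out(100)                   # => withdrawing 100 from O991540
  puts b.respond_to?(:transfer_out)     # => false -- the class was never touched

Per-object `extend` is only one common way Ruby implementations bind 
roles; it is a mechanism, not the idea itself.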


> That's because
> banks treat accounts as objects --people have them, they have ID
> numbers, etc.

You are perhaps suffering from being trapped in the old paradigm, where 
everything must be an object. Stick with me here for a few paragraphs as 
I explore this. You would have been an excellent student of Kant. In 
fact, if you look at people's mental models of their worlds, they think 
of much more than objects. An ID number is not an object: it is a handle 
to an object. An account is not an object: it is a role that several 
objects together can play.

Do the following experiment. Go to someone who has recently done a money 
transfer in their bank. Ask them to give you a general user story for 
it. Inevitably, I find people saying, "I decide on an amount, and then I 
withdraw that money from one account and put it into another account." If 
they speak more precisely they will use terms like "source account" and 
"destination account." A "source account" is not an object. It is a role 
that something can play (like a savings account).
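
To make that concrete, here is a minimal DCI-flavored Ruby sketch of the 
transfer story (my own toy names, not a canonical implementation), 
treating the accounts, for the moment, as plain objects that can play 
the roles:

  # Data: dumb account objects that merely hold entries.
  class Account
    attr_reader :entries

    def initialize(opening_balance)
      @entries = [opening_balance]
    end

    def balance
      entries.sum
    end
  end

  # Roles: behavior that only matters during the transfer.
  module SourceAccount
    def withdraw(amount)
      raise "insufficient funds" if balance < amount
      entries << -amount
    end
  end

  module DestinationAccount
    def deposit(amount)
      entries << amount
    end
  end

  # Context: the use case itself.  It maps roles onto objects and runs
  # the algorithm.
  class TransferMoney
    def initialize(source:, destination:, amount:)
      @source      = source.extend(SourceAccount)
      @destination = destination.extend(DestinationAccount)
      @amount      = amount
    end

    def execute
      @source.withdraw(@amount)
      @destination.deposit(@amount)
    end
  end

  savings    = Account.new(500)
  investment = Account.new(1000)
  TransferMoney.new(source: savings, destination: investment, amount: 200).execute
  p [savings.balance, investment.balance]   # => [300, 1200]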

For the time being we can pretend that savings account is a class whose 
objects can play the role of "source account." That fits the simple 
Kantian model of the world, where most things must be objects. But if we 
go more deeply, to the level of that domain (in the sense of DDD) we 
find that it is not an object -- certainly not in the "D" sense of "Data" 
in DCI. It is a collection of use cases, of behaviors. That makes it a 
DCI context. An account is a context (like an account number) in which 
we can carry out algorithms (like transferring money) with other 
concepts (like other accounts, or transaction logs, or audit trails).
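
A rough sketch of that deeper view, again with hypothetical names of my 
own: the only data objects are transaction logs, and "account" is a 
context -- an account number plus the algorithms we run against those 
logs. The balance is computed, never stored as a data member.

  # The only real data objects: append-only transaction logs (the audit trail).
  class TransactionLog
    def initialize
      @postings = []   # e.g. [[:credit, 500], [:debit, 200]]
    end

    def post(kind, amount)
      @postings << [kind, amount]
    end

    def net
      @postings.sum { |kind, amount| kind == :credit ? amount : -amount }
    end
  end

  # "Account" as a context: an account number plus the algorithms we can
  # run against the logs.  No balance field lives here.
  class AccountContext
    def initialize(account_number, log)
      @account_number = account_number
      @log = log
    end

    def balance
      @log.net
    end

    def deposit(amount)
      @log.post(:credit, amount)
    end

    def transfer_to(other, amount)
      @log.post(:debit, amount)
      other.deposit(amount)
    end
  end

  savings  = AccountContext.new("O991540",
               TransactionLog.new.tap { |l| l.post(:credit, 500) })
  checking = AccountContext.new("393497654", TransactionLog.new)

  savings.transfer_to(checking, 200)
  p [savings.balance, checking.balance]   # => [300, 200] -- computed from the logs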

Where I think you are confused is that you take these elements of your 
mental model and call them objects. That is also what Kant did. To do 
so is at least not useful, and probably isn't even right. Also part of 
your mental model is the mapping from the roles "source account" and 
"destination account" onto their respective accounts (my savings account 
#O991540 and my investment account #393497654). Also part of your mental 
model is those things called accounts, and you think of them as objects. 
The problem is that the people who implement those systems don't 
implement them as objects, but as something else -- the objects are at a 
much lower level. That's the real world of real financial software 
today. Really. At least in my example -- which I think is representative. 
If you have a different example where an account can be an object in 
memory, I find that interesting, but I don't think it's germane to this 
discussion.

One reason I can justify DCI as a paradigm shift is that it differently 
translates the mental models of end users and programmers into code. 
Something called object-oriented programming was one way of doing it in 
the 1980s. This is another way of doing it, with different elements of 
the model. And these elements don't come out of thin air. Rebecca 
Wirfs-Brock and Trygve were having a discussion on the Hurtigruten about 
ten years ago as I was listening, and they concluded that objects don't 
have responsibilities -- roles do. That is the essence of CRC cards. Most 
people think that CRC stands for "Class, Responsibility, and 
Collaborator." It does not. Rebecca wanted to call them RRR cards 
(Roles, Responsibilities and Relationships) but the C really stuck. What 
it really means is "candidate object," and it's a role. (I just verified 
this with her when she and I went together to dinner with Trygve and 
Gertrud at Øredev last year.) So you have an entire industry focusing on 
classes because they misunderstood an acronym (or popularizers of the 
technique misunderstood it -- I won't mention any names). Focus on roles 
-- the roles that objects can play -- not classes.

There are other ideas I could bring to bear from the field of user 
experience, from Brenda Laurel's writings, and from other staples of 
object-orientation that the Java-duped public doesn't read, but take my 
word for it. This is not the object-orientation you learned in college.



> Then please make it available!

It's there (somewhere) on object-composition (are you subscribed?); it 
appears in its entirety in the book I have coming out in June (there are 
several drafts scattered here and there on the web).

Part of understanding a new paradigm is going beyond a one-hour talk to 
do your homework. Read the Artima article. There are three or four 
articles you should probably read at Trygve's web site. If you're really 
interested, come to the next course I offer on it. Just for you, I'll 
give you a free seat.


> Please don't feel that I am criticizing *you* --take it for what it
> is, my personal critique of *your presentation*.

Likewise, I am not criticizing your understanding, but how you channeled 
the criticism.


>> If you feel the Ruby style bears improvement I am open to suggestions.
> 
> I can certainly offer some suggestions, but I would have to
> understand DCI better to go beyond the surface.

I suspected as much. My guess is that your sense of distaste for the code 
may also stem from being stuck in the old paradigm.


>> > (P.S. I also think this is much more like AOP then Coplien is willing
>> > to admit.)
>>
>> Second, I'd like to know why; and first, I'd like to know why it's
>> important. Again, I think you are making the mistake that another poster
>> here warns about: confusing the language mechanisms with the design
>> ideas.
> 
> The goal of AOP is to come at a problem orthogonal to the traditional
> OOP direction. In AOP you are organizing code into aspects. These
> aspects are like contexts in DCI. Aspects are composed of advice, code
> injected into classes/objects by wrapping other methods.
> There are clear similarities. DCI goes a bit further by injecting
> methods whole cloth, and in doing so decomposes "aspects" into a
> context and set of roles. (Actually that might be useful, might DCI
> roles make use of AOP's concept of advice too?)

No, because it makes the aspectualized code unreadable. As to why DCI is 
more than AOP, read my above long segment on mental models and 
paradigms. DCI is not just a programming trick to reflect cross-cutting. 
It can represent much higher dimensions of cross-cutting than AOP can 
and, because of the Contextual mapping of roles to objects, is much more 
dynamic. They are barely in the same league. I think you are confusing 
one of the mechanisms of Aspects with one common mechanism used to 
implement DCI in some programming languages.
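
If it helps, here is that mechanical difference in a few lines of Ruby 
(my own framing, nothing canonical): AOP-style advice wraps a method 
that already exists, for every instance of the class; a DCI role gives 
whole methods to one object, for one interaction.

  class Account
    attr_reader :balance

    def initialize(balance)
      @balance = balance
    end

    def withdraw(amount)
      @balance -= amount
    end
  end

  # AOP-flavored advice: wrap an existing method, class-wide.
  # Module#prepend puts the wrapper in front of Account#withdraw for
  # every instance.
  module WithdrawalLogging
    def withdraw(amount)
      puts "advice: about to withdraw #{amount}"
      super
    end
  end
  Account.prepend(WithdrawalLogging)

  # DCI-flavored role: whole methods bound to ONE object, for ONE interaction.
  module SourceAccount
    def transfer_out(amount)
      withdraw(amount)   # uses the object's own behavior underneath
    end
  end

  a = Account.new(500)
  b = Account.new(500)

  a.extend(SourceAccount)               # only `a` plays the role
  a.transfer_out(100)                   # advice fires, then the role method's work is done
  puts b.respond_to?(:transfer_out)     # => false -- b never took the role
  puts b.withdraw(50)                   # => 450, but the class-wide advice still fires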

This, too, is the sign of a new paradigm: everyone tries to describe it 
in terms of what they know. Some people say that DCI is like mixins. 
Some say it is like multi-paradigm design. Some say it is like aspects, 
some like dependency injection, and a million other things. What gives 
me the most grief is that there is a tiny bit of truth in each of these 
claims, just enough to keep people from making the necessary mental 
leap. To do that requires digging into it and trying it. It is like 
learning a martial art: no number of PowerPoint slides will get you 
there. You have to feel it in your bones. Most people don't even "get" 
object-oriented programming in their bones. And this transition scares 
programmers because their identity is so tied up in understanding new 
technology. That leads people to do really amusing things. For example, 
someone may not yet understand DCI well enough even to give suggestions 
on how to clean up Ruby code that illustrates it, yet feels qualified to 
criticize one of its inventors as "not getting it." O, how human we can 
be...

But back to the point... The real goal of AOP (I have this from Gregor 
Kiczales personally) is to shock people into taking reflection 
seriously. It was supposed to scare people from Java back into Lisp, 
where you can express these things cleanly. Cutpoints and wrappers and 
whoppers are native to CLOS, for example. The problem is that people 
weren't shocked: they embraced the scaffolding. I view this as one of 
the best examples of the sheep-like stupidity of the industry, 
collectively.

I'm going to get back to my vacation and attacking the fjelds of Norway.

Kind regards, Cope