JavaRanch » Java Forums » Engineering » Agile and Other Processes
Author

Can't RUP be an instance of Waterfall ?

HS Thomas
Ranch Hand

Joined: May 15, 2002
Posts: 3404
Following the post in this thread:
XP practise - Leaving out javadocs etc.
Difficult trying to imagine working XP-style in a Waterfall project.
Though, there is such a thing as planned releases - identify the high-ROI, high-risk features first and tackle those, then go on to the next set of high-ROI, high-risk features. That could be done agilely.
So even if the stakeholders think it's Waterfall, it's actually XP.
If XP is an instance of RUP, can't RUP be an instance of Waterfall ?
regards
[ June 30, 2003: Message edited by: HS Thomas ]
Stan James
(instanceof Sidekick)
Ranch Hand

Joined: Jan 29, 2003
Posts: 8791
The famous Whale Diagram (or lump chart or whatever we call it this week) shows RUP phases across the top, iterations across the bottom. The height of each lump is the amount of activity going on at that time. You never complete one phase and hand off to the next. They all overlap significantly. Iterative development is shown clearly across the bottom. So I think the short answer is "no" it cannot willingly be made waterfall. Doing so would be process abuse and should be reported to the authorities. But I suppose you could get a waterfall impression if you read a text book that covered one phase or activity at a time (to make tests easier to write) without emphasizing this chart.
Here's the RUP Overview page that picture came from, just for context.
Scott Ambler adds more disciplines and phases to his Enterprise Unified Process customization. (Scroll down for the modified lump chart.) He's one of the very agile guys, and he didn't change much at all about the parts he kept. More "not waterfall" evidence, I think.


A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Even if RUP could be tailored waterfallish, XP certainly can't - if only because in Waterfall testing comes last.


The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice. Day by day, what you do is who you become. Your integrity is your destiny - it is the light that guides your way. - Heraclitus
Stan James
(instanceof Sidekick)
Ranch Hand

Joined: Jan 29, 2003
Posts: 8791
Hey, you edited while I was typing! Is that fair?
XP is the anti-waterfall incarnate.
Kent Beck had some neat diagrams somewhere showing a complete waterfall for every iteration, then why not one for every pairing session, and why not one for every unit test? A waterfall set to the Three Minute Waltz!
HS Thomas
Ranch Hand

Joined: May 15, 2002
Posts: 3404
I was thinking along the lines of tailoring Waterfall RUPish.
Break the Waterfall into smaller waterfalls and call them planned releases A, B, C.
A must exist before B can start, etc.
So planned release A, i.e. instance RUP A, starts - Inception, Elaboration, with testing being done all along, through to Transition and perhaps Production.
Instance RUP B commences, probably while RUP A is in Construction, and so on. At some point RUP B absorbs RUP A's models and code and continues to do so until RUP A is in production.
Configuration Management would need tighter control - probably merge the latest changes at the end of the day and be aware of what those changes are. I should think they are used to that anyway.
I'm not convinced it isn't possible.
Which part of the above scenario doesn't sound XP to y'all?
An XPer probably wouldn't be affected by the focus shifting from small-system XP to bigger RUP or even bigger Waterfall. He/she will by then be involved in the next RUP iteration.

regards
[ July 01, 2003: Message edited by: HS Thomas ]
HS Thomas
Ranch Hand

Joined: May 15, 2002
Posts: 3404
Hey, you edited while I was typing! Is that fair?
And you've since posted before I replied!
But I think we are aligned now.
I can see the following are allowed to happen for each RUP iteration described above.
The Planning Game: business and programming professionals collaborate to estimate the time for short tasks (called "stories" in XP)
Small Releases: a ship cycle measured in weeks rather than years
Metaphor: "a single overarching metaphor" to guide development substitutes for a formal architectural specification
Simple Design: No provision in the code for future changes or flexibility
Testing: Every piece of code must be exercised by unit tests when written, and the full suite of tests when integrated
Refactoring: Any piece of code is subject to rewriting to make it simpler
Pair Programming: All production code written by two developers jointly
Collective Ownership: Any developer has the right to change any piece of code on the project
Continuous Integration: Code is integrated to the main line every few hours
40-hour week: No overtime
On-site Customer: A business person dedicated to the project
Coding standards: One standard per project
Apart, that is, from Ilja's point about Testing coming last in Waterfall. If they want to do one final Test, it isn't going to hurt. If this last phase keeps on producing no results, i.e. no bugs, or no significant ones, they'll scrap this phase and the process becomes even more RUPish.

regards
[ June 30, 2003: Message edited by: HS Thomas ]
HS Thomas
Ranch Hand

Joined: May 15, 2002
Posts: 3404
Or even thinking of it as a change programme from Waterfall to RUP?? Doesn't the above look feasible?
regards
[ July 01, 2003: Message edited by: HS Thomas ]
HS Thomas
Ranch Hand

Joined: May 15, 2002
Posts: 3404
Kent Beck had some neat diagrams somewhere showing a complete waterfall for every iteration, then why not one for every pairing session, and why not one for every unit test? A waterfall set to the Three Minute Waltz!

Don't quite comprendo. Are you agreeing or disagreeing ?
regards
HS Thomas
Ranch Hand

Joined: May 15, 2002
Posts: 3404
"By the way, RUP is not a process, but a process library or process framework. One is supposed to subset the library to get the instance to be used." --AlistairCockburn
which I think is what Ilja meant by RUP being tailored waterfallish... and Stan by the Waltz scenario.
So Waterfall can only be an instance of RUP - an all-singing, all-dancing (i.e. heavy) instance.
No, I see XP wouldn't work with waterfall.
regards
Stan James
(instanceof Sidekick)
Ranch Hand

Joined: Jan 29, 2003
Posts: 8791
Don't quite comprendo. Are you agreeing or disagreeing ?

You're right - that's not real clear. I thought he got the waterfalls so short they were silly and the waterfall concept went away. What are you going to do, get a stopwatch and "Design for the next 60 seconds ... Go!" ?
Our team went to three week iterations, trying to be more agile. Somebody immediately posted a breakdown: Days 1-4 are requirements, days 5-8 are design, days 9-12 are code, days 13-15 are test. I about screamed! Fortunately nobody paid any attention to that, and we're doing a better job of analyze, design, test, document and code throughout the three weeks.
Forgot to go back to the original question - You can certainly make RUP iterative and use (almost) all of XP in RUP. But I don't think RUP would appreciate going the other way - making it waterfall. And the thing that identifies waterfall for me is setting a date that one activity is done with a handoff to start the next. "Today analysis is done and we start design!" The RUP lump chart does show trying to get the bulk of requirements early (XP would smooth that out) but does not show getting 100% on Victory Over Requirements day.
[ July 02, 2003: Message edited by: Stan James ]
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Stan James:
And the thing that identifies waterfall for me is setting a date that one activity is done with a handoff to start the next. "Today analysis is done and we start design!"

Yeah - and don't forget the "... so we are already one fourth through the project, and perfectly on time!" :roll:
http://www.agilealliance.org/articles/articles/RufusAndRupert.pdf
The RUP lump chart does show trying to get the bulk of requirements early (XP would smooth that out) but does not show getting 100% on Victory Over Requirements day.

In fact, an XP team would also want to get the bulk of requirements early, to do the first release planning game. It just wouldn't want to go into as much detail, and would allow for more (and cheaper) variance later in development.
Stan James
(instanceof Sidekick)
Ranch Hand

Joined: Jan 29, 2003
Posts: 8791
Right!
In our agile approach, early on we attempt to know all the stories with just enough detail to estimate. The stories are filled in with detailed requirements during every iteration so the curve is flatter.

In our old approach, early on we tried to know all the requirements, a much larger lump at the start of the curve!
But we still consider ourselves a RUP shop.
[ July 03, 2003: Message edited by: Stan James ]
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Stan James:
Right!
In our agile approach, early on we attempt to know all the stories with just enough detail to estimate.
...
In our old approach, early on we tried to know all the requirements

So, as described above, the only difference between the two methods is that in one case you admit you don't have all the information. :-)
Arguments like these always struck me as right up there with space indentation (for which the correct answer is, was, and always shall be, 3 spaces :-p ). I think by now everyone is willing to admit you can't know 100% of the requirements up-front; they will change. The more information you have earlier, the better. But certain types of information are more volatile than others. For any and every moment in time, there's an inflection point at which the value of additional information, i.e. additional bits of information, at this moment is negative.*
For example, at the very start of the program, when you're looking at the business plan, you don't care about a lot of fine-grained details. If you work them out now, you're going to waste more time than you help.
I should note that there's a presumption that the information which causes the volatility is not available at the time. In truth, what it means is that the cost of gaining those bits is prohibitive. At time zero, you could, theoretically, create a thousand different versions of the product, see what people like best, and then have significant, stable information. It is simply impractical to do so. The value of the additional information is negative, either because the error bar is too high, so it is uninformative, and/or because the cost of decreasing the error bar is too great.
My point is that all processes today try to get "just enough" information, whether they are agile or not.

*As time remaining approaches zero, the inflection point moves off to either infinity or some fixed number of bits limited by the total amount of information in the project. You could also argue that post-completion, it moves to 0 bits.

--Mark
HS Thomas
Ranch Hand

Joined: May 15, 2002
Posts: 3404
posted by Mark Herschberg:
For any and every moment in time, there's an inflection point at which the value of additional information, i.e. additional bits of information, at this moment is negative.*

Negative in what ways? Information overload ?

But isn't this unavoidable, in that you are going to have to deal with ALL that information when your XP'd systems grow big and you need to start maintaining them?
All those bits don't just go away - they probably just get more volatile until they start being dealt with. (Bubble away as much as you like, we shall just ignore you?)
What is the best way of managing this ?
What do the different methodologies recommend ?

regards
[ July 04, 2003: Message edited by: HS Thomas ]
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by HS Thomas:

Negative in what ways? Information overload ?

Yes. Consider debating the color scheme the second week of a 2 year project. It takes time, it requires work to write up and document, and for other developers to read. You probably shouldn't be worrying about colors at that point.
Originally posted by HS Thomas:

But isn't this unavoidable, in that you are going to have to deal with ALL that information when your XP'd systems grow big and you need to start maintaining it.

No, you won't. Let's assume that in a fully defined system, e.g. a perfectly described system after completion, there are X bits of information. If you try to describe it earlier, you will still need X bits, but you will also add more bits to describe the uncertainty. And since some of your original bits would be wrong, you'd have incorrect information (further adding to the cost of information).
Originally posted by HS Thomas:

All those bits don't just go away probably just gets more volatile until they start being dealt with.(Bubble away as much as you like, we shall just ignore you ?)

No, they get less volatile. At the beginning of the project, you can write any code you want. As more code gets written, you start moving down a path. The further along you are, the fewer options you have. Well, maybe that's what you consider to be dealing with them. They can't get more volatile than at the start of the project.

Originally posted by HS Thomas:

What is the best way of managing this ?
What do the different methodologies recommend ?


I'll have to think about this. There's no easy answer.

--Mark
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Mark Herschberg:
My point is that all processes today try to get "just enough" information, whether they are agile or not.

Yes - they only define "just enough" differently... :roll:
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by HS Thomas:
What is the best way of managing this ?
What do the different methodologies recommend ?

Most old-fashioned methodologies recommend pinning down as much as possible early, so that it doesn't need to be changed later.
Agile methodologies recommend delaying decisions (and therefore investment in them) as long as possible. This only works reasonably well when you manage to keep the system flexible, of course.
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Ilja Preuss:

Yes - they only define "just enough" differently...

Exactly. Just like everyone agrees that you should use indentation... :-)

Originally posted by Ilja Preuss:

Agile methodologies recommend delaying decisions (and therefore investment in them) as long as possible. This only works reasonably well when you manage to keep the system flexible, of course.

Ah, I think I can perhaps better articulate one of my issues with agile methodologies, and XP in particular. I am a big fan of "do it once and throw it away." The agile view is, "don't decide now, because the amount of information we will have with which to make the decision can only increase with time." I agree that information increases. In my personal life, I delay decisions as long as I can afford to. Certainly there are cases where, once the user first sees the program, he says, "that's not what I wanted." Agile helps mitigate this risk.
However, I think that because we (developers as a whole) are not good at understanding all the issues, it is important to do it, and screw it up, and learn. Yes, XP supports a spike solution, but I'm not convinced that is sufficient. The devil is in the details, and until you've worked with the solution for 2 weeks and found its limitations, you may not discover all the critical information. I think XP does not provide much safety in that situation.
--Mark
HS Thomas
Ranch Hand

Joined: May 15, 2002
Posts: 3404
The devil is in the details, and until you've worked with the solution for 2 weeks and found its limitations, you may not discover all the critical information. I think XP does not provide much safety in that situation.

The acceptance tests should reveal the limitations sooner (hopefully) rather than later.
Perhaps XP should pay attention to the acceptance criteria early on, if it doesn't currently.
There is a shared interest, after all.
regards
[ July 04, 2003: Message edited by: HS Thomas ]
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Mark Herschberg:
However, I think because we (developers as a whole) are not good at understanding all the issues, it is important to do it, and screw it up, and learn.

Yes, of course. It seems to me that agile processes take this into account more than heavyweight ones.
As I see it, conventional methodologies are not focused on actually *doing* things early, but on *collecting* requirements early. In the extreme case, Waterfall, the development team is expected to *fully understand* the requirements before they even write the first line of code.
XP, on the other hand, focuses on getting the first cut of the system working as fast as possible. It also concentrates on doing *the most important* features first. After the first week of the project, the developers will get feedback on the "rightness" of the system.
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
I disagree. Complexity comes not from one or two "hard cases" but rather from many interactions. I think the difficulty is from all the subsystems accidentally "bumping into" or "being in the way of" each other. Doing the first few key cases won't turn it up; it's only once all the systems are working together with some degree of complexity. (Just how much is needed is, of course, the $64,000 question.)
By the time the issues are discovered, the system is way out of alignment. And no, you can't just say refactor. I'm talking about major system changes.
--Mark
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Oh, ok. I thought you were talking about getting the requirements right. Seems we are talking about getting the architecture right, instead...
Originally posted by Mark Herschberg:
By the time the issues are discovered, the system is way out of alignment.

Can you elaborate on an example what a system way out of alignment would look like and how an XP team would reach there?
BTW, even if that happens, it's still the case that the most valuable features are working and what is hard to integrate are features the Customer values less. Doesn't seem to be the *worst* case to me...
And no, you can't just say refactor. I'm talking about major system changes.

I wouldn't say that. I *would* propose trying to do it in small incremental steps. I would suggest reflecting on why a major change needs to be done, and why it's hard.
In fact, at work we just did such a major system change. It turned out to be hard just because there was a specific assumption scattered through the whole system - the design wasn't Simple. So, what could have been done in some days with a well decoupled system took several weeks.
Lasse Koskela
author
Sheriff

Joined: Jan 23, 2002
Posts: 11962
    
Originally posted by Ilja Preuss:
the design wasn't Simple

This reminds me of something I'm still a bit unclear about wrt XP...
To me it sounds like "the simplest thing that could possibly work" leads to exactly the kind of situation where a multi-week effort could have been just two days if someone had coded in a loosely coupled way earlier. It's easy to say that the code being produced should be the simplest thing that could possibly work and that it should be decoupled, but there's a big but: I'd say most developers would probably consider a tightly coupled implementation simpler than a more abstract, loosely coupled one. Of course there is refactoring (mercilessly), but isn't that again contradicting the "simplest..." approach to some degree, if the refactorings are aimed at producing a more loosely coupled design (i.e. more abstraction and implicitness)?
???


Author of Test Driven (2007) and Effective Unit Testing (2013) [Blog] [HowToAskQuestionsOnJavaRanch]
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Ilja Preuss:

I wouldn't say that. I *would* propose trying to do it in small incremental steps. I would suggest reflecting on why a major change needs to be done, and why it's hard.

See, that's a generic cop out. I can simply say, "use traditional, time-bounded waterfall; if the requirements change, then simply refactor or make the changes in small, incremental steps."
Originally posted by Ilja Preuss:
Can you elaborate on an example what a system way out of alignment would look like and how an XP team would reach there?

Well, think about what Lasse mentioned. Imagine you build a system with a certain amount of coupling that seems fine for the first 10 cases. But when you get to the next few, you realize it was too tightly coupled. Now you need to go through the whole system and decouple it. That's a major change - possibly one which would be easier to do from scratch.
Originally posted by Ilja Preuss:
BTW, even if that happens, it's still the case that the most valuable features are working and what is hard to integrate are features the Customer values less. Doesn't seem to be the *worst* case to me...

Well, you almost have to order everything; but even if you get the 15 most important features, maybe the system simply isn't commercially viable without the first 20, so it may not be that great a state.

--Mark
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Lasse Koskela:
To me it sounds like "the simplest thing that could possibly work" leads to exactly the kind of situation where a multi-week effort could have been just two days if someone had coded in a loosely coupled way earlier. It's easy to say that the code being produced should be the simplest thing that could possibly work and that it should be decoupled, but there's a big but: I'd say most developers would probably consider a tightly coupled implementation simpler than a more abstract, loosely coupled one. Of course there is refactoring (mercilessly), but isn't that again contradicting the "simplest..." approach to some degree, if the refactorings are aimed at producing a more loosely coupled design (i.e. more abstraction and implicitness)?

No, it isn't contradicting. Don't confuse "the simplest thing" with "the easiest thing to do". In XP, Simple Design is explicitly defined as (in priority order):
1. runs all the tests
2. doesn't contain duplication
3. expresses intent
4. minimal number of classes and methods
Let's explore how the four parts lead to a design with high cohesion and low coupling:
1. runs all the tests
The main intent of this rule is that the code needs to do what it's meant to do. But it is also implying that *there are tests* - both at the system and the unit level. To test at the unit level, you need to isolate your units - they need to be decoupled.
2. doesn't contain duplication
This doesn't only mean obviously identical code. It also refers to duplicated logic, duplicated concepts, duplicated knowledge. To remove duplication, you need many small, well decoupled components.
3. expresses intent
Every concept that exists in the code should be explicitly named. A class which does two different things, for example, doesn't communicate as well as two classes each doing one thing.
4. minimal number of classes and methods
This rule simply means that every part of the code which isn't necessary to conform to the first three rules is cruft and should be removed.
Did that help?
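To make the duplication point concrete, here is a hypothetical sketch (the class names are mine, not from the thread): when two classes each carry the same money-formatting knowledge, removing that duplication naturally forces out a small component that both depend on.

```java
// Hypothetical illustration of rule 2 (no duplication) driving decoupling.
// Before: Invoice and Receipt each contained their own money-formatting
// logic. Extracting it leaves one small component both depend on.
class MoneyFormatter {
    // The formatting rule now lives in exactly one place.
    String format(long cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
}

class Invoice {
    private final MoneyFormatter formatter = new MoneyFormatter();

    String totalLine(long cents) {
        return "Total: " + formatter.format(cents);
    }
}

class Receipt {
    private final MoneyFormatter formatter = new MoneyFormatter();

    String paidLine(long cents) {
        return "Paid: " + formatter.format(cents);
    }
}
```

A unit test can now exercise MoneyFormatter in isolation, which is exactly the decoupling effect that rule 1 (runs all the tests) rewards.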
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Mark Herschberg:
Originally posted by Ilja Preuss:
I wouldn't say that. I *would* propose trying to do it in small incremental steps. I would suggest reflecting on why a major change needs to be done, and why it's hard.
--------------------------------------------------------------------------------
See, that's a generic cop out. I can simply say, "use traditional, time-bounded waterfall; if the requirements change, then simply refactor or make the chanages in small, inceremental steps."

It seems I wasn't clear. I would suggest reflecting on why the change was hard, because I'd think that we had screwed up. I'd think that if there was a way to make the change easier, it would have been by better conforming to the rules mentioned above. I would want to find out what kind of duplication our design contained, or where it failed to communicate intent, so that I'd be more aware of such issues next time.
Imagine you build a system with a certain amount of coupling that seems fine for the first 10 cases. But when you get to the next few, you realize it was too tightly coupled. Now you need to go through the whole system and decouple it. That's a major change. possible one which would be easier to do from scratch.

I can't imagine a system that conforms to the rules of Simple Design and still contains so much coupling that a major change was necessary. Can you? What would it look like?
Well, you almost have to order everything; but even if you get the 15 most important features, maybe the system simply isn't commercially vaiable without the first 20, so it may not be that great a state.

If the system isn't commercially viable without the first 20 features, the developers certainly were aware of all those features from the beginning. They even needed to have *some* idea of how to implement them, or else they couldn't have estimated them. I can't imagine them screwing up so badly that implementing the last 5 would be a major problem.
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Ilja Preuss:

2. doesn't contain duplication
This doesn't only mean obviously identical code. It also refers to duplicated logic, duplicated concepts, duplicated knowledge. To remove duplication, you need many small, well decoupled components.

This does not necessarily imply low coupling. You can tightly couple classes without identical code.

Originally posted by Ilja Preuss:

3. expresses intent
Every concept that exists in the code should be explicitly named. A class which does two different things, for example, doesn't communicate as well as two classes each doing one thing.

Granted, to follow this rule, each class needs a single purpose. But again, it can still be rigid and tightly coupled.
Originally posted by Ilja Preuss:

4. minimal number of classes and methods
This rule simply means that every part of the code which isn't necessary to conform to the first three rules is cruft and should be removed.

This still doesn't imply low coupling. In fact, with tight coupling, you can usually get away with fewer classes.
Originally posted by Ilja Preuss:

It seems I wasn't clear. I would suggest reflecting on why the change was hard, because I'd think that we had screwed up. I'd think that if there was a way to make the change easier, it would have been by better conforming to the rules mentioned above. I would want to find out what kind of duplication our design contained, or where it failed to communicate intent, so that I'd be more aware of such issues next time.

OK, yes. My point is that this is possible and recommended in any methodology. Again, like refactoring, there's nothing inherent in XP which gives it a monopoly on this tool.
Originally posted by Ilja Preuss:

I can't imagine a system that conforms to the rules of Simple Design and still contains so much coupling that a major change was necessary. Can you? What would it look like?

See, now here's the copout again. You're arguing that if they follow the rules of Simple Design, they shouldn't have those problems. Those rules aren't unique to XP. The "runs all the tests" is very XP, although the intention is "does what it should", and that is true for code in every project. Doesn't contain duplication - that's been around for decades. Minimal number of classes and methods - again, standard. Expresses intent - also pretty common these days. All of these principles are coding-focused principles, as opposed to being derived from the particular methodology.
Originally posted by Ilja Preuss:

If the system isn't commercially viable without the first 20 features, the developers certainly were aware of all those features from the beginning. They even needed to have *some* idea of how to implement them, or else they couldn't have estimated them. I can't imagine them screwing up so badly that implementing the last 5 would be a major problem.

Yeah, that's why in the history of software projects, there's never been a case where people knew what they had to do but screwed it up anyway. :-p
--Mark
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Mark Herschberg:
2. doesn't contain duplication
This doesn't only mean obviously identical code. It also refers to duplicated logic, duplicated concepts, duplicated knowledge. To remove duplication, you need many small, well decoupled components.
--------------------------------------------------------------------------------
This does not necessarily imply low coupling. You can tightly couple classes without identical code.

Yes. But, as I wrote above, duplication is not bound to identical code.
For example, you don't want knowledge about the used database duplicated over the system. To remove this duplication, you would introduce an abstraction from the database, for example by using a persistence layer. This will reduce coupling in the system. Switching to another persistence mechanism isn't a "major change in the system" anymore.
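A minimal sketch of that persistence-layer idea (the interface and class names here are hypothetical, not from any real project): callers depend only on an abstraction, so swapping the storage mechanism touches one class rather than the whole system.

```java
// Hypothetical sketch: knowledge of the storage mechanism sits behind
// one interface instead of being duplicated across the system.
interface OrderStore {
    void save(String orderId);
    boolean exists(String orderId);
}

// One implementation; a JDBC- or file-backed store could be swapped in
// without touching any calling code.
class InMemoryOrderStore implements OrderStore {
    private final java.util.Set<String> ids = new java.util.HashSet<>();

    public void save(String orderId) { ids.add(orderId); }
    public boolean exists(String orderId) { return ids.contains(orderId); }
}

// The rest of the system depends only on the abstraction.
class OrderService {
    private final OrderStore store;

    OrderService(OrderStore store) { this.store = store; }

    void place(String orderId) { store.save(orderId); }
}
```

Removing the duplicated database knowledge is what produces the low coupling; the decoupling falls out of the no-duplication rule rather than being a separate goal.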
3. expresses intent
Every concept that exists in the code should be explicitly named. A class which does two different things, for example, doesn't communicate as well as two classes each doing one thing.
--------------------------------------------------------------------------------
Granted, to follow this rule, each class needs a single purpose. But again, it can still be rigid and tightly coupled.

Less coupled than one big class. Even less when there isn't duplication allowed. Still less when you need to unit test your classes.
Can you show us some code which conforms to the rules of Simple Design but isn't highly decoupled? That would be really illuminating!
4. minimal number of classes and methods
This rule simply means that every part of the code which isn't necessary to conform to the first three rules is cruft and should be removed.
--------------------------------------------------------------------------------
This still doesn't imply low coupling. In fact, with tight coupling, you can usually get away with fewer classes.

Yes. That's why it has lowest priority.

My point is that is possible and recommended in any methodology.

Mhh, to me it seemed as if your point was that the rules of XP could be insufficient. I don't think they are.
Also, XP doesn't only *recommend* these things. It does *require* them. Anyone, at any stage of the project, is required to look out for and remove duplication, and to make the code more communicative.
I honestly don't know of any other methodology doing this.
Again, like refactoring, there's nothing inherent in XP which gives it a monopoly on this tool.

Of course not. Nobody said that the single parts of XP are that revolutionary. What I am saying is that it works.


I can't imagine a system that conforms to the rules of Simple Design and still contains so much coupling that a major change was necessary. Can you? What would it look like?
--------------------------------------------------------------------------------
See, now here's the copout again. You're arguing that if they follow the rules of Simple Design, they shouldn't have those problems.

Yes.
Those rules aren't unique to XP.

What was new to me was the consistency in which they were applied.
The "runs all the tests" is very XP, although the intetion is "does what it should" and that is true for code in every project.

In my experience, it isn't true for most projects. What is true for most projects is "we cross our fingers that it does what it should".
And "does what it should" is not the only intention, as that alone doesn't give you the decoupling effect of unit testing or the safety to refactor mercilessly.
If "does what it should" was the intention, it would have been called "does what it should". It *has* been called "runs all the tests".
Doesn't contain duplication, that's been around for decades. [...] Expresses intent; also pretty common these days.

Lip service - probably. Day to day practice - I don't think so...
One very common practice that heavily conflicts with the above rules is "don't touch a running system": I added a new feature. It has some similarities with an existing one, but the existing code isn't in the shape to be reused as-is. What will I do - refactor the existing code, or leave it untouched and tolerate the duplication provoked by cut-and-paste programming?
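To make the dilemma concrete, here is an invented sketch of the refactor-first option (all names are hypothetical): the common logic is extracted once, so the new feature can reuse it instead of carrying a pasted copy.

```java
// Hypothetical example: both an existing feature and a new one need to
// normalize a user name. Cut-and-paste would duplicate the trim/
// lower-case logic; extracting it first removes the duplication.
class Names {
    // The shared code, pulled out so both features can use it as-is.
    static String normalize(String raw) {
        return raw.trim().toLowerCase();
    }
}

class ExistingFeature {
    String greet(String rawName) {
        return "Hello, " + Names.normalize(rawName);
    }
}

class NewFeature {
    boolean isReserved(String rawName) {
        return Names.normalize(rawName).equals("admin");
    }
}
```

The "don't touch it" alternative would leave a second, slightly drifted copy of `normalize` inside `NewFeature` - exactly the duplication the rules forbid.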
Minimal number of classes and methods, again, standard.

I don't think so. "Put in that hook - it will make including feature X next month much easier" is very common. And I know many programmers who will leave in dead code "just in case".
All of these principles are coding-focused principles, as opposed to ones derived from a particular methodology.

I don't follow you. They are the practices recommended by XP. They do work - very well, if I may say so. What is the problem???

If the system isn't commercially viable without the first 20 features, the developers were certainly aware of all those features from the beginning. They even needed to have *some* idea of how to implement them, or they couldn't have estimated them. I can't imagine them screwing up so badly that implementing the last 5 would be a major problem.
--------------------------------------------------------------------------------
Yeah, that's why in the history of software projects, there's never been a case where people knew what they had to do but screwed it up anyway. :-p

I can imagine teams screwing up without problems.
I can't imagine a reasonably experienced team, knowing the critical features in advance and following the rules of Simple Design conscientiously, screwing up the design in a way that they can't implement the last 5 critical features in a reasonable time frame.
Perhaps I just have a limited imagination...
Lasse Koskela
author
Sheriff

Joined: Jan 23, 2002
Posts: 11962
    
Ilja, Mark, thanks. You're once again helping me a lot.
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Interestingly, there is currently a similar discussion on the XP mailing list: http://groups.yahoo.com/group/extremeprogramming/message/75620
And also a related discussion at comp.software.extreme-programming.
[ July 07, 2003: Message edited by: Ilja Preuss ]
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Ilja Preuss:

One very common practice that heavily conflicts with the above rules is "don't touch a running system": I added a new feature. It has some similarities with an existing one, but the existing code isn't in the shape to be reused as-is. What will I do - refactor the existing code, or leave it untouched and tolerate the duplication provoked by cut-and-paste programming?

Show me one methodology which recommends this practice. Again, bad practitioners are different from a bad methodology.

Originally posted by Ilja Preuss:

I don't think so. "Put in that hook - it will make including feature X next month much easier" is very common. And I know many programmers who will leave in dead code "just in case".

In the first case, it's a matter of perceived need. I do tend to subscribe more to the XP theory of "we'll do it only when necessary." But again, no methodology ever says do it a month ahead of time. Even in waterfall-like methodologies of "build the architecture first", there is nothing which requires it be fixed by a certain date. XP has no monopoly on this practice; it was simply adopted first by those who happen to do it.

Originally posted by Ilja Preuss:

All of these principles are coding-focused principles, as opposed to ones derived from a particular methodology.
I don't follow you. They are the practices recommended by XP. They do work - very well, if I may say so. What is the problem???

As I've been trying to show in the above, no other methodology explicitly prohibits these actions, and I would say most, if not all, implicitly encourage them as appropriate.
--Mark
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
OK, you've been asking for a while for some examples. Many of the projects I've worked on would not have been suited for XP. As of a few days ago, my current project would, I believe, fail under XP.
The project is due roughly Sept 1. A few days ago we learned the following:
1) The targeted number of simultaneous users is now 1000, not 100. (This is due to external political factors.)
2) The major subsystems we had expected and started working on (block trading, auction system), are pretty minor, and we can get away with much simpler systems.
3) We have to implement new major subsystems instead. For example, different runs of the system will now offer different functionality (kind of like loading up different maps or scenarios in a video game). This means a slew of new GUI components, the back end to serve them, and modified behavior, under different simulations, of existing components.
These features are close to all or nothing, e.g. Warcraft needs computer AI, maps, sprites, health scoring, etc. If any of those are missing, the game just isn't worth playing. Because this is an educational game, we have similar boundaries.
This is exactly what I've been talking about. Major functionality changes to the system, late in the project. The deadline cannot be extended (classes have a fixed start date), and functionality cannot be significantly scaled back.*
I can do this, but come Sept 1, the code won't be as pretty as I'd like it. It won't be a mess, and I can fix it up after that. But for the next few weeks, my code base will be in violation of XP rules.
Some might try to argue, "well, if you had been using XP all along, it would take less effort to make these changes." I disagree. In general, at any time for any project, there exists a code base of complexity C, a change in feature requirements of size D and a time constraint of T, such that making D changes to C exceeds T. It's just that for many projects the constraints which satisfy this are very large D's and small T's which are not realistic to the true nature of the problem, and so we ignore them as academic.
--Mark

*Can we be late a week? Yeah, they can probably stall for 1 week. 2 weeks? No chance. The profs are already way out on a limb. Can we cut back on functionality by having a crappy but functional interface? Maybe, but at HBS student evaluations are critical to continued employment. If the MBA students (i.e. pointy-haired bosses in training) don't think the software is slick, it will negatively impact their perception of the class.
Junilu Lacar
Bartender

Joined: Feb 26, 2001
Posts: 4458
    

Originally posted by Mark Herschberg:
I can do this, but come Sept 1, the code won't be as pretty as I'd like it. It won't be a mess, and I can fix it up after that. But for the next few weeks, my code base will be in violation of XP rules.
Some might try to argue, "well, if you had been using XP all along, it would take less effort to make these changes." I disagree. In general, at any time for any project, there exists a code base of complexity C, a change in feature requirements of size D and a time constraint of T, such that making D changes to C exceeds T. It's just that for many projects the constraints which satisfy this are very large D's and small T's which are not realistic to the true nature of the problem, and so we ignore them as academic.

This just doesn't seem to add up...
I just spent two weeks fixing up and adding functionality to our web app. I started out with a complex morass of code and JSPs. I am fairly certain that the first week I spent refactoring/unit testing saved me at least two weeks of pain and suffering had I just forged ahead and tried to work in new features on top of the original mess. I actually spent some time trying to do just that, but it quickly got to a point where I was breaking more things than I was fixing. After refactoring, I was able to implement all new features in a week.
Are you saying that you can implement all the necessary changes by Sept 1 if you don't spend time doing refactoring, writing unit tests, and other XP practices that may be applicable to your situation?
What kind of approach do you plan to take then that will allow you to meet your deadline?


Junilu - [How to Ask Questions] [How to Answer Questions]
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Junilu Lacar:

What kind of approach do you plan to take then that will allow you to meet your deadline?


Good programming practices are an investment. I know that historically real estate has a higher return than a bond. However, real estate takes 20 years to mature. If I'm already 55 and looking to retire in 10 years, I can't count on real estate, even though overall it is a more solid investment. Likewise, real estate requires a minimum of tens of thousands of dollars in order to properly diversify your holdings. Someone looking to invest $5,000 simply won't be able to afford to--even though real estate is, in general, preferable to bonds.
I know that code duplication is bad. And, since I'm not perfect, let's say I have 5% of my code duplicated. (I have no idea if that number is accurate, even to an order of magnitude; I'm just using it as an example.) Now, adding the next features without additional duplication will cost me 3 weeks of refactoring. I estimate that even being faster because I refactored, I still won't get in all the necessary features. On the other hand, if I do allow some additional code duplication, say raising it to 8%, I can, even with the penalty I pay for having duplicated code, complete it in 6 weeks.
Under XP, I couldn't do the latter because, as Ilja pointed out, duplicated code is simply never allowed, even if doing so would meet the end goals of the project. Hence, under XP, I would not meet my deadline.
For the record, I have no idea if I have to duplicate code, or how much. More likely there will be cases where I should clean up some old code, but simply won't because there's no time prior to Sept 1. I will also probably write less than elegant code. It would be nice to always keep things in "a pure state." But for the state of this project, that goal conflicts with the primary goal of getting it to work.
Let me give you another example. We'll follow the XP philosophy and "take it to the extreme." At any given moment (except when in the middle of writing a line of code) the code must always compile. That is, as soon as you type that semicolon, the program must compile without error. This means you can't realize you need a variable, first use it in an if statement, and then go back and code the definition, because for 5 seconds the code won't compile. If you follow that rule, you will never get a compile error. You will also be overly constrained. Sometimes as you go, you write your code "out of execution order", so to speak.
This is similar. There are short-term periods where it's OK to break a rule which makes sense over a longer period. It's just that in this case, the short-term period is a few weeks.
Because projects and their constraints are dynamic, it is often OK to break the rules at times, so long as you've clearly defined when and how, and you fix any shortcomings of the temporary process so it doesn't conflict with long-term goals. Most people forget to do the latter (e.g. "don't worry, we'll go back and document it at the end of the project"). But XP does not want these rules to be broken.
--Mark
Stan James
(instanceof Sidekick)
Ranch Hand

Joined: Jan 29, 2003
Posts: 8791
One thing I like about agile stuff and the story-sized tasks in the planning game is that they make progress and the backlog very visible. It should make the impact of the recent priority changes very obvious. If the new plan cannot be done at the old velocity, somebody should ask how you hope to achieve the new velocity, and your answer is by cutting corners. We've been burned by sloppy work in the past, and the customer now says things like "I can wait a couple weeks if it will keep the quality up." It takes highly visible progress and backlog to make this so obvious that nobody can quibble.
Part of the XP coder's bill of rights is the right to produce quality work. That's a pretty serious thing to sacrifice.
Sorry, that sounded preachy. You're in a tight spot fer sure and I really am sympathetic!
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Stan James:
If the new plan cannot be done at the old velocity...

Good analogy. Basically, any time you change requirements and refactor, you've changed your velocity. While you were humming, you produced so many features per day. Now you need to stop short and change direction. XP would have you bank the corner.
If you're running a 5-mile race by going between two cones 5 meters apart, you do need to bank the turns. But if it's a 400m run, or if you're trying to beat a time limit and you've got maybe 2-3 round trips left, you stop short and cut. Sure, it's more wear and tear in the long run, and not recommended when you have miles to go, but in the short term it's more efficient in terms of distance per unit time.

Originally posted by Stan James:
We've been burned by sloppy work in the past

That's the other fairly unique thing about this project: it is unlikely to be significantly used in the future. (Unfortunate, but this is how academia sometimes works.) I don't mind doing some short selling of the future (for this application; in general, I'm not so short-sighted).
--Mark
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Mark Herschberg:
I know that code duplication is bad. And, since I'm not perfect, let's say I have 5% of my code duplicated. (I have no idea if that number is accurate, even to an order of magnitude; I'm just using it as an example.) Now, adding the next features without additional duplication will cost me 3 weeks of refactoring. I estimate that even being faster because I refactored, I still won't get in all the necessary features. On the other hand, if I do allow some additional code duplication, say raising it to 8%, I can, even with the penalty I pay for having duplicated code, complete it in 6 weeks.

Yes, I understand. I just wonder where those numbers are coming from. How sure do you feel about them?
Under XP, I couldn't do the latter because, as Ilja pointed out, duplicated code is simply never allowed, even if doing so would meet the end goals of the project. Hence, under XP, I would not meet my deadline.

Of course you would be "allowed to do so under XP" - after all "they are just rules".
Nevertheless, I would like to question the usefulness of doing so. In my experience, you can't go faster by cutting corners for more than a couple of days in software development. But if it works for you - well, your mileage may vary...
By XP does not want these rules to be broken.

Well, it certainly strongly discourages breaking some of the rules. Mostly because it seems that bad things happen when you do so.
But the above is not exactly the kind of example I was asking for. It is an example of when it might be reasonable to break some of the rules of XP for a short amount of time.
What I was looking for was an example of where following the rules of XP would give you a design/architecture that couldn't handle a feature change as well as an upfront designed one...
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Mark Herschberg:
1) The targeted number of simultaneous users is now 1000, not 100. (This is due to external political factors.)
2) The major subsystems we had expected and started working on (block trading, auction system), are pretty minor, and we can get away with much simpler systems.
3) We have to implement new major sub systems instead. For example, different runs of the system will now offer different functionality (kind of like loading up different maps or senarios in a video game). This means a slew of new GUI components, the back end to serve them, and modified behavior, under different simulations, of existing components.

OK, what parts of your existing design help you implement the above features? What of that would be missing if you had designed the system XP-style?

These features are close to all or nothing, e.g. Warcraft needs computer AI, maps, sprites, health scoring, etc. If any of those are missing, the game just isn't worth playing.

Yes - probably that would be a bad way to partition the features. Perhaps there is a different way?
Can we be late a week? Yeah, they can probably stall for 1 week. 2 weeks? No chance. The profs are already way out on a limb.

Why are they?
In my experience, there is *always* a way to cut scope. There are *always* features which are critical and others which are "nice to have" - or OK to have in a later version.
Can you tell us more about the project?
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Mark Herschberg:
--------------------------------------------------------------------------------
Originally posted by Ilja Preuss:
One very common practice that heavily conflicts with the above rules is "don't touch a running system": I added a new feature. It has some similarities with an existing one, but the existing code isn't in the shape to be reused as-is. What will I do - refactor the existing code, or leave it untouched and tolerate the duplication provoked by cut-and-paste programming?
--------------------------------------------------------------------------------
Show me one methodology which recommends this practice. Again, bad practicioners are different then a bad methodology.

You are right, I don't know of a methodology explicitly recommending this practice (though I wouldn't bet much on its non-existence).
But there *is* a big difference, nevertheless, in what they (directly or indirectly) encourage.
For example, a methodology which requires you to go through a heavy process for a small design change will probably not exactly motivate you to make the change. I have actually been guilty here: late in the design phase of a university project, we found that an additional class would make much sense; but introducing it would have meant reworking dozens of documents, so we didn't do it. Fortunately, we never had to code that design...
Do you know of any methodology which encourages removing duplication (for example) as authoritatively as XP?
As I've been trying to show in the above, no other methodology explicitly prohibits these actions, and I would say most, if not all, implicitly encourage them as appropriate.

Then perhaps the only difference is that XP *explicitly* asks for them.
What I don't understand in this case is why so many people think that XP can't work, if when following the other methodologies they are doing the practices anyway...