I'm trying to learn how to write Use Cases. (I'm an experienced code-writer, but I don't have much experience with formal product specification.)
I've been reading books, tutorials, even took an academic course... Unfortunately, they all focus on silly "toy" examples (like "Elevator" or "Withdraw money from bank")... I feel I'm missing a lot of info before I can apply this to complex "real world" projects!
Just to demonstrate, consider a Use Case for "withdrawing money from the bank": the "toy" Use Cases are usually 5-10 lines. At best, they'd do some validity checks ("exceptions" table). In the real world, a *precise* description could be 200 lines... it might involve: communication between banks / branches; special permission to exceed the overdraft limit (it happens in my country); moving money between savings accounts and checking accounts (which might have complex rules of its own), etc...
I was hoping to find a tutorial that could answer "real-world" questions such as: - Should my Use Case describe all the subtle details, or should it be kept short and readable, even at the price of missing some details? - Are there guidelines for breaking my 200-line UC into various sub-Use-Cases? What's the recommended size for such a sub-UC? (Kind of like the way programming books recommend that Java methods don't exceed 20 lines of code... just a rough estimate, but it's something to start from.)
These are just examples, to demonstrate why a 10-line example doesn't prepare me for the real world... So, could anyone please recommend *realistic* tutorials (books / web sites) with real-world, complex examples?
This is a very common mistake. A use case is not a flowchart; it's not a description of every possible thing that could happen. It's a description of one case. So to continue with your example, here are a few use cases:
1) A customer wants to make a withdrawal from the checking account. They have enough money in their checking account to cover it. They get the money and a receipt.
2) A customer wants to withdraw money, but they don't have enough to cover it. They have not obtained special permission to exceed the overdraft limit. They are told that they need to get permission, and they don't get any money.
3) A customer wants to withdraw more than their balance, but they have obtained special permission to do so. They are given their money and a receipt.
And so on. Each situation is its own use case; you can implement them one at a time.
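To make "each situation is its own use case" concrete, here is a minimal sketch of how the three scenarios above could each become one executable check. Everything here (the `Account` class, the `withdraw` method, the overdraft rule) is invented purely for illustration, not taken from any real banking system:

```python
# Hypothetical sketch: each use case above maps to one scenario check.
# All names and rules here are invented for illustration.

class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance, overdraft_permission=False):
        self.balance = balance
        self.overdraft_permission = overdraft_permission

    def withdraw(self, amount):
        # Refuse withdrawals past the balance unless permission was granted.
        if amount > self.balance and not self.overdraft_permission:
            raise InsufficientFunds("special permission required")
        self.balance -= amount
        return amount  # a real system would also produce a receipt

# Use case 1: enough money in the account -> customer gets the cash.
acct = Account(balance=100)
assert acct.withdraw(40) == 40 and acct.balance == 60

# Use case 2: not enough money, no special permission -> refused.
acct = Account(balance=10)
try:
    acct.withdraw(40)
    assert False, "expected the withdrawal to be refused"
except InsufficientFunds:
    pass

# Use case 3: over the balance, but permission was granted -> allowed.
acct = Account(balance=10, overdraft_permission=True)
assert acct.withdraw(40) == 40 and acct.balance == -30
print("all three use-case scenarios behave as described")
```

The point of the sketch is just that each use case is independently checkable and independently deliverable; you could implement and demonstrate scenario 1 before scenarios 2 and 3 exist.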
Originally posted by Ernest Friedman-Hill: And so on. Each situation is its own use case; you can implement them one at a time.
And that is a pretty key idea to what makes use cases a viable tool. Each individual use case should describe something of value to the business or end user, something your boss, customer, client, etc. would be willing to (at least conceptually) pay for, receive delivery on, and walk away. I find that helps me to avoid having use cases for things that are too specific or fine-grained (e.g. the "login" use case, which absolutely isn't a use case unless your client is somebody like the NSA). Technical issues need to get resolved, but use cases aren't the swiss army knife for addressing all requirements issues.
Use case size & granularity are tough issues and there is no single right answer. I like Alistair Cockburn's "Writing Effective Use Cases" for an even-handed discussion of many options. His site has lots of good stuff, too. Maybe start here: http://alistair.cockburn.us/crystal/wiki/UseCases
We have been working essentially the same use cases through three implementations over 10 years now, and they've gotten pretty crufty. A few cautions: Don't get them too big or too formal. It's easy to get a fancy template with lots of verbiage that doesn't contribute to capturing requirements. Avoid complex includes and extends relationships; they are not subroutines or abstract objects. Only use them when a word processor is the best way to communicate a requirement; we have lots of text describing things that would be far easier to read in a spreadsheet.
By now I'd look hard at agile practices like stories and user acceptance tests to capture requirements instead of use cases. If you have the option to replace use cases with something else you'll get lots of ideas right here.
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
BTW: I had to comment on "login use case ... absolutely isn't a use case". It "absolutely" is in our world, because the opening rules of the game were: 1) You can only write code to meet written requirements and 2) Use cases are the only place to write functional requirements. Either of those may sound like anti-rules to you, but we live with what we're given.
Reid M. Pinchback
Originally posted by Stan James: Either of those may sound like anti-rules to you, but we live with what we're given.
Yeah, kinda hard to follow any kind of process that differs from what your boss says. I always love silly dialogues that follow the behavioural dynamic "X is a Y because by fiat I declare X is a Y, even if X really isn't a Y, but now I can tell other people we have Y". Just like watching news stories about federal politics. ;-)
Setting aside organizational politics (which of course you never really get to do, but let's pretend for all of 30 seconds) and focusing more on "can we think of ways to get even better at what we do than we are now, because not getting better would just be sad?", I tend to have my doubts about something being called a use case if you couldn't choose when you were going to implement it, and the customer/user couldn't feel empowered to alter the priority or eliminate it from the work to be performed during some phase. Use cases are a tool, and if neither the user nor the developer has much power over how they are used, they start to sound like a tool with a lot of rigidity, and hence reduced value. I'm not arguing against having good requirements, just against the metaphorical equivalent of using the wooden end of a hammer to pound in nails.
My reason for mentioning the specific case of login is that it has two characteristics that raise immediate concerns, particularly when you are dealing with abstract use cases.
1. At its simplest level login is arguably a near-unavoidable technology artifact, not far removed from stuff like plugging in the computer and booting it up; it is rather on the critical path for many projects. As such mention of it *in* use cases could make sense but then it would be hard to remove from *any* of the use cases initiated by user activity (ignoring refactoring duplication out of use cases by using use case inclusions). I'd cringe at the idea of being on any project team that was so constrained that they wouldn't implement a login because it was only mentioned as a step in a use case instead of having the political clout of being labelled as a use case all on its own. Just like I would figure my boss really expected me to implement persistence mentioned throughout the use cases without having an explicit "persist the data" use case.
2. Login is "technical" speak, not "business" speak, and it talks about one very specific activity - using some kind of form or dialog or prompted sequence of inputs to enter in information to get "logged in". It says nothing about business processes whatsoever, and that is bad - it really misses the whole point of having use cases. In reality login is one small piece of the larger cross-cutting concerns around authentication and authorization. My concern is that a login use-case shoves people immediately towards a particular implementation instead of doing the appropriate level of information gathering to figure out what is really needed for authentication/authorization. AA is going to appear in bits and pieces scattered across multiple use cases; it would rarely be possible to partition the relevant issues into a single use case. It is a very small and direct step from a login use-case to thinking of AA as a single widget to implement, when in non-trivial systems nothing could be further from the truth. AA is the classic aspect.
When something is called a use case but in practice doing so shoves you in the wrong direction during design, that seems like getting less value from a methodology just so folks can say they follow the methodology. I'm not implying you don't realize that; it just sounds like your boss has drifted a bit in that direction. I'd rather just hunt for ways to get more value from a methodology than put energy into psyching myself out with terminology labelling games. There is an unfortunate tendency in any adoption of a methodology to shove all the square pegs into round holes.
You can work for a boss that calls everything a use case, and you are definitely stuck with that. You can also (depending on the boss) work with them to make a better development process over time. I'd rather be optimistic and hope for the latter!
Our login has some extra wrinkles. It was important for the business to specify how we authenticated, i.e. which userid/password pair we used. There are several choices in the company, and they have a big investment in one of them. We also have some fallback business procedures; if federated authentication fails users can access most of the system (but not all) with internal authentication.
Quite a lot of interesting stuff happens between the time you enter your userid & password and the system is responsive and ready to run. We needed some place to document these and the login flow worked fine.
Regarding "the whole point of having use cases" we may simply disagree on what that is. Technical speak and business speak don't enter in our definition. We have use cases for web services we expose to the rest of the world that are pretty technical. The point is to record requirements. The use case document has bi-directional links to the Rational Requisite Pro requirements database, which is our primary tool for tracking what has been built and tested. We got all this straight from Rational mentors. It's far from agile and I constantly argue to change it, but it is what it is.
We also had some problems with consultants staying up all night and coding things that were not in the requirements. Made everybody crazy. The rule about coding only what you can trace from the requirements was in response to that, and has probably lived about 9 years too long.
I'm not sure where "if you couldn't choose when you were going to implement it, and the customer/user couldn't feel empowered to alter the priority or eliminate it from the work to be performed during some phase." fits in. We use use cases as requirements docs, not engineering tasks. (That was a big change for us at one time!) We implemented most of login several iterations into the project and some parts much later.
Bottom line, I avoid language like "the whole point" or "absolutely not" because there are no absolutes or universal "whole points". It's all local.
Reid M. Pinchback
Originally posted by Stan James: [Bottom line, I avoid language like "the whole point" or "absolutely not" because there are no absolutes or universal "whole points".
I used to think that, and practiced it for way too many years. Then I finally got a clue and realized that putting conditional qualifications on everything I said just allowed things to never change, and definitely didn't make the projects any easier or the products any better. Now if I have reasons for thinking something is a really numb idea, I just say I think it is a really numb idea and why. Others can accept or reject my conclusions; both outcomes are fine with me. I just find that if one isn't willing to have a few random glimmers of faith in their own conclusions, everybody else is perfectly justified in thinking less of, or paying less attention to, those conclusions than the author apparently does themselves. Continually self-apologetic or qualified phrasing makes for really tedious listening or reading. Your mileage clearly varies.
In this case, I accept your description of your process. I'm glad it worked for you. I just believe that as a common practice, as opposed to the specifics of your project, it sounds like something people do when they take another kind of requirements usage and jam it into the term "use case", maybe because at some past point in organization history people got all nervous about the words "requirements" or "specification" so they re-labelled their documents. Your login system definitely sounded like it had some complexity, so for you it made sense to create a use case (much like my original example of somebody who would have a complex login activity - NSA). No law stopping that. My point was, and will remain - do you think *your* process for using what your organization terms "use cases" is the strongest process it could be, as in you would recommend to others that they try and adopt your way of using use cases? I'm digging for the anecdotes and conclusions here, not to be an SOB but out of interest... are you saying if you were the boss or the customer you would use the process exactly the same way your boss does now?
Thanks for the interesting observations.
That's exactly what I'm looking for - are there any books/sites that offer an intelligent discussion similar to the one in this thread...?
It doesn't even have to come up with definite answers: if you consider the above "login" discussion, IMHO each side has some valid points... I could live with a tutorial that mentions the different approaches, even if it doesn't tell you which approach to take. It's still *way* better than those silly "Elevator" examples...
Originally posted by Stan James: Regarding "the whole point of having use cases" we may simply disagree on what that is. Technical speak and business speak don't enter in our definition. We have use cases for web services we expose to the rest of the world that are pretty technical. The point is to record requirements. ... Bottom line, I avoid language like "the whole point" or "absolutely not" because there are no absolutes or universal "whole points". It's all local.
Declaring something as "local" can become an easy way of excusing ingrained weak habits in an organization. Actually I'll put a pretty deep stake in the ground over this, because I think I'm very justified in believing that an important point is being missed here about use cases. Comes from having a memory going back far enough in the history of the particular issue.
Use case approaches, like CRCs, were an attempt to break or at least reduce a particular undesirable behaviour in development projects. The behaviour to be addressed was constantly using functional decomposition as the primary driving factor in software designs. Yes, folks wanted to capture good requirements, but use cases weren't the only option possible for that. Project success vs failure isn't just a matter of verbose traceable requirements; it is also a consequence of the analytical behaviours of the participants in the project, and use case approaches were trying to encompass that challenge too. Many slices of software engineering were being re-cast at that time in an attempt to support the growing emphasis on object-orientation as the stronger design approach; use cases and CRCs were recasts of the requirements and analysis activities. Existing approaches at that time just weren't cutting it.
A couple of decades or more had clearly established functional decomposition as a weak approach. Dig back through the use case and OOD writings, and you'll find that there is a deliberate cautiousness about when and how to include technical artifacts in the requirements and analysis activities. The desire was to capture processes and interactions forming the meat of the functional requirements with the best signal-to-noise ratio you could before getting sucked down into the mire of non-functional requirements. The typical solution was to start with a leaner version of the use cases that focused on functional (as in "has-value-to-the-actors") issues with a tolerable balance between abstractness and specificity (talking about specificity of process and interaction).
Whether you captured non-functional requirements in a different type of documentation, or followed a refinement process on your original abstract use cases to weave in the non-functionals, the understanding remained that the foremost challenge was to be sure the business needs were understood and organized in the use cases in a way that wouldn't thwart attempts at providing a decent object-oriented decomposition of the system. When people don't "get that", and yes sometimes you are better served by making a point instead of waffling about it, you open your project up to the old risk. The risk is that you just fall back into the old patterns of tackling software projects via functional decomposition, just with a document template entitled "use case".
And you are perfectly welcome to disagree about how to get the most value out of use cases, but I don't see the value added by ignoring history that informs us about what the original point of the approach was. And if you think about it really hard - when was the last time you worked on code that actually had a good object-oriented design? It isn't the default in software development by a long stretch. Functional decomposition is the normal anti-behaviour, as the assorted racks of useless J2EE so-called patterns (and books) clearly attest - functional and layered designs all over again, just coded in Java. When OOD happens it is because the people focused more on using an approach, agile or otherwise, to try and make it happen. I've yet to see a situation where waffling made it happen.
My point was, and will remain - do you think *your* process for using what your organization terms "use cases" is the strongest process it could be, as in you would recommend to others that they try and adopt your way of using use cases?
Our process is absolutely not optimal. I brought it up first in this thread as some cautionary lessons - places you do not want to visit.
I hadn't thought of use cases in contrast to functional decomposition before. When I wrote mainframe CICS in COBOL my requirements / design phase was writing the user manual first. It was awfully close to use cases plus UI designs, so the transition to use cases and separate UI docs was quite natural. I guess the last time I thought of FD in requirements analysis was in batch systems.
If you want to get to absolutes, "the whole point" of all this is delivering software quickly and accurately, not producing artifacts to any particular definition. (My company seems interested right now in how many documents they can add to the process so the sheer volume will knock over a CMM auditor.) The main thing flowing through our development process is information. I'm trying to get our team to focus on communicating effectively, eliminating document sign-off and hand-off, using the best possible artifact when necessary. Our current use cases suck pretty hard by this measure - we're in release 9 of a very large production system and a one-line requirement change is hard to spot in a 10 page doc. I'm very interested in the idea of writing acceptance tests instead of requirements docs. Executable tests written by customers rock.
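The idea of customers writing executable acceptance tests can be sketched very simply: the customer maintains a readable table of scenarios, and a small harness checks the system against it. This is a hypothetical sketch, not any specific tool (FitNesse-style tools work on this principle); the `can_withdraw` rule and the table contents are invented for illustration:

```python
# Hypothetical sketch of a customer-written, table-driven acceptance test.
# The customer edits only the table; the harness and the overdraft rule
# are invented here for illustration.

def can_withdraw(balance, amount, has_permission):
    """The behaviour under test: may this withdrawal proceed?"""
    return amount <= balance or has_permission

# The "requirements document": one row per scenario, readable by a customer.
acceptance_table = [
    # balance, amount, has_permission, expected
    (100, 40, False, True),    # plenty of funds
    (10,  40, False, False),   # over the limit, no permission
    (10,  40, True,  True),    # over the limit, permission granted
]

failures = [row for row in acceptance_table
            if can_withdraw(*row[:3]) != row[3]]
assert not failures, f"failed scenarios: {failures}"
print("all acceptance scenarios pass")
```

Compared with a 10-page use case document, a one-line requirement change here is exactly one changed row in the table, which is what makes this style attractive for spotting deltas between releases.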
I still try to avoid absolutes in forums like this. There are a lot of things that are better for one team and worse for another, but if I knew one thing that was best for all I'd be rich & famous (like Scott )