Lisa Crispin

Ranch Hand since Feb 03, 2009

Recent posts by Lisa Crispin

luri ron wrote:I have tried to adopt some techniques from agile development methods such as Scrum, Extreme Programming, and RUP. One thing I found challenging is that the requirements-gathering time described in these methods is too short. In Extreme Programming and Scrum, requirements gathering is done in story cards or use cases within a day or two, but in reality, requirements gathering takes much longer. I just want to see if anyone with agile experience has any feedback on this.



Luri, the reason requirements gathering can go so quickly in Scrum and XP is that you are working on a very small subset of functionality at a time. Each story is a small chunk of functionality, which still delivers business value, and can be completed in a few days. Of course, it's part of some larger feature or theme, so you're not working on it in isolation.

When my team has a big theme or project coming up, which might take us 2 - 4 two-week iterations to complete, we usually have a few meetings with the customers to learn what the theme is about, how the feature will be used, and examples of desired and undesired behavior. We brainstorm about the design, we size the stories, we figure out dependencies, and we choose the most basic path through the functionality and plan accordingly. Before the first iteration, we spend an hour talking about the upcoming stories, so we're ready for iteration planning. By the time we're working on requirements and high-level test cases for the first story, we are already quite familiar with the functionality, so it doesn't take long to write up examples, requirements and high-level test cases and go over them with the customers and the developers.

Does that make sense? You have to limit the scope of each story, and the amount of work taken on in each iteration, so that you have adequate time to spend on each story and can complete all of its testing activities.
-- Lisa

Mourouganandame Arunachalam wrote:Hi,

Since AUP is a subset of RUP, what does it lack compared to RUP?

Mourougan



I'm not really familiar with AUP either, but I think it must have something in common with OpenUP. Agility and Discipline Made Easy: Practices from OpenUP and RUP by Kroll and MacIsaac has a pretty comprehensive explanation of OpenUP and its agile aspects.
-- Lisa

Ilja Preuss wrote:

Lisa Crispin wrote:
IMO, Agile is mainly about values and principles. If you're committed to delivering high-quality software and the best possible business value, and you're always trying to improve the way you work, that's agile in my book. Being "agile" would mean working closely with your customer, and using good practices to produce what the customer needs.



Uh, while I agree that Agile is defined by values and principles, I'd also say that it's a bit more specific than "just" what you indicate above. I think it's very well defined by the Agile Manifesto. William Petri has a very good blog post on this topic: http://agilefocus.com/2009/02/agile-versus-agile/ (well, at least I like it... ;)



Oh, I do like William's post.

I don't think I'm communicating my viewpoint very well. My wish is that someday we won't give it a special name; it will just be the accepted good way to develop software. And of course, there are variations in practices from team to team, but the principles and values are the same.
-- Lisa

Jeff Langr wrote:

Lisa Crispin wrote:... how to go about identifying your tool requirements, researching and evaluating tools.



I'd be interested in that list of criteria. From my standpoint, two of the more important considerations for agile testing would seem to be accessibility and ability for the tests to be reasonably self-documenting. I can think of a number of other things. What do you feel is most important to consider?

Thanks,
Jeff


Here are some of the questions we like to ask the team:
What tools do you already have?
Do you need a tool that will easily integrate into your continuous build process?
Will your hardware support the automation?
Who will use the test tool? Who'll write the tests? Do both customers and programmers need to feel comfortable with the tool?
Do you have distributed team members who need to collaborate?
Who will automate and maintain the tests?
What skills are already on your team? (e.g., if you code in Java, a tool that uses Java or Groovy for scripting might be appropriate).
What's your development environment? Do you need the tool to integrate into a particular IDE?
What type of testing are you going to do? What type of app are you testing?
-- Lisa

Mike Farnham wrote:So with regard to the book "Agile Testing," please name some automated testing tools.

Also, is Agile Testing applicable to software development regardless of language, as long as an automated testing tool is available?


In addition to what Janet replied, we do give examples of automated test tools in the book, but we didn't go into a lot of specific details. Tools change too fast. The tool Tip and I used for all our automation examples in Testing XP (back then there weren't so many of these great open-source tools) doesn't exist any longer.

A few of the tool examples in our book are FitNesse, Watir, Selenium, and Canoo WebTest. We also give several examples of home-brewed tools and explore the pros and cons of home-brewed, open source and vendor tools, and how to go about identifying your tool requirements, researching and evaluating tools.
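
To give a flavor of what these tools look like, here's a minimal sketch of a Fit/FitNesse column fixture in Java. This is the classic Division example from the Fit documentation, not code from our application; the wiki page holds a table of example inputs and expected outputs, and the fixture wires them to the code under test.

```java
// Minimal Fit/FitNesse column fixture sketch (the classic Division example,
// not code from our application). Assumes the "fit" library is on the classpath.
import fit.ColumnFixture;

public class Division extends ColumnFixture {
    public double numerator;   // input column in the wiki table
    public double denominator; // input column in the wiki table

    // Calculated column; the table header for this would read "quotient()"
    public double quotient() {
        return numerator / denominator;
    }
}
```

The nice part is that customers can add rows of examples to the wiki table without ever touching the Java.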

We have a list of good places to find tool ideas in our bibliography. Rick Hower's site is a good resource, www.softwareqatest.com.

Although test automation is a core practice on agile teams, agile development (as Janet said) is independent of tools. The key IMO is adopting a whole-team approach to solving problems such as test automation, and finding ways to deliver good software.

Jeanne Boyarsky wrote:Lisa has a cool photo in her JavaRanch profile. What animals are these?




Those are Ernest and Chester, extremely agile miniature donkeys! They love to work, and are always improving at things like cart-pulling, log-skidding, hauling brush and the like. They play just about every minute they aren't working!

palla sridhar wrote:Hello Lisa and Janet!

Thanks for the offer. But I got a few questions.

Can it be used for legacy systems like mainframes?


To add to the other answers - we have an example in our book from John Voris about testing legacy systems. John explains how he uses a "minutes afterward" test automation approach on an RPG / AS/400 application. It's not test-first, but it does the job of making sure testing is done continually, and that tests help guide coding. If everyone involved in developing and testing software works together, you can usually find good ways to apply agile principles and practices to your application and get a higher-quality product.
-- Lisa

Mike Farnham wrote:Thanks Janet (and Lisa) for your replies and insight.

I did find the article on InformIT.

I wonder if there is an anti-pattern or syndrome. "Too busy to test",
or "Too busy to write tests"?


That does sound like a valid pattern!

Mike, at my last job, the development organization said it wanted to "go agile", but never adopted any practices except for one small (and successful!) project. The programmers wouldn't automate unit tests. We released every two weeks, but that didn't make it agile.

While I tried my best (using patterns from Rising and Manns' Fearless Change) to motivate the organization to change, I focused mainly on making my own QA team successful, and working with the developers and customers as best I could. Within our QA team, we used agile practices such as using good design techniques and refactoring on our automated test scripts, pairing, working in small increments, and collaborating closely with other teams. I got more help from programmers on functional test automation when I started writing test scripts in the same language they used for the app. I went to the dev managers to ask them their greatest areas of pain, and borrowed what I could from Agile to address those. For example, they complained most about not getting useful requirements. I suggested writing customer-facing tests ahead of time in place of requirements documents. They agreed to this practice, and it solved the problem.
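
To illustrate what I mean by customer-facing tests in place of requirements documents, here's a hypothetical sketch (all names invented, JUnit 4) where the concrete examples in the test are the requirement:

```java
// Hypothetical sketch of a customer-facing test that doubles as the requirement,
// with a trivial inline implementation so the example is self-contained.
// RefundPolicyTest and the refund rule are invented for illustration.
import static org.junit.Assert.*;
import org.junit.Test;

public class RefundPolicyTest {

    // Trivial rule: refunds go back the way the payment came in.
    static String refundMethodFor(String paymentMethod) {
        return paymentMethod;
    }

    @Test
    public void creditPurchasesAreRefundedToTheCardNotAsCash() {
        assertEquals("CREDIT", refundMethodFor("CREDIT"));
    }

    @Test
    public void cashPurchasesAreRefundedAsCash() {
        assertEquals("CASH", refundMethodFor("CASH"));
    }
}
```

Written before the code, readable test names and concrete examples like these told the programmers what to build and told us when it was done.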

That's just one example. The agile approach to testing is mainly about applying certain values and principles. You can try to get more involved with other parts of the organization, even when testers are on a separate team. You can also try to work more closely with business experts, learn more about the business so you can help deliver value, and step out of your comfort zone to find more ways to contribute.
-- Lisa

Mourouganandame Arunachalam wrote:Hi,

Is it possible to apply an agile testing process to an ongoing project, or can it only be applied to new projects?

Mourougan


I might be misinterpreting your question, but I think you are asking if agile development works well only in greenfield projects?

It's certainly easier to implement agile development in new projects, but there are many success stories for "legacy" projects switching to agile. There are more cultural and organizational challenges to overcome - it is hard to learn new habits, even if your old habits don't work well for you.

The first two agile teams I worked on did greenfield projects. Everyone came in knowing we would be using XP practices, and everyone was highly motivated to use agile practices and principles. My third team worked on a legacy system, and was never motivated enough to successfully implement agile, or even try very hard to do it. The cultural barriers were just too high.

My current team was in bad shape 5.5 years ago: we couldn't deliver new functionality to production, the waterfall model wasn't working at all, and the code was buggy and poorly tested. From the very first sprint where we implemented Scrum, things started to turn around. We've had our struggles over the years, but through diligent use of retrospectives to identify and address problem areas, and good leadership in the team, we've become highly productive and have delighted our customers.
-- Lisa

Joe Deluca wrote:Lisa and Janet,
I am a programming student interested in agile methodologies. Is there any way to implement agile concepts in single-person projects?
Would the content provided in your book be suitable/learnable for someone such as myself who is new in the agile field?

Cheers,
Joe


Hi Joe,
IMO, Agile is mainly about values and principles. If you're committed to delivering high-quality software and the best possible business value, and you're always trying to improve the way you work, that's agile in my book. Being "agile" would mean working closely with your customer, and using good practices to produce what the customer needs. The smallest agile project I worked on had two programmers, a tester and a customer, and it was a success.

Our book assumes some readers won't be very familiar with agile development, so we give a lot of context and explanations, but oriented towards testing in an agile project. You shouldn't have any problems following it even if you're brand new to agile. If you want a more general introduction to agile development, there are a lot of good books. The Art of Agile Development by James Shore is a good intro for newbies and also contains a lot of good information about testing. Bob Martin's Agile Software Development is also good, although it's several years old now, I don't think it's dated. You could also find some good information on the Agile Alliance site, www.agilealliance.org.
-- Lisa

Lance Zant wrote:

Lisa Crispin wrote:
Because we drive coding with our business-facing tests, coverage is, in a way, built in. The tests are there first, and the code has to be written to make them pass....
-- Lisa



The question I was trying to get to is "coverage of what?" in the case of business-facing tests. Writing them first is great, but that seems orthogonal to the question of how many are enough (or better, which ones are needed). The goal is to cover the requirements. Using tests to document requirements might turn the question back to the customer/product owner. If there's no test where x=1 && y=-1, you can argue that there's no "requirement" to handle that condition. If you can make that work, I'd love to know how you do so.

In my experience, tests identified by business product owners tend to be indicative rather than exhaustive. They tend to come up with a sunny day case and stop there. Prodded for error cases, they give me a couple of obvious missing or bad values. A second round of prodding may or may not produce a couple of interaction exceptions (no cash refund for a credit purchase), but it definitely begins to raise the frustration level. ("I just need it to work, dammit!") Unfortunately, when a subtle interaction bug arises, the fact that there was no test for that combination is cold comfort, and the blame game begins. ("Of COURSE, we need to process payments against canceled orders!")

So the question is, how do you assess the adequacy of your business-facing tests, if it's not based on some kind of coverage of the possible input combinations and sequences? If the answer is "heuristically", fair enough. The follow up in that case is whether any of the heuristics are general across projects and domains, and how do you get the business types to really engage them?

thanks again,
Lance



Our focus as a team has been on learning the business quite well ourselves, as well as working closely with the customers to identify not only tests for the story at hand, but potential ripple effects on other parts of the system. Our product owner has a "story checklist" which helps him research whether a story impacts things like reports, plan documents (we manage 401(k) plans), external partners, vendors, legal concerns, government regulations, other parts of the system, training and the like. He writes high-level test cases, and because he's been doing this so long with us, he does think of negative and edge cases. We go over these checklists, then we add our own test cases, and go over those with the PO. We're communicating about the stories all the time.

I confess that I have never used the techniques you described on either non-agile or agile projects. No doubt they'd be helpful, but I haven't had a problem that those would seem to address. If we had a lot of bugs getting out to production, it would be worthwhile. But thanks to diligent TDD and ATDD, our code emerges fairly defect-free.
-- Lisa

Ilja Preuss wrote:

Lisa Crispin wrote:
Here's an example: Our application manages retirement plans. When people contribute to their retirement account, withdraw money, or change their investments, we have to do trades through a trading partner (they actually do the buys and sells of the mutual funds). We had a story where for certain mutual funds, we had to mark trades as "new money" if they were the result of new contributions to the account. In other words, if someone merely switched from fund AAAAX to BBBBX, that wasn't new money, but if they sent in money to buy new positions in fund BBBBX, that's "new money".

This wasn't anything that our plan administrators could see - it was a new field in the trade file sent to the trading partner, based on a value in a new column in the database. So there was nothing to demonstrate to our internal users.



Are you saying that none of your internal users cared about fulfilling this requirement? Who came up with it? How did you know you had to do it? Just curious...



The internal users wrote the story based on the requirements imposed by the mutual fund companies. There were other, related stories - for example, some mutual funds send "concessions" which are compensation for us doing some of their administration. Some pay concessions only for "new" investments. That functionality is also invisible to the users, except for the resulting balances in accounts.

We had a meeting with our internal customers where they explained what was required. As I recall, this involved some examples on the whiteboard. Then, a senior programmer who understood the back-end processing really well (at the time, he understood it best; now we all understand it but this was a few years back) walked us through the current processing, and what would need to be changed to accommodate the new requirements.

We tested this with FitNesse tests as well as through the application in real-time. I don't recall now, but it's possible I went over the FitNesse tests with the internal customers to verify we were doing the right tests. Sometimes they are comfortable that we understand the requirements and don't feel a need to see the actual executable tests.
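
If you're curious what such executable tests might look like, here's a rough, hypothetical sketch of the "new money" rule as plain JUnit tests. Our real tests were FitNesse tables expressing the same kinds of examples, and the rule shown here is simplified and invented for illustration.

```java
// Hypothetical, simplified sketch of the "new money" rule as executable examples.
// Our real tests were FitNesse tables; all names here are invented.
import static org.junit.Assert.*;
import org.junit.Test;

public class NewMoneyRuleTest {

    // A trade is flagged "new money" only when the fund requires the flag and
    // the buy was funded by a fresh contribution rather than a fund exchange.
    static boolean isNewMoney(boolean fundRequiresFlag, boolean fundedByContribution) {
        return fundRequiresFlag && fundedByContribution;
    }

    @Test
    public void exchangingFromFundAAAAXToBBBBXIsNotNewMoney() {
        assertFalse(isNewMoney(true, false)); // money merely moved between funds
    }

    @Test
    public void contributionBuyingIntoAFlaggedFundIsNewMoney() {
        assertTrue(isNewMoney(true, true));
    }

    @Test
    public void fundsWithoutTheRuleNeverFlagTrades() {
        assertFalse(isNewMoney(false, true));
    }
}
```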

In these stories, there actually was a visible component - reports that showed how the different types of funds were processed. We actually failed to involve the customer enough with these while we were developing them - we showed them the reports but not enough different scenarios, and we ended up having to fix them later. That happens when we get complacent and we have to renew our efforts to collaborate more closely with the customers.

Does that help explain what we did?
-- Lisa

Ilja Preuss wrote:

Lisa Crispin wrote:
My team could probably get along ok without testers. But it would mean that the programmers spend a lot more time eliciting examples and requirements from the customers, and thinking about them from multiple viewpoints.



Mhh, somehow that sounds like a good thing to me... ;)



I have to give our programmers credit, they work hard to understand the business. Today we had a long meeting with our product owner to work out an algorithm to determine Internal Rate of Return for each retirement account. The PO had an incredibly complex calculation, taking into account things like interest on loan payments when participants take a loan from their retirement account, and interest from dividends paid to the mutual funds in which they are invested.

Did our programmers simply accept the PO's solution? No, they did not. We all questioned it and asked for specific examples to be worked on the whiteboard. The programmers proposed many alternatives - they understand the business and are able to do that. We questioned what the real purpose of the IRR was, and we felt that participants wanted to know how well they did at choosing the right mutual funds, so that shouldn't be altered if they took out a loan.

In the end, the programmers prevailed with a much simpler solution. As a tester, I didn't have a lot to do with that, other than having prompted the PO to work through some concrete examples to start with, and asking other questions about the true goal of the story.
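
For illustration only, here is a rough sketch (not our actual algorithm, which I can't reproduce here) of how an IRR for a series of periodic cash flows might be computed, by bracketing the rate at which the net present value crosses zero:

```java
// Rough IRR sketch for illustration only; not the team's actual algorithm.
// Finds the rate where the net present value (NPV) of periodic cash flows is
// zero, by bisection. Assumes NPV changes sign exactly once inside the bracket.
public class Irr {

    // cashFlows[0] occurs now; cashFlows[i] occurs after i periods.
    static double npv(double[] cashFlows, double rate) {
        double total = 0;
        for (int i = 0; i < cashFlows.length; i++) {
            total += cashFlows[i] / Math.pow(1 + rate, i);
        }
        return total;
    }

    static double irr(double[] cashFlows) {
        double lo = -0.99, hi = 10.0; // search bracket for the rate
        for (int i = 0; i < 200; i++) {
            double mid = (lo + hi) / 2;
            // NPV falls as the rate rises, so a positive NPV means the rate is too low.
            if (npv(cashFlows, mid) > 0) lo = mid; else hi = mid;
        }
        return (lo + hi) / 2;
    }

    public static void main(String[] args) {
        // Invest 1000 now, get 600 back after each of two periods: IRR ~ 13%.
        System.out.println(irr(new double[] {-1000, 600, 600}));
    }
}
```

As I read our discussion, the simpler solution was mostly about which cash flows to include (leaving loan mechanics out of the fund-picking question), not about the arithmetic itself.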
-- Lisa

Mike Farnham wrote:

The application I work on has a complex data structure, and for our GUI smoke tests, it's not feasible for the tests to set up all their own data, so we also use a "canonical data" approach where the build process first refreshes the test schema with "seed" data before running the suite of tests. This is a pain because the tests have to be run in a particular order.



So, is this "canonical data" stored in a database schema entirely outside the path to production?

We have dev, test, qa, and prod environments.
The code migrates from dev to test to qa and finally to production.
Each environment has its own schema.

I would be interested to know if the "canonical data" you are talking about resides in a separate schema.

This might be a big help to our situation,
at least for the data we actually maintain.

Our biggest challenge however is the data we get from other schemas
that we do not maintain. Plus, the fact that our data is cyclical in nature.

Do you have any suggestions for testing data that has a cyclical nature?
(I work for a University and our applications primarily deal with data for the current semester.)



We also have dev, test, staging and prod environments. In dev, test and staging, we have a recent copy of the production data so that we can do realistic exploratory testing.

We also have several "canonical" schemas that have a tiny subset of production-like data. Our different suites of regression tests each have their own schema, which is refreshed before the tests run. The unit tests also have their own schema.

We have two "seed" schemas - one for the unit tests, and one for the business-facing regression tests. The other test schemas get refreshed from these two schemas. The confusing part is that each test schema may have data that just lives there and doesn't get refreshed (this drives our DBA crazy). Lookup tables, for example, don't get refreshed, they just stick around.

I started out my career at a university so I wish I could remember some examples from that time! But it was too long ago. My current team's business is somewhat cyclical in that we have date-dependent activities in the application. For example, right now, our actual customers are having to run tests that prove their compliance with IRS rules governing retirement plans. Our canonical test schemas are frozen in time, so at the end of each year, we have to decide what data to "roll forward" in time or what to change in our regression tests so that our regression tests will still pass. Our production-like schemas can be refreshed whenever needed so that they reflect what's going on right now in production.
-- Lisa

Ilja Preuss wrote:

Lisa Crispin wrote:In fact, when we don't do any "visual" stories in a sprint, and the stories delivered don't change the way the business people work, we don't bother with a sprint review.



Can you give an example on such a story? I'm a bit puzzled - if it doesn't affect the users of the system, why would you want to implement it at all? Thanks!



Here's an example: Our application manages retirement plans. When people contribute to their retirement account, withdraw money, or change their investments, we have to do trades through a trading partner (they actually do the buys and sells of the mutual funds). We had a story where for certain mutual funds, we had to mark trades as "new money" if they were the result of new contributions to the account. In other words, if someone merely switched from fund AAAAX to BBBBX, that wasn't new money, but if they sent in money to buy new positions in fund BBBBX, that's "new money".

This wasn't anything that our plan administrators could see - it was a new field in the trade file sent to the trading partner, based on a value in a new column in the database. So there was nothing to demonstrate to our internal users.

However, not all mutual funds had this rule, so we needed a UI that allows our administrators to mark a fund as requiring this "new vs. old" flag in the trades. That UI was something we could demo to the internal users.

Does that make sense?
-- Lisa