Janet Gregory

since Jan 25, 2009

Recent posts by Janet Gregory

On agile teams, we encourage testers to be more technical, but so many hiring practices for testers are based on outdated criteria. Do you address that issue at all?
12 years ago
Congratulations to the winners. I hope you enjoy the book.


Thank you for that link. Excellent.


Thanks for the clarification.

Unfortunately I cannot answer the question as I have never done RUP or AUP. I have read about both but I don't think that gives me the necessary experience to answer the question. There may be others on this list that can answer that question for you. I suggest you start on Scott Ambler's website http://www.ambysoft.com/unifiedprocess/agileUP.html where you can read first hand what AUP is about, and can then compare it to RUP (which I guess is what you are practicing now).


I am not sure what you mean by AUP? Can you please elaborate on the acronym and your question?

Jeff Patton has been doing a lot of work in the user experience and agile world. I attended a workshop quite a few years ago when he first proposed how User Centric Design could fit into agile methods. His website has quite a few articles. I suggest you start there.


We also have to remember that we are working with the customer right from the beginning. As testers, we question and try to make sure we understand the customer's needs. We consider impact, and look for hidden assumptions. Programmers ask questions to make sure they can give feedback, and encourage a logical approach to the requirements that takes dependencies into account.

As a story is fleshed out during the planning session at the beginning of an iteration, we can use examples to create acceptance tests that help the team understand exactly what is being asked for. These tests become the basis of what is built and how we develop the rest of the tests for that story.

It sounds complicated, but it really simplifies the process. Expectations are clearly stated so the customer is hardly ever 'surprised' by what is delivered. They may change their mind because of what they see, but it is early in the process and not hard to adapt.
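As a minimal sketch of what "examples become acceptance tests" can look like, here is a hypothetical story with rules and thresholds invented purely for illustration (the function name, amounts, and discount rule are not from the original discussion):

```python
# Hypothetical story: "Orders over $100 get a 10% discount."
# Everything here (names, amounts, the rule itself) is illustrative.

def apply_discount(order_total):
    """Return the order total after discount, per the agreed examples."""
    if order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

# Concrete examples the team agreed on in iteration planning
# become executable acceptance tests:
assert apply_discount(150.00) == 135.00   # over the threshold: 10% off
assert apply_discount(100.00) == 100.00   # boundary: exactly $100, no discount
assert apply_discount(50.00) == 50.00     # under the threshold: unchanged
```

The point is that the concrete examples, not the prose of the story, state the expectation; the boundary example ($100 exactly) is precisely the kind of clarifying question a tester raises in planning.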

I hope that helps answer your question.

Gian Franco wrote:

but what artifacts did the team in your example make more consistent?


Once we had the process fairly consistent, I created a couple of documents - one was a test strategy document that all new testers coming in could read. It explained our agile process and how to approach our testing, from planning to release. Another was a simple (fairly simple anyhow) swim lane diagram showing where artifacts needed to be produced. (Note there were a number of iterations involved to get these right.)

We produced some templates as a guideline for session based testing, after Jon Bach did some consulting and showed the team how to take advantage of this skill. We also made use of checklists.

Each team is different, and I have found that larger organizations need more effort to keep some level of consistency across teams, if that is what is needed.


Hopefully that example is an exception - it sounds a lot like an urban myth. Teams should have self monitoring controls to prevent that kind of misuse of metrics.

Michael is correct that metrics are often misused, and I agree that they need to be measuring something which the team finds useful. Numbers for the sake of numbers are not useful, but watching trends can be rewarding (if they are going the right way), and should be a trigger for change if they are not. Numbers are only a piece of the puzzle and, as Michael points out, if not understood they can cause damage within a team.

As I read his blog, I get the feeling that all control metrics are bad, and I don't necessarily agree with that. There are many reasons to use metrics - some good, some bad. It's all in how you use them.

If there were enough requests, I'm sure the publisher would find someone who had the right skill set. I hope you enjoy the book.


Lance Zant wrote:
In my experience, tests identified by business product owners' tend to be indicative rather than exhaustive. They tend to come up with a sunny day case and stop there. Prodded for error cases, they give me a couple of obvious missing or bad values. A second round of prodding may or may not produce a couple of interaction exceptions (no cash refund for a credit purchase), but it definitely begins to raise the frustration level. ("I just need it to work, dammit!") Unfortunately, when a subtle interaction bug arises, the fact that there was no test for that combination is cold comfort, and the blame game begins. ("Of COURSE, we need to process payments against canceled orders!")

So the question is, how do you assess the adequacy of your business-facing tests, if it's not based on some kind of coverage of the possible input combinations and sequences? If the answer is "heuristically", fair enough. The follow up in that case is whether any of the heuristics are general across projects and domains, and how do you get the business types to really engage them?


You are right that many product owners only see the happy path. That is one of the reasons we advocate professional testers on a team - to help identify all the other cases. How each team determines what is enough is usually based on the skills of the testers. I find many experienced testers use heuristics without knowing that is what they are doing. If you ask them why they decided to do that, they can usually explain and it starts with "Because ....", and ends with "in my experience".

When I do find teams that don't seem to have any kind of process for figuring out "what is good enough", I encourage use of simple tools like truth tables, or decision trees, or maybe even a flow diagram. And I recommend they take testing courses that teach more in-depth methods. Often, though, the simple tools provide 'good enough' coverage.
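A truth table is easy to build in code: enumerate every combination of the inputs so no case is silently skipped. This sketch uses a made-up shipping rule; the function, its inputs, and the rule are all hypothetical and only illustrate the technique:

```python
# Minimal truth-table sketch for choosing test cases.
# The inputs (member, coupon, first_order) and the rule are invented
# for illustration only.
from itertools import product

def free_shipping(member, coupon, first_order):
    # Illustrative rule: members always qualify; everyone else
    # needs a coupon on their first order.
    return member or (coupon and first_order)

# Enumerate every combination of the boolean inputs (the truth table),
# so each row can be reviewed with the product owner.
for member, coupon, first_order in product([True, False], repeat=3):
    print(member, coupon, first_order, "->",
          free_shipping(member, coupon, first_order))
```

Walking the product owner through the eight rows surfaces exactly the interaction cases (coupon but not first order, and so on) that a happy-path conversation tends to miss.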

Hi Roger,
Our book does not attempt to map testing to any one methodology. Instead it is about a general approach.

Personally, I like to develop a process and then document it, rather than the other way around. It is easier to change things when they don't work the first time.

If I had to write a process first, I would keep it very general and involve the whole team in defining it. Something like...
During iteration planning meetings, testers participate by asking clarifying questions, helping the customer to define acceptance tests ... etc. Testers will work closely with the developers to write tests and execute them during the iteration. During the end game, testers will ....

I like pictures and flow diagrams too.... :-)
Hello Mouroughan,

This question has been answered in other posts, but I couldn't find the exact one. There are many agile testing "tricks" that are applicable to a traditional waterfall project (assuming that is what your project is). Agile testing is about a mind set - thinking about how I can work with the team to understand what the customer really wants, or how I can work with the developers to get the best test coverage possible. If you are able to give developers your tests before they start coding, it can save a lot of misunderstandings.

All these things are not limited to agile projects, although the short iterations make it much easier.


Sometimes books like ours can be translated so a request to the publisher would be in order if you wanted to do that. I have a friend who translated technical books into Japanese so there is an industry out there for it.

Hi Gian,

I think your question could be taken one of two ways. Paul has answered the technical side of things. The other is related to people issues and cultural issues. Each project is different, so how one team reports on testing may be completely different from another's. This can cause confusion if members move from one team to another, and even for upper management, because the numbers may not be comparable.

You said you had a small team, so it should be fairly straightforward to get all stakeholders to discuss what is really needed as artifacts... If you are talking about test results, are you using the same tools? If not, why not - there can be valid reasons, but I'd question to find out why. The more you can leverage off each project, the less waste there will be in switching from one project to another.

On one large project I was on, we had a test community that shared ideas between teams. When one team figured out how to make their tests work in a new and simple way, they shared it with the other teams. This didn't guarantee everyone did it the same way, but it definitely made artifacts much more consistent. I was the process consultant on the team and helped the teams to be fairly consistent in how they created their artifacts.

I hope that answered your initial question - I may have gotten off track a bit.