
Tracking Process

Siegfried Heintze
Ranch Hand

Joined: Aug 11, 2000
Posts: 381
What are folks doing out there to connect the following together:
(1) Fragment in functional spec or defect tracker item
(2) Fragment of java source code file stored in CVS
(3) Programmer implementing the change
(4) Tester
(5) Test
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
2->5 is tracked automatically - if you change the source code, the accompanying test fails.
If I were doing full XP, (1) and (5) would be identical - the functional spec is provided in the form of automated test cases. You could weave the name of a bug report or feature request into the test name, if you liked.
2->3 is tracked by CVS itself.
As for the other things, I am not sure why I should care about them...
[ September 30, 2002: Message edited by: Ilja Preuss ]
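A minimal sketch of the "weave the bug report name into the test name" idea, using JUnit 3.x; the class name, method name, and ticket number BUG-1042 below are invented for illustration.

import junit.framework.TestCase;

// Hypothetical example: the defect tracker id (BUG-1042) is woven into the
// test class and method names, so a failing test points straight back to
// the tracker item or spec fragment it came from.
public class Bug1042DiscountTest extends TestCase {

    // Trivial stand-in for the production code under test.
    static double discountedTotal(double total, boolean repeatCustomer) {
        return repeatCustomer ? total * 0.9 : total;
    }

    public void testBug1042RepeatCustomerGetsTenPercentDiscount() {
        assertEquals(90.0, discountedTotal(100.0, true), 0.001);
    }
}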

The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice. Day by day, what you do is who you become. Your integrity is your destiny - it is the light that guides your way. - Heraclitus
Siegfried Heintze
Ranch Hand

Joined: Aug 11, 2000
Posts: 381
>>What are folks doing out there to connect the following together:
>>(1) Fragment in functional spec or defect tracker item
>>(2) Fragment of java source code file stored in CVS
>>(3) Programmer implementing the change
>>(4) Tester
>>(5) Test
>>
>
>2->5 is tracked automatically - if you change the source code, the accompanying test fails.
If the test fails, how do I know who the tester and implementor are?
>If I were doing full XP, (1) and (5) would be identical
Yes but 1 and 5 should be performed by different individuals -- correct? Which individuals? How is this recorded?
>- the functional spec is provided in the form of automated test cases.
Hmmm... interesting. I wonder if anyone has used Donald Knuth's literate programming techniques here.
I understand there are tools out there to regression test GUIs. How does one automate the functional test for a GUI?
>You could weave the name of a bug report or feature request into the test name, if you liked.
>
>2->3 is tracked by CVS itself.
>
What if six programmers are each implementing a different feature and they serially check a source code file out and in six times? When it comes time to test, one computes the delta (perhaps by using emacs diff) between the previous release and the current release. How do you identify which programmer added which code in the delta document?
>The other things I am not sure why I should care about...
>
>
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Let me first make clear that I am talking about a small team - say, up to a dozen developers.
Originally posted by Siegfried Heintze:
>2->5 is tracked automatically - if you change the source code, the accompanying test fails.
If the test fails, how do I know who the tester and implementor are?

I could ask the team or take a look into CVS. Why do I care?

>If I were doing full XP, (1) and (5) would be identical
Yes but 1 and 5 should be performed by different individuals -- correct? Which individuals?

I am not sure I understand the question, as 1 and 5 are artifacts, not activities.
If we are talking about the creation of these artifacts, I don't think that should be performed by different individuals, but by the same team.

How is this recorded?

What do you need this information for?

>- the functional spec is provided in the form of automated test cases.
Hmmm... interesting. I wonder if anyone has used Donald Knuth's literate programming techniques here.

Well, not me. How is that connected?

I understand there are tools out there to regression test GUIs. How does one automate the functional test for a GUI?

One way is to make the GUI so thin that you can inject the functional tests directly beneath it. See http://www.xprogramming.com/xpmag/acsFirstAcceptanceTest.htm for an example.
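A rough sketch of that approach with JUnit 3.x (all names below are invented for illustration): the behaviour lives in a plain object with no Swing dependency, so the functional test can drive it directly, beneath the GUI.

import junit.framework.TestCase;

// The "thin GUI" idea: all behaviour lives in a plain object, so the
// functional tests can be injected directly beneath the user interface.
class LoanCalculator {
    double monthlyPayment(double principal, double annualRate, int months) {
        double r = annualRate / 12.0;
        return principal * r / (1.0 - Math.pow(1.0 + r, -months));
    }
}

// The Swing frame (not shown) would do nothing but read its text fields,
// call monthlyPayment(), and display the result.
public class LoanCalculatorFunctionalTest extends TestCase {
    public void testThirtyYearLoanAtSixPercent() {
        LoanCalculator calc = new LoanCalculator();
        // 100,000 borrowed at 6% over 360 months is roughly 599.55 a month.
        assertEquals(599.55, calc.monthlyPayment(100000.0, 0.06, 360), 0.01);
    }
}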

>2->3 is tracked by CVS itself.
>
What if six programmers are each implementing a different feature and they serially check a source code file out and in six times? When it comes time to test,

You seem to be thinking of "big bang" testing here. What if you were working in two-week iterations, with all functional tests for the iteration available at the end, if not much earlier (some very extreme teams even have most of the functional tests available at the *start* of an iteration)?
What if you ran all the previous functional tests at least daily? What if every developer ran the full suite of unit tests several times a day - at least every time he integrates his changes?

[...] one computes the delta (perhaps by using emacs diff) between the previous release and the current release. How do you identify which programmer added which code in the delta document?

Obviously you would need the diffs between all the individual checkins. And you should probably use the CVS diff. But, again, why do you care? I very rarely see the need to know which line was written by which developer...
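To make the "run the full suite several times a day" suggestion above concrete: with JUnit 3.x a team would typically keep a single suite entry point that every developer (and the nightly build) runs before integrating. A minimal sketch, reusing the placeholder test classes from the earlier examples:

import junit.framework.Test;
import junit.framework.TestSuite;

// Single entry point for the complete test suite, so running everything
// before a checkin is one command rather than a manual procedure.
public class AllTests {
    public static Test suite() {
        TestSuite suite = new TestSuite("All unit and functional tests");
        // Placeholder test classes; a real suite would list (or discover)
        // every test class in the project.
        suite.addTestSuite(Bug1042DiscountTest.class);
        suite.addTestSuite(LoanCalculatorFunctionalTest.class);
        return suite;
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}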
Frank Carver
Sheriff

Joined: Jan 07, 1999
Posts: 6920
I have encountered two typical reasons for wanting to know who made a change. The first is because he/she "owns" it, and no one else knows about that part of the code; the second is so someone can be blamed for the fault.
XP tries to attack the causes rather than the symptoms, and has practices to avoid both the above problems.
The practice of "collective code ownership" is aimed at making sure there is no part of the code "owned" by a single person - anyone in the team is empowered to fix or improve any part of the code. This has plenty of other benefits, too. It reduces one of the main sources of programming stress, and greatly improves the truck number of the project.
XP also has the intriguing idea of "Chet". "Chet" is someone to blame when anything goes wrong. The team immediately knows that it is all Chet's fault as soon as any problem is discovered, so nobody needs to waste time looking for the culprit and can get straight on to the real work of solving the problem. It helps if "Chet" is not a real person on the team (unless he or she is very robust and self-confident!) - some teams use a teddy bear or other toy, or maybe just make up a name or use the original "Chet".


Read about me at frankcarver.me ~ Raspberry Alpha Omega ~ Frank's Punchbarrel Blog
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Frank Carver:
It helps if "Chet" is not a real person on the team (unless he or she is very robust and self-confident!)

In fact, the original "Chet" is Chet Hendrickson, who was one of the developers on the C3 project. IIRC, one day in a meeting there was a lot of blaming and finger pointing going on, until Chet finally said something along the lines of "Hey, it's all *my* fault. Now, can we please discuss *solving* the problem?". Afterwards the discussion went along much more constructively. (There is a small chapter in "XP Installed" about this story.)
Siegfried Heintze
Ranch Hand

Joined: Aug 11, 2000
Posts: 381
I posted this thread on behalf of a client with a small team of six programmers. They are pretty informal and have one month release cycles.
I believe one of my books, perhaps Kent Beck's, suggests that programmers should not test their own code.
Presently my client manually prints out the diff between the previous release and the current release, and team members claim ownership of fragments of code and sign up for testing fragments they have not written.
This is a manual procedure and they are worried they might miss some fragments of code and not test them.
Later, if a test succeeds in breaking the release, does it not make sense to identify the tester and the implementor as prime candidates for repairing the problem? After all, they are the most familiar with it!
Does no one attempt to identify the programmer and the tester when something breaks?
Of course, they are presently successful in doing this manually. Why is it unreasonable (or contrary to XP) to keep a record of who is doing what? Yes, I agree that this information could easily be used for destructive finger pointing.
But could not this information be used constructively to most efficiently fix the problem too?
My client does not claim to have a finger pointing problem!
In small teams this (identification of the author and tester) is happening manually anyway. Folks go to the author of the code for help! Why not record who the author of the code is?
Yes, I know CVS records the author but it is tedious to retrieve the author of a fragment when the same source code file has been edited by six other programmers. There must be a better way!
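One semi-automated possibility, offered only as a sketch: cvs annotate already prints the revision and author next to every line of a file, so a small helper could summarize its output instead of someone digging through it by hand. The output layout assumed in the comment below is the usual cvs annotate format, but it may need adjusting for a particular CVS version.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

// Sketch: run "cvs annotate <file>" and count how many lines each author
// last touched. Assumes annotate lines look roughly like
//   "1.4          (jsmith   25-Sep-02): some code"
public class AuthorSummary {
    public static void main(String[] args) throws Exception {
        Process p = Runtime.getRuntime().exec(
                new String[] { "cvs", "annotate", args[0] });
        BufferedReader in = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        Map counts = new TreeMap();
        String line;
        while ((line = in.readLine()) != null) {
            int open = line.indexOf('(');
            if (open < 0) continue; // header lines carry no annotation
            String author = line.substring(open + 1).trim().split("\\s+")[0];
            Integer old = (Integer) counts.get(author);
            counts.put(author, new Integer(old == null ? 1 : old.intValue() + 1));
        }
        p.waitFor();
        for (Iterator it = counts.entrySet().iterator(); it.hasNext();) {
            Map.Entry e = (Map.Entry) it.next();
            System.out.println(e.getKey() + ": " + e.getValue() + " lines");
        }
    }
}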
Ilja Preuss
author
Sheriff

Joined: Jul 11, 2001
Posts: 14112
Originally posted by Siegfried Heintze:
I posted this thread on behalf of a client with a small team of six programmers. They are pretty informal and have one month release cycles.

Sounds good!

I believe one of my books, perhaps Kent Beck's, suggests that programmers should not test their own code.

The reason for that is mainly that you are biased to test the things for which your implementation obviously works. This doesn't apply to test-first development, as you don't have the implementation when you are writing the test.
Other techniques that help in this regard are Pair Programming, Collective Code Ownership (there is more than one pair of eyes looking at the tests) and Customer Tests (as another safety net).
And I know for sure that Kent Beck is a strong proponent of test driven development...

Of course, they are presently successful in doing this manually. Why is it unreasonable (or contrary to XP) to keep a record of who is doing what?

It's only contrary to XP if it isn't the simplest and most effective way of doing it.

Yes, I know CVS records the author but it is tedious to retrieve the author of a fragment when the same source code file has been edited by six other programmers. There must be a better way!

Well, at least three things come to mind that I would try before searching for a better way to record who worked on which part:
- asking the programmers - they should know, shouldn't they?
- testing more often - after all, the earlier you find a bug, the easier it is to fix it
- switching pair partners often, so that you have more developers who could work effectively on a specific part of the system
What do you think?
Frank Carver
Sheriff

Joined: Jan 07, 1999
Posts: 6920
One of the things which keeps occurring to me is that you seem very focused on the notion of this "tester" as being a person. I am not sure if you are thinking of this as being the person who "performs" a test, or the person who "writes" a test.
Can you clarify this for us?
If this "testor" is the person who performs a test, then it implies that at least some of your tests are performed manually. This may actually be the biggest factor which led to the original question.
Manual testing is expensive and error-prone, and so is often left for far too long. If the full test suite is automated, quick, and anyone can run it at any time, then checking in any code which "breaks the release" should be really unusual. The next developer to run the tests would also spot the problem straight away, so no such faults should last more than an hour or two at most before being noticed. If only a short time has elapsed since the offending change has been made, it should still be fresh in the mind of everyone involved.
If the "testor" is the person who wrote the test, then it often won't make particular sense to ask them about it. In a well-unit tested system, many of the tests will have been written months or even years ago, and used as a "scaffolding" for refactoring and future development. If one of these old tests suddenly fails after being run successfuly thousands of times before, then it is very likely to be the new changes which are at fault.
Lots of these problems just go away if you adopt test-driven development: Analysis generates a requirement which is written as an automated test; code is then implemented until it passes the test and refactored until the whole application is clean and stable again. After every small change the full unit test suite is run again to make sure nothing inadvertently broke an old test. Whenever the tests pass, any changes are checked into the version control system.
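A tiny illustration of that rhythm with JUnit 3.x (the requirement and all names are invented): the test is written first and fails, then just enough code is added to make it pass, and the whole suite is run again before checking in.

import junit.framework.TestCase;

// Step 1: the requirement "an empty cart costs nothing, items sum up"
// is captured as a test *before* any cart code exists, so it fails first.
public class ShoppingCartTest extends TestCase {
    public void testEmptyCartTotalsToZero() {
        assertEquals(0.0, new ShoppingCart().total(), 0.001);
    }

    public void testTotalIsSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(2.50);
        cart.add(1.25);
        assertEquals(3.75, cart.total(), 0.001);
    }
}

// Step 2: the simplest implementation that makes both tests pass.
// Step 3 would be refactoring, re-running the full suite, and checking in.
class ShoppingCart {
    private double total = 0.0;
    void add(double price) { total += price; }
    double total() { return total; }
}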
 