Originally posted by Jane Somerfield: Where should the data modelling phase be in the RUP or XP?
There should be no *phase*, it should be done throughout the whole project. In RUP, the main data modelling work will probably be done in the construction phase. In XP, you do data modelling when you feel the need to do it (which is probably a little bit in every iteration). See http://www.agiledata.org/essays/rup.html
What is the difference between physical and logical model in data modelling?
I don't know much about data modeling, but to me it sounds as if the logical model concentrates on the logical (!) relationships between the data, whereas the physical model shows how those relationships get implemented (and perhaps even where the data resides in the system?). Someone else will probably correct me...
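To make that distinction concrete, here is a toy sketch. The Customer/Order example and all names in it are mine, not from the thread: the logical model is just entities and their relationships, while the physical model is the tables, columns, foreign keys and indexes that implement them.

```java
import java.util.ArrayList;
import java.util.List;

// Logical model: entities and their relationships, no storage details.
class Customer {
    String name;
    List<Order> orders = new ArrayList<>(); // one customer has many orders
}

class Order {
    Customer customer; // each order belongs to one customer
    double total;
}

public class Models {
    // Physical model: how that same relationship is actually stored -
    // concrete tables, column types, a foreign key, and an index.
    static final String PHYSICAL_DDL =
        "CREATE TABLE customer (id INTEGER PRIMARY KEY, name VARCHAR(100));\n" +
        "CREATE TABLE orders (id INTEGER PRIMARY KEY,\n" +
        "                     customer_id INTEGER REFERENCES customer(id),\n" +
        "                     total DECIMAL(10,2));\n" +
        "CREATE INDEX idx_orders_customer ON orders(customer_id);";

    public static void main(String[] args) {
        Customer c = new Customer();
        c.name = "Alice";
        Order o = new Order();
        o.customer = c;
        o.total = 42.0;
        c.orders.add(o);
        // Logical view: an object reference; physical view: a FK column.
        System.out.println(c.orders.size());                      // 1
        System.out.println(PHYSICAL_DDL.contains("customer_id")); // true
    }
}
```

The same one-to-many relationship appears twice: once as an association in the logical model, once as a `customer_id` column plus an index in the physical one.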
Which model should be used?
You should always use the models that are likely to help you in your current situation. This can only be decided individually. As the physical and logical data models sound quite complementary to me, you are likely to want to do both (at least to some extent - perhaps one of them is so trivial that you can do it in your head). See "Multiple Models" at http://www.agilemodeling.com/principles.htm
The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice. Day by day, what you do is who you become. Your integrity is your destiny - it is the light that guides your way. - Heraclitus
That Agile Modeling site has a page on database refactoring - sorry, don't have the link handy. I find this a very scary area, and would hope that data models are significantly more stable than the code. I'd put a little more up-front work into data than most agile methods seem to recommend. As Kent Beck says, methods are developed out of fear. I'm there on the data model.
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. - John Ciardi
Originally posted by Stan James: I find this a very scary area, and would hope that data models are significantly more stable than the code.
Would you like to elaborate on why you are scared by this area?
I'd put a little more up-front work into data than most agile methods seem to recommend.
Another strategy is to use something very simple for persistence at first (like flat files, XML or serialization) and to not decide about a (relational) data model before the object model has somewhat stabilized.
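A minimal sketch of that strategy, with hypothetical names of my own (TaskRepository, FlatFileTaskRepository): callers depend on a tiny repository interface, and the first implementation is just a flat file, one record per line. A relational implementation can be swapped in later without touching the callers.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// The abstraction callers program against; storage is a detail behind it.
interface TaskRepository {
    void save(List<String> tasks) throws IOException;
    List<String> load() throws IOException;
}

// Simplest thing that can possibly work: a flat file, one task per line.
class FlatFileTaskRepository implements TaskRepository {
    private final Path file;
    FlatFileTaskRepository(Path file) { this.file = file; }

    public void save(List<String> tasks) throws IOException {
        Files.write(file, tasks);
    }

    public List<String> load() throws IOException {
        return Files.exists(file) ? Files.readAllLines(file) : List.of();
    }
}

public class FlatFileDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("tasks", ".txt");
        TaskRepository repo = new FlatFileTaskRepository(tmp);
        repo.save(List.of("write tests", "refactor"));
        System.out.println(repo.load()); // [write tests, refactor]
        Files.delete(tmp);
    }
}
```

If the object model later stabilizes and a relational store is warranted, only a new `TaskRepository` implementation is needed - which is exactly what makes deferring the data model decision cheap.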
With data persistence well enough abstracted, changes to the underlying storage shouldn't damage the rest of the code too much. But the data itself is significantly harder to change than code. You may have to write special one-time programs to convert data from an old schema to a new one, create new database instances, and so on. Ambler has written some tips on keeping data agile, and I think Martin Fowler has a paper on the subject. I'll trust them that data can be made more agile, but it's not something I know how to do, so it makes me nervous.

Re starting with flat files: I like it! In one of Kent Beck's illustrations of Do The Simplest Thing That Can Possibly Work, he told about putting data in flat files "just for now" at C3 to get coding moving. It turned out the data was read-only and very stable, and they NEVER had a good reason to move it to the database. That was a major revelation to me. We tend to overengineer if given the chance.

In Agile Software Development, one of the example applications is built from the top down. Because the unit tests pump test data into the classes, he could build the whole system, right down to the persistence layer, with no database. That deferred data design to the very end. I've been able to divorce data from objects that neatly when we got data via messaging from legacy systems, but never thought to try it with a database.
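The "special one-time program" mentioned above can be sketched as a small conversion step. This is an illustration of my own, with a hypothetical CSV-like format: the old schema stores a single name column, the new schema splits it into first and last name.

```java
import java.util.List;
import java.util.stream.Collectors;

// One-time schema migration sketch (hypothetical format, not from the thread):
// old row "id,name"  ->  new row "id,first,last".
public class MigrateNames {
    static List<String> migrate(List<String> oldRows) {
        return oldRows.stream().map(row -> {
            String[] fields = row.split(",", 2);          // [id, full name]
            String[] name = fields[1].trim().split("\\s+", 2);
            String first = name[0];
            String last = name.length > 1 ? name[1] : ""; // tolerate single names
            return fields[0] + "," + first + "," + last;
        }).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> old = List.of("1,Ada Lovelace", "2,Plato");
        System.out.println(migrate(old)); // [1,Ada,Lovelace, 2,Plato,]
    }
}
```

The nervous-making part is exactly what this sketch glosses over: real migrations must also handle dirty data, run once and only once, and be coordinated with every application reading the old schema.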
Originally posted by Stan James: But the data itself is significantly harder to change than code. You may have to write special one-time programs to convert data from an old schema to a new one, create new database instances, and so on.
I think we are missing some tools here - refactoring browsers for relational databases. Theoretically it seems to me as if this could (should?) be mostly automated...
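A sketch of what such a tool might automate for one common database refactoring, Rename Column. This is my own illustration (table and column names are hypothetical): the tool would emit the DDL steps, including the transition period in which old and new columns coexist so dependent applications can migrate gradually.

```java
// Sketch of one step a "refactoring browser for relational databases"
// could automate: generating the migration script for Rename Column.
// All names here are hypothetical examples.
public class RenameColumn {
    static String renameColumnSql(String table, String oldCol,
                                  String newCol, String type) {
        return String.join("\n",
            "-- 1. add the new column alongside the old one",
            "ALTER TABLE " + table + " ADD COLUMN " + newCol + " " + type + ";",
            "-- 2. backfill it from the old column",
            "UPDATE " + table + " SET " + newCol + " = " + oldCol + ";",
            "-- 3. transition period: keep both columns in sync",
            "--    (e.g. via a trigger) while callers are migrated",
            "-- 4. once no application reads the old column, drop it",
            "ALTER TABLE " + table + " DROP COLUMN " + oldCol + ";");
    }

    public static void main(String[] args) {
        System.out.println(renameColumnSql("customer", "cust_nm",
                                           "name", "VARCHAR(100)"));
    }
}
```

Generating the DDL is the easy, automatable part; the reason it isn't fully automated in practice is step 3 - knowing when every dependent application has stopped using the old column.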