I need some ideas on how to manage schema changes to your tables in an agile way. Currently we micro-manage each schema change request like a CVS update: we create a patch script that's sent to the program manager for approval, and once a week it's checked into CVS (and thus made available to other developers). "Once a week" means we have a specific day (e.g. Tuesday) on which all approved patch scripts are merged into one script and checked into CVS. While waiting for that to happen, we have to wait, in the worst case, a full week to check in changes to our model objects, while other developers in remote parts of the world wait on us. Can anyone improve on this process?
In theory, yes. In practice, that depends on how attached the powers that be are to the existing process. Managing schema changes (scripts) in version control is a good thing, and a one-week integration cycle as such is not too bad either. What I can't understand, however, is why individual changes can't be merged separately from each other. Do the developers each have a private database to develop against? If they don't, would it be possible to get them one? That should finally allow integrating schema (and code) changes immediately, once the developer has verified them locally in his own workspace. Also, who is this program manager guy? Why is *he* approving schema changes? If he's the best DBA you've got and his schedule has enough available slots, then fine, he may well be the right one to review the changes. If not, that's surely a process smell and should be gotten rid of.
We are just moving from an environment with one development application server and database for the team to one per workstation. If I check in Java changes, no problem, everybody refreshes from the repo and we're all in sync. If I check in Java plus schema changes, everybody has to update their database before the code they get from the repo will work. I'm interested in real-life solutions to this, too. I like the idea that I check in a script that modifies the current baseline database along with the code. Other techniques?
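The "check in a script that modifies the current baseline database" idea can be sketched in code. This is a minimal, hypothetical sketch (all names are assumptions, not from any post above): each patch script checked into the repo gets a sequence number, the database records the last number applied, and "update your database" simply means running the scripts you are missing, in order.

```java
import java.util.*;

// Minimal sketch of a baseline-plus-patches approach (names assumed):
// every patch script in version control is numbered, and the database
// carries a version marker recording the last script applied, so a
// developer only ever runs the scripts they are missing, in order.
public class MigrationPlanner {

    // Given the version stamped in a workstation database and the numbered
    // patch scripts available in version control, return the scripts that
    // still need to be applied, in ascending order.
    public static List<String> pendingScripts(int currentVersion,
                                              Map<Integer, String> availableScripts) {
        List<Integer> versions = new ArrayList<>(availableScripts.keySet());
        Collections.sort(versions);
        List<String> pending = new ArrayList<>();
        for (int v : versions) {
            if (v > currentVersion) {
                pending.add(availableScripts.get(v));
            }
        }
        return pending;
    }

    public static void main(String[] args) {
        Map<Integer, String> scripts = new HashMap<>();
        scripts.put(1, "001_create_customer.sql");
        scripts.put(2, "002_add_customer_email.sql");
        scripts.put(3, "003_split_name_column.sql");
        // A workstation database stamped at version 1 is missing two patches:
        System.out.println(pendingScripts(1, scripts));
    }
}
```

The point of the version marker is that "refresh from the repo" and "update your database" become the same gesture: after an update, run whatever `pendingScripts` reports and you're in sync with everyone else's schema.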
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
Originally posted by Stan James: I like the idea that I check in a script that modifies the current baseline database along with the code. Other techniques?
Scott's book also mentions these incremental database management scripts:
- a "database change log" contains a chronological list of DDL statements as applied when doing the refactoring (or adding new features)
- an "update log" contains cleanup statements to be executed when the "deprecation period" of a refactoring (a kind of residue for supporting old and new simultaneously) is reached
- a "data migration log" contains DML statements for massaging the data (e.g. changing how a date type is stored in a field, etc.)

This all makes sense to me, but without seeing it in action, I can't really say what issues to look out for... Maybe Scott would like to comment on this?
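One way to picture how those three logs interact: the change log and the data migration log run right away, while each update-log entry carries the date its deprecation period ends and only runs after that. A small hypothetical sketch (method and parameter names are my assumptions, not from the book):

```java
import java.time.LocalDate;
import java.util.*;

// Sketch of combining the three logs described above (names assumed):
// change-log DDL and data-migration DML apply immediately; update-log
// cleanup statements wait until their deprecation period has expired.
public class RefactoringLogs {

    public static List<String> statementsToRun(List<String> changeLog,
                                               List<String> dataMigrationLog,
                                               Map<String, LocalDate> updateLog,
                                               LocalDate today) {
        List<String> result = new ArrayList<>();
        result.addAll(changeLog);        // DDL: e.g. add the new column
        result.addAll(dataMigrationLog); // DML: massage the existing data
        for (Map.Entry<String, LocalDate> entry : updateLog.entrySet()) {
            // cleanup runs only once the deprecation period is over,
            // so old and new schema coexist until then
            if (!today.isBefore(entry.getValue())) {
                result.add(entry.getKey());
            }
        }
        return result;
    }
}
```

Run before the cutoff date, the cleanup statement is held back; run after it, the "residue" supporting the old schema gets dropped.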
Check out The Process of Database Refactoring, which is an update to what was published in the Agile Database Techniques book. It'll work if the database owner is willing to work in this manner; if not, then you've got a problem. Right now you need to hand-jam the database scripts, but soon we'll see database refactoring tools (one has been discussed in the Eclipse DTP effort) that will automate the database side of things for us.
Originally posted by Stan James: We are just moving from an environment with one development application server and database for the team to one per workstation. If I check in Java changes, no problem, everybody refreshes from the repo and we're all in sync. If I check in Java plus schema changes, everybody has to update their database before the code they get from the repo will work. I'm interested in real-life solutions to this, too.
I like the idea that I check in a script that modifies the current baseline database along with the code. Other techniques?
Though I was advocating this same approach, not many on my team supported it. In the end I had to go it alone, while the others used the central development database. The main advantage I see is that we code and complete assignments faster. But people find it quite scary, which to a certain extent I agree with. Usually in projects, even though the developer requests the schema changes, the DB team has its own manager who must approve them, and this approval process is usually slow and defeats the developer's intention of getting things done quickly. If a good process can be centred around this, I think the approach can give good results. But it wrests some control from the DB team's manager, and they don't like that.
Sab

Perfection does not come from belief or faith. Talk does not count for anything. Parrots can do that. Perfection comes through selfless work. Swami Vivekananda
The DB team can still have control over the changes, they just need to find ways to work together with the developers in an agile manner. In the forthcoming book Refactoring Databases (January 2006) we present a strategy for data professionals to do exactly that. It's based on my agile enterprise administration ideas. I think that the real challenge is that the nature of development has changed but most data professionals haven't recognized this yet, part of the cultural impedance mismatch.