Sometimes during maintenance I need to change a JSP and a servlet. In that case I just FTP the files to production and then run a simple Ant script to compile the servlet. I don't know whether I am using the correct procedure or not.
Or should I pack the whole application from the test server every time and deploy it in production? I would like to get some comments about it.
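For reference, a minimal Ant compile target of the kind described might look like the following sketch. The directory layout and the servlet-api jar location are assumptions, not the actual script:

```xml
<!-- Sketch of a minimal servlet-compile target; src/classes paths and
     the servlet API jar location are assumptions. -->
<project name="patch-compile" default="compile">
  <property name="src.dir" value="src"/>
  <property name="classes.dir" value="WEB-INF/classes"/>

  <target name="compile">
    <mkdir dir="${classes.dir}"/>
    <javac srcdir="${src.dir}" destdir="${classes.dir}" debug="on">
      <classpath>
        <!-- the servlet API is needed to compile servlets; path is an assumption -->
        <pathelement location="lib/servlet-api.jar"/>
      </classpath>
    </javac>
  </target>
</project>
```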
If you know exactly what you changed and it isn't more than a few files that need to be replaced, I don't see anything wrong with that. We are doing this, too - a full deployment cycle would just be a waste of time in those cases.
The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice. Day by day, what you do is who you become. Your integrity is your destiny - it is the light that guides your way. - Heraclitus
My thoughts differ. When you FTP the files individually you have little control over exactly what is present on the production server at any time.
What I do is to create a versioning scheme where each build that will be pushed to production is unique. I use the a.b.c scheme and increment the number most appropriate to the changes made. For example, 'a' represents a major version, 'b' a minor version, and 'c' an update to the minor version.
The build creates a war file named projectname-a.b.c.war. I also use Ant to create a manifest in the war file that identifies the a.b.c version.
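A sketch of how such a versioned war with a manifest entry can be built in Ant. The property names, directory layout, and the choice of the standard Implementation-Version manifest attribute are assumptions:

```xml
<!-- Sketch: build projectname-a.b.c.war with the version recorded in its
     manifest. Property names and paths are assumptions. -->
<project name="versioned-war" default="war">
  <!-- a.b.c version; bump the component appropriate to the change -->
  <property name="version" value="1.2.3"/>
  <property name="project.name" value="projectname"/>

  <target name="war">
    <war destfile="dist/${project.name}-${version}.war"
         webxml="web/WEB-INF/web.xml">
      <manifest>
        <!-- record the build version inside the war itself -->
        <attribute name="Implementation-Version" value="${version}"/>
      </manifest>
      <fileset dir="web" excludes="WEB-INF/web.xml"/>
      <classes dir="build/classes"/>
    </war>
  </target>
</project>
```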
I push this war file and unpack it on the production site (by hand, since I am a control freak) as an entity. I also archive previous war files to a reasonable history level.
This way I always know exactly what is deployed on my production sites, and if need be, can regress easily to a previous version if I completely screwed up. [ September 23, 2004: Message edited by: Bear Bibeault ]
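The archiving part of this can be sketched in shell. The directory name, naming scheme, and retention count are all assumptions; the `touch` loop just simulates a history of archived builds:

```shell
#!/bin/sh
# Sketch: keep only the most recent KEEP archived war files, prune older ones.
ARCHIVE_DIR=war-archive
KEEP=3
mkdir -p "$ARCHIVE_DIR"

# Simulate a history of archived builds (in real use, each deployment
# would drop its projectname-a.b.c.war here).
for v in 1.0.0 1.0.1 1.1.0 1.1.1 1.2.0; do
  touch "$ARCHIVE_DIR/projectname-$v.war"
done

# Prune everything but the newest KEEP entries, relying on the version
# in the file name sorting ascending (head -n -K drops the last K lines,
# i.e. prints everything except the KEEP newest).
ls "$ARCHIVE_DIR"/projectname-*.war | sort | head -n -"$KEEP" | xargs -r rm --

ls "$ARCHIVE_DIR"
```

After the prune, only the three newest versions remain in the archive directory.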
I do what Bear said as well, except with ear files.
We actually have several pre-production environments. Any build that is going out of development into those environments (where the users have access) goes in as a full ear. That way when the users report a problem, we can roll back to exactly what they were looking at. It also gives us a repeatable process to deploy.
I'm not gutsy (and maybe not organized) enough to do what Ilja suggested. I have been on a project where the deployed app worked, but the one in version control did not. Nobody was sure what happened, and some folks had gotten into the habit of pushing small changes to the production server, one small piece at a time. Perhaps if everyone had been better organized, and had always been sure to check these small changes into the version control system, they could have gotten away with it.
For me, being well organized means checking in the code, checking it out, testing it, and if the tests pass, deploying it.
True, but my point is, by deploying as an aggregate built from a CVS tag, you know exactly what is on the server at any time and can easily recreate it. Once someone FTP's an individual file up to the server, there is no longer any deterministic way to know what is and is not on the server -- especially if more than one person has access.
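One way to make "what is on the server" observable: if the build stamps an Implementation-Version into the war's manifest (as described earlier in the thread), the running app can report exactly which build is deployed. A minimal sketch; the class name is hypothetical, and the inline manifest string stands in for one loaded from the deployed war:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.jar.Manifest;

// Hypothetical helper: reads the Implementation-Version stamped into
// META-INF/MANIFEST.MF at build time, so you can always ask a deployed
// application which build it is.
public class VersionCheck {

    static String versionFrom(Manifest mf) {
        String v = mf.getMainAttributes().getValue("Implementation-Version");
        return v != null ? v : "unknown";
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a manifest read out of the deployed war file.
        String raw = "Manifest-Version: 1.0\n"
                   + "Implementation-Version: 1.2.3\n\n";
        Manifest mf = new Manifest(new ByteArrayInputStream(raw.getBytes("UTF-8")));
        System.out.println("Deployed version: " + versionFrom(mf));
        // prints "Deployed version: 1.2.3"
    }
}
```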
But what works for you works for you. Hey, I actually like anchovies on pizza even though I've never found anyone else who does.
I get a little nervous when something is said to require strong discipline. Not because I think of myself or those I work with as not being capable, but because that immediately rings a warning bell for me that some risk has increased somewhere.
I don't have a lot of experience automating the development process, but reading the posts during the promotion this week has convinced me to buy a copy of "Pragmatic Project Automation" by Mike Clark and start automating more things, to make better use of my time and to reduce the amount of risk in my development process.
I'm on the side of doing a complete CVS merge, and automating a new complete deployment of the application. [ September 24, 2004: Message edited by: Al Korov ]
Originally posted by Bear Bibeault: Once someone FTP's an individual file up to the server, there is no longer any deterministic way to know what is and is not on the server -- especially if more than one person has access.
I am certainly not advocating to have everyone in a big team do this on a regular basis.
But just yesterday I had to investigate a bug that occurred at a customer installation. I added a line to one class for better diagnostics and had our administrator patch the system with it. One hour later I knew that the problem was a missing entry in the classpath, and two minutes later it was fixed (which was quite important).
Let's compare this to a full deployment process:
- building the system: 1.5 hours
- uploading all jars to the server (over ISDN): 5 hours
- starting the Web Start application, which means *downloading* all the changed jars again over the same ISDN line: another 5 hours
OK, this is quite an extreme example, but I hope it explains where I am coming from when I say: there needs to be a balance between risk and effort. [ September 24, 2004: Message edited by: Ilja Preuss ]
I don't have the time penalties that Ilja described. That would probably change my mind about things!
I like deploying the full ear because it is easier to redeploy if the machine crashes. We can then tell operations to deploy one file, rather than to deploy the main file plus change one and change two and ...
I wonder if there is a difference in opinion on this for people who have full control over the production environment vs those who have someone else deploy. Bear, Dirk: which situation are you in?