Hello, I'm the "Best Practices" (terrible name for a column, I know) columnist for Software Test & Performance magazine (http://stpmag.com). I'm working on my June column, which is on post-deployment performance testing. I have a thread started over at the Performance forum, and Reid M. Pinchback suggested that I try posting here, too.
Basically, I'm looking for stories, anecdotes, etc. from folks willing to go "on the record" about what's worked and what hasn't when it comes to post-deployment tuning. It's always best in journalism to name names and generally be transparent, but I'm happy to negotiate attributions, too. (i.e., if you don't want your name tied to an employer, client, etc., I can say something like "according to a 10-year Java programmer and tester who works at a large Midwestern university.")
I'm at the point in my reporting where I've established a few things: 1) post-deployment tuning is generally a sign of mistakes and general sloppiness upstream; developers should avoid situations where they wind up doing extensive post-deployment tuning whenever possible. 2) most post-deployment tuning is really about tuning hardware, not code, so it's somewhat goofy to write about such a topic in a software developer-focused magazine.
I agree on both points, but I'm trying to plow ahead here. Surely there are stories, anecdotes and so on about code being tweaked post-deployment? I'm particularly curious whether there are tips for writing code that is somewhat easily adjustable, if that makes sense. (i.e., the analogy would be that some auto engines are easy to work on and some aren't. How do you build an engine that's easy to work on and tune?) Sorry for the long post. Any help here is greatly appreciated. My deadline is 4/15, but I'll probably wrap up my reporting next week.
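(For what it's worth, one answer I've heard to the "easy to tune" question is simply to externalize the knobs: keep sizes, timeouts, and pool limits in configuration rather than hard-coding them, so operations can adjust them after deployment without a rebuild. A minimal sketch in Java, with hypothetical property names chosen just for illustration:)

```java
// Hypothetical example: tunable values read from JVM system properties,
// with sane defaults baked in. Ops can override at startup, e.g.:
//   java -Dapp.poolSize=32 -Dapp.cacheSize=5000 TunableConfig
public class TunableConfig {

    // Look up an integer knob by name, falling back to a default.
    // Integer.getInteger reads the system property of that name.
    static int intKnob(String key, int defaultValue) {
        return Integer.getInteger(key, defaultValue);
    }

    public static void main(String[] args) {
        int poolSize  = intKnob("app.poolSize", 8);    // worker threads
        int cacheSize = intKnob("app.cacheSize", 1000); // cache entries
        System.out.println("poolSize=" + poolSize + " cacheSize=" + cacheSize);
    }
}
```

The point is less the mechanism (properties files, environment variables, or a config service all work) than the discipline: every value you might plausibly want to tune in production lives in one visible place, not scattered through the code.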