I've come to the point where I need to provide example hardware specifications for my deployment, and I'm wondering what figures to base my specs on. Should I be digging out an example WebLogic deployment and then adjusting accordingly, or should I be undertaking a calculation from scratch based on concurrent sessions, average memory footprint per session, etc.?
I'm working on my SCEA assignment as well right now. It's Big Smokes -- an e-commerce web-site.
I think it's always good to keep an eye on other architectures, to see if you do something differently, if you understand/have arguments why.
With respect to requirements for memory, CPU, network bandwidth, etc., I think the answer is yes, you should do a rough calculation to have an idea of what you need. The problem is that it's very difficult, or very subjective, depending on how you look at it.
Specifically for this issue I bought "Performance Analysis for Java Web Sites" by Joines, Willenborg and Hygh, but while it's a useful book, it doesn't help a great deal here. It has a number of general discussions on performance which are interesting and may help (e.g. regarding caching reverse proxy servers), and it has some checklists, mainly for estimating network bandwidth. But if you're looking for an answer to the question "Will I meet my performance requirement if I recommend a cluster of two application servers with 2 CPUs each?" -- then this is not your book, or at least I couldn't find it there.
I think you have to go more or less by gut feeling: if you have to support 500 concurrent users, then one dual-core CPU probably isn't going to cut it. Make some assumptions about how many of your "users" are actually loading a page at any moment and how many requests a single core can handle. Add some headroom and that's your estimate.
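To make that gut feeling a little more concrete, here is a sketch of the back-of-envelope arithmetic. All the figures (think time, CPU cost per request, headroom factor) are illustrative assumptions, not measured values -- you'd substitute numbers from your own requirements and load tests:

```java
public class CapacityEstimate {
    public static void main(String[] args) {
        int concurrentUsers = 500;       // from the requirements
        double thinkTimeSeconds = 30.0;  // assumption: average pause between page loads

        // Not all concurrent users hit the server at once:
        double requestsPerSecond = concurrentUsers / thinkTimeSeconds;

        double cpuSecondsPerRequest = 0.05; // assumption: 50 ms of CPU per request
        double coresBusy = requestsPerSecond * cpuSecondsPerRequest;

        double headroomFactor = 2.0; // keep ~50% idle for spikes and GC pauses
        double coresNeeded = coresBusy * headroomFactor;

        System.out.printf("Load: %.1f req/s, cores needed: %.1f%n",
                requestsPerSecond, coresNeeded);
    }
}
```

With these particular assumptions the answer comes out to under two cores, which mostly shows how sensitive the result is to the think-time and CPU-per-request guesses.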
But like I said, I'm working on this and I don't know that I'm going to pass, so if anyone else has comments?
I passed OCMJEA. I browsed the Dell web site for expensive servers, decided what was appropriate for my project, then nicked some of the text from there.
Example, for each app server:
CPU, I included: CPU name, cores, GHz, MB of cache
Memory, I had: GB size, DDR3
Hard drive, I had: GB size
OS, I had: Red Hat Linux 64-bit
I had two active app servers, and one standby server for failover.
I'm about to submit my assignment. In fact, I tried to submit already but somehow I don't seem to have the right permissions in the system. I did part 1 when the exam was still managed by Sun, so perhaps the system doesn't recognize I did part 1 -- I've sent them an e-mail to look into it.
I do have one question about your deployment. What's your availability requirement? Is it 99.99% during core working hours? That's what I had, and it's what Cade and Sheil have in the example in their book.
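For what it's worth, 99.99% over core working hours only is a fairly small downtime budget. A quick calculation, assuming 8-hour working days and 260 working days a year (both assumptions, adjust to your own requirement):

```java
public class AvailabilityBudget {
    public static void main(String[] args) {
        double hoursPerDay = 8.0;    // assumption: core working hours
        double daysPerYear = 260.0;  // assumption: working days per year
        double availability = 0.9999;

        double coreMinutesPerYear = hoursPerDay * daysPerYear * 60.0;
        double allowedDowntimeMinutes = coreMinutesPerYear * (1.0 - availability);

        System.out.printf("Allowed downtime: %.1f minutes per year%n",
                allowedDowntimeMinutes);
    }
}
```

That works out to roughly 12-13 minutes per year, which is why the failover behaviour of the app-server tier matters so much.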
Cade and Sheil have, like you, 2 app servers and one stand-by for fail-over. When I showed this diagram to an office mate, he said, why don't you turn on the third server? Wouldn't that be faster? So that's what I would propose: Three servers, all live, and make sure that you can meet the performance requirements on just two servers in case one goes down.
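The "all three live" proposal can be checked with simple arithmetic: size each server so that the remaining N-1 nodes still carry the peak load when one goes down. A sketch, with an assumed peak load figure:

```java
public class FailoverCapacity {
    public static void main(String[] args) {
        double peakRequestsPerSecond = 100.0; // assumption: peak load on the tier
        int servers = 3;                      // all live, no hot stand-by

        // Each server must cope with the peak when one node is down:
        double perServerCapacity = peakRequestsPerSecond / (servers - 1);

        // In normal operation the load per server is then comfortably lower:
        double normalShare = peakRequestsPerSecond / servers;

        System.out.printf("Size each server for %.0f req/s; normal load %.1f req/s%n",
                perServerCapacity, normalShare);
    }
}
```

So with three live nodes each server needs capacity for half the peak, and in normal operation runs at only a third of it -- you get the failover margin and the extra throughput.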
I can see the use for hot stand-bys in the case of single points of failure, such as the router, load balancer or firewall, but app servers in a cluster are not single points of failure. So why the hot stand-by server?