I am trying to determine how much memory (RAM) an application will consume, given that it has 100 concurrent users and a job reading 1,000 messages per day off an MQ Series queue, aside from the regular CRUD functions.
This is a very broad question. I'd start by estimating how much additional memory your request processing will take:
1. The memory consumed per request should be a multiple of your message size.
2. If you parse the message into tokens, you'll multiply your memory footprint.
3. If you cache messages, multiply the memory by the number of cached messages.
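The three factors above can be put into a back-of-the-envelope calculation. The multipliers here are illustrative assumptions, not measurements from any real system:

```java
// Rough per-request memory estimate from the three factors above.
// All multipliers are illustrative assumptions -- measure on your own system.
public class FootprintEstimate {

    // messageBytes:    raw message size off the queue
    // parseMultiplier: how much parsing into tokens/objects inflates it
    // cachedMessages:  how many messages are held in memory at once
    static long estimateBytes(long messageBytes, double parseMultiplier, int cachedMessages) {
        return (long) (messageBytes * parseMultiplier) * cachedMessages;
    }

    public static void main(String[] args) {
        // e.g. a 3,200-byte message, ~4x parsing overhead, 100 cached messages
        long bytes = estimateBytes(3200, 4.0, 100);
        System.out.println(bytes / 1024 + " KB");   // prints "1250 KB"
    }
}
```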
These are rough approximations. A good thing to do would be to run a tool like Auptyma's JAM on your test system and do some memory analysis, so you can see the amount of memory used by different objects, caches, and threads.
Thanks for the info. But we are only at the funding and requirements stage of our project, where we have to estimate whether we'll need a separate physical server for the application we are about to build.
Can you please at least give me an idea of how much memory it would take to process a 3,200-character message from an MQ Series feed under normal conditions? A rough estimate would do.
Best-case scenario:
- You read the 3,200 bytes into Java chars (2 bytes each) = 6,400 bytes.
- You bind it to a varchar2/char bind value = another 6,400 bytes.
- Assume another 1 KB for other cursor-related overhead.
You consume < 16 KB/message.
Worst-case scenario:
- You'll parse about 400-800 fields/message (4-8 chars per field).
- Each field might go into an object with an average size of 24-80 bytes.
You end up with an additional overhead of about 64 KB/message.
Assuming you are committing data with a batch size of 20-100, you'll be caching 20-100 rows at any given time. Depending on the type and size of the bind variables, you'll need to scale the estimate up further.
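The arithmetic above can be reproduced in a few lines. The figures are the rough per-message estimates from this thread, not measurements:

```java
// Reproduces the rough estimate above: per-message cost, plus the
// extra memory held while a commit batch of rows is cached.
public class MessageMemory {

    // Best case: chars stored as UTF-16 (2 bytes/char), once for the
    // message and once for the bind value, plus ~1 KB of cursor overhead.
    static long bestCaseBytes(int messageChars) {
        long utf16 = messageChars * 2L;   // Java chars are 2 bytes each
        return utf16 + utf16 + 1024;      // message copy + bind copy + cursor
    }

    // Worst case adds per-field objects: 400-800 fields at 24-80 bytes
    // each is roughly an extra 64 KB/message at the high end.
    static long worstCaseBytes(int messageChars, int fields, int bytesPerField) {
        return bestCaseBytes(messageChars) + (long) fields * bytesPerField;
    }

    public static void main(String[] args) {
        long best  = bestCaseBytes(3200);             // 13,824 bytes, < 16 KB
        long worst = worstCaseBytes(3200, 800, 80);   // 77,824 bytes
        long batch = worst * 100;                     // 100-row commit batch cached
        System.out.println(best + " / " + worst + " / " + batch);
    }
}
```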
Of course this is an absurd answer to an absurd question, because I am sure there'll be other activity happening besides message processing. If you are currently looking at adding some functionality to an existing server and want to calculate the additional memory overhead, this will give you an idea. The best thing to do is to get a similar system and run a tool like JAM to get the size and number of objects involved.
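If a profiler like JAM isn't available, a very crude aggregate measurement is possible with the standard `Runtime` API. This only shows total heap growth around a piece of work, not a per-object breakdown, and `System.gc()` is just a hint to the JVM, so treat the numbers as approximate:

```java
// Crude heap-delta measurement using the standard Runtime API.
// A profiler gives per-object breakdowns; this only shows the
// aggregate heap growth caused by a piece of work.
public class HeapDelta {

    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.gc();                    // hint only: try to settle the heap first
        long before = usedHeap();

        // ... do the work you want to size; a stand-in workload here:
        byte[][] messages = new byte[100][3200];

        long after = usedHeap();
        System.out.println("Approx. bytes used: " + (after - before));
    }
}
```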
100 concurrent with a job reading off 1,000 messages per day from an MQ series
I am reading this as saying up to 100 users "at one time", and each user may have 1,000 messages a day. Is that right?
It seems to me you will need to size the system to meet the surges at peak times, but most of the 24 hours will have little load. Therefore your application could live on a server that does lower-priority jobs most of the time.