JavaRanch » Java Forums » Java » EJB and other Java EE Technologies

mdb scalability (WebSphere)

jeetu krishnan
Greenhorn

Joined: Feb 13, 2006
Posts: 12
Hi All,
Currently we are facing performance issues in one of our applications, which uses Message Driven Beans (transaction type is bean-managed). The queues are also built as persistent.

Appreciate any guidance provided by you on this subject.

Following is the architecture of our application: (Websphere app server 5 and 6 / Websphere MQ JMS provider)

app 1 (deployed on was 5.X) -----> App2 deployed on was 6 --->database.

Issue: There are delays between when a message is sent by application 1 and when it is picked up by application 2. The issue is exacerbated as transaction volume increases.

Out of 57,000 transactions sent by application 1 to application 2 over a period of 3 hours, 5,000 have delays ranging from 1 to 80 seconds.
Of those 5,000 delayed transactions, 4 beans on application 2 account for 80% (these beans are called quite frequently).

Current Settings for Application 2:
1 Connection Factory
Max Connections: 10
Session Pool : 120

Thread Pool for Listener : 100
Max Sessions for Listener : 40
Max Messages for Listener: 1

Application 2 uses the same connection factory when sending the reply back to application 1.

Please advise.
Thanks,
Jeetu
Andreas Schaefer
Ranch Hand

Joined: Feb 13, 2006
Posts: 63
I am not a WebSphere expert, so I cannot advise you on the settings. That said, delays with MDBs are not uncommon and are actually, most of the time, a desired feature, because you want to handle the messages asynchronously. So I would not worry too much about the delays unless you get a permanent backlog. If you do, then you need to increase the app server's ability to handle more messages concurrently.
Basically an MDB is like a stateless session bean (SLSB) that is invoked not by a client but by an incoming message. If you have an SLSB pool size of 100, then you can handle no more than 100 messages at the same time. In addition, the thread pool defines the number of threads that can be used by MDBs to receive a message (here I am not quite sure whether WebSphere means for all MDBs or for a particular MDB). But be aware that every thread has a stack size of around 1/2 to 1 MB, so they can eat up a lot of memory.
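To picture the point about session/pool limits, here is a small stand-alone Java sketch (plain java.util.concurrent, not WebSphere code; the class name and the numbers are made up for illustration). A fixed thread pool stands in for the listener sessions: once all sessions are busy, additional messages simply wait, which is exactly the kind of pickup delay described above.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of a listener with a fixed number of sessions: messages beyond
// the session count wait in the queue, which shows up as delivery delay.
public class MdbConcurrencySketch {

    static int runAndGetPeak(int maxSessions, int messages) throws InterruptedException {
        ExecutorService sessions = Executors.newFixedThreadPool(maxSessions);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(messages);
        for (int i = 0; i < messages; i++) {
            sessions.submit(() -> {
                int now = inFlight.incrementAndGet();          // deliveries in progress
                peak.accumulateAndGet(now, Math::max);
                try { Thread.sleep(5); }                       // pretend onMessage() work
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                inFlight.decrementAndGet();
                done.countDown();
            });
        }
        done.await();
        sessions.shutdown();
        return peak.get();                                     // never exceeds maxSessions
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("peak concurrent deliveries: " + runAndGetPeak(5, 50));
    }
}
```

With 5 "sessions" and 50 "messages", the peak concurrency is capped at 5 no matter how fast messages arrive; the other 45 queue up, just like messages waiting for a free listener session.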

-Andy
Mark Spritzler
ranger
Sheriff

Joined: Feb 05, 2001
Posts: 17257

jeetu, please try to be careful about duplicate posting. Sometimes you can accidentally hit the Add Post button twice and create the same thread twice.

I deleted the duplicate, and left the one with the great answer from Andreas.

Mark


Reid M. Pinchback
Ranch Hand

Joined: Jan 25, 2002
Posts: 775
Originally posted by jeetu krishnan:
Issue: There are delays between when the message was sent by application 1 and when it was picked up by the application 2. This issue is exaggerated when the volume of transaction increases.


This will likely be a combination of three things:

1. You said you persist the queue. JMS serialization/deserialization can be a noticeable performance drag if these messages are big. You may want to check to see if persistence of the queue really matters for this situation.

2. As the load on "application 2" increases, MDBs will take longer to be released back to the instance pool, and hence longer for the server to pull off a message to assign it to an instance. Sometimes increasing the pool size helps. Sometimes *decreasing* the pool size helps. If your MDBs are doing a lot (and 80 seconds sounds like transactions that are doing a lot), then MDB instances are consuming other resources that themselves may be too constrained to support all the instances in a pool. Experiment with different pool sizes to see what happens.

3. Bad exception handling killing MDB instances instead of allowing them to be released back to the pool. A common mistake is to allow variations of runtime exceptions to propagate out of the onMessage method instead of marking the transaction for rollback. If that happens too often, too many of your messages have to pay the cost of MDB instance creation and the associated container callbacks.
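Reid's third point can be sketched in a few lines of Java. This is not WebSphere API code: `Message` and `MdbContext` below are hypothetical stand-ins for `javax.jms.Message` and `MessageDrivenContext`, so the file compiles without an app server on the classpath. (Note that `setRollbackOnly` applies to container-managed transactions; with bean-managed transactions, as in the original post, you would roll back the `UserTransaction` instead.)

```java
// Hypothetical stand-ins for javax.jms.Message and MessageDrivenContext so the
// sketch compiles without an app server on the classpath.
interface Message { String body(); }

class MdbContext {
    private boolean rollbackOnly;
    void setRollbackOnly() { rollbackOnly = true; }
    boolean getRollbackOnly() { return rollbackOnly; }
}

public class SafeMdbSketch {
    private final MdbContext ctx = new MdbContext();

    // Catch runtime failures and mark the transaction for rollback instead of
    // letting the exception escape onMessage(). An escaped RuntimeException
    // makes the container discard this MDB instance, so frequent failures
    // force constant instance re-creation plus the associated callbacks.
    public void onMessage(Message m) {
        try {
            process(m);
        } catch (RuntimeException e) {
            ctx.setRollbackOnly();   // message is redelivered; the instance survives
        }
    }

    private void process(Message m) {
        if (m.body() == null) throw new IllegalArgumentException("empty message");
        // ... real work would go here ...
    }

    boolean rolledBack() { return ctx.getRollbackOnly(); }

    public static void main(String[] args) {
        SafeMdbSketch mdb = new SafeMdbSketch();
        mdb.onMessage(() -> null);   // bad message: rolled back, not thrown
        System.out.println("rolled back: " + mdb.rolledBack());
    }
}
```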


Reid - SCJP2 (April 2002)
jeetu krishnan
Greenhorn

Joined: Feb 13, 2006
Posts: 12
The MDBs do not take 80 seconds to process; most of them respond in milliseconds.

The 80 seconds is time lost between when the message is sent from app 1 and when it is picked up by the MDB in app 2.

The problem is exacerbated during peak hours. I don't think the MQ channels are the ones creating the delay, because a message fired after the delayed message is picked up immediately by app 2.

The default pool size for the MDB in WAS 6 is 50 min and 500 max. WebSphere PMI shows a live bean count of around 300, from which we can infer that enough beans are available.

As per my understanding, the following are the tunable parameters on the listener:
1) Queue connection pool
2) Queue session pool
3) Max sessions on listener
4) Container pool size
5) Max messages on listener

Am I missing some other parameter?

How are the MDBs invoked by the container? Is there some setting which decides how many MDBs can be invoked at the same time?


Thanks,
Jeetu
Jorge Ribeiro
Ranch Hand

Joined: Oct 18, 2005
Posts: 40
I think this is not an issue related to WebSphere, JMS, or MDBs, but to MQSeries itself. Have you tried to tune it? MQSeries is a very powerful tool with many peculiarities. With this kind of message volume, it is very likely that MQSeries is trying to optimize the process: instead of sending messages one by one, MQSeries processes a batch of messages at a time. This minimizes the overhead of communication and persistence (and don't forget that messages have to be written to the log files), but it can have side effects like the one you're describing. There is a parameter that defines this value, but whose name I don't recall. Let's say that MQSeries processes batches of 5000 messages. That means that a queue manager X won't send any messages to queue manager Y until it has a set of 5000 messages targeted to Y. This causes very strange behaviour in the application: long periods of inactivity, long waiting times, and short periods of very high activity.
jeetu krishnan
Greenhorn

Joined: Feb 13, 2006
Posts: 12
I hope so. Next week we are moving app 1 to WAS 6, so it will reside on the same box/queue manager as app 2. We are also making some changes to make the messages non-persistent. This should eliminate all issues related to MQ.
Jorge Ribeiro
Ranch Hand

Joined: Oct 18, 2005
Posts: 40
It will be faster, as it does not write the messages to the log files. Still, the problem I was describing will not go away. What is the value of the batch size property of your channels?
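For reference, the channel attribute being described is most likely BATCHSZ (with BATCHINT, the batch interval, as a related knob). A minimal sketch of checking and changing it in runmqsc follows; the channel name APP1.TO.APP2 is made up for illustration, and the right values depend entirely on your setup:

```
DISPLAY CHANNEL(APP1.TO.APP2) BATCHSZ BATCHINT
ALTER CHANNEL(APP1.TO.APP2) CHLTYPE(SDR) BATCHSZ(50)
```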