JavaRanch » Java Forums » Products » Websphere

Hardware Setup for 900 beans???

Vignesh Pillai
Greenhorn

Joined: May 24, 2002
Posts: 13

Hi all,
I have newly joined a banking project where we are using around 900 beans: 300 session beans and 600 BMP entity beans. To me, this seems like too many for an application server to handle.
On the hardware side, again the picture doesn't seem good.
RS6000 box - 2 way processor - 5 GB RAM.
Software Used:
WebSphere 3.5 on AIX
What hardware setup would give the best performance for this number of beans without being too expensive?
Is this many beans normal in a banking project, or are we using too many?
Any enlightenment on this is appreciated.
Bye
PV


OCPJP
Steve Granton
Ranch Hand

Joined: Jan 13, 2002
Posts: 200
Hi,
I've not had any experience of working with that many beans, but purely from a design point of view, may I suggest that your beans are too granular? Have you read the Core J2EE Patterns book? It has good practices and recommendations.
As I remember, one of the main bad practices with entity beans is that people often map the entity beans directly to the relational model -- consequently you end up with lots of fine-grained EJBs.
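To make that concrete, here is a minimal plain-Java sketch of the coarse-graining idea (the Composite Entity pattern from Core J2EE Patterns): rather than one remote entity bean per table, the dependent rows become plain objects owned by a single parent entity, so one bean load replaces several. All class and field names here are illustrative, not from the actual project, and modern Java syntax is used for brevity.

```java
import java.util.ArrayList;
import java.util.List;

// One-entity-bean-per-table design would make AccountLimit its own remote BMP.
// As a dependent object it simply travels with its parent Account.
class AccountLimit {
    final String type;
    final double amount;
    AccountLimit(String type, double amount) { this.type = type; this.amount = amount; }
}

// Coarse-grained parent: in a real deployment this would be the (single) BMP
// entity bean, loading its dependent rows in its own ejbLoad.
class Account {
    private final String id;
    private final List<AccountLimit> limits = new ArrayList<>();

    Account(String id) { this.id = id; }

    String getId() { return id; }
    void addLimit(AccountLimit limit) { limits.add(limit); }
    List<AccountLimit> getLimits() { return limits; }
}
```

Collapsing dependent tables into their parent this way is one of the standard routes from 600 fine-grained BMPs toward a number the container can reasonably manage.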
Also, you don't mention whether the Session beans are Stateful or Stateless which obviously has implications for scalability.
Do your session beans each provide only a small range of functionality, maybe just a couple of business methods? Could these methods be grouped together more logically, with each session bean providing a range of services rather than a single service?
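A minimal sketch of that grouping (the Session Facade idea), in plain Java since container plumbing is beside the point -- the service and method names are hypothetical, not from the project:

```java
// Before: three stateless session beans, each exposing one business method,
// each with its own home/remote interface and deployment descriptor entry.
// After: one coarse-grained facade bean offering the related services together.
class CustomerAccountService {
    String openAccount(String customerId) {
        // placeholder for the real open-account business logic
        return "opened:" + customerId;
    }
    String closeAccount(String accountId) {
        return "closed:" + accountId;
    }
    double queryBalance(String accountId) {
        // placeholder: would delegate to the entity layer
        return 0.0;
    }
}
```

Three deployable beans become one, which also cuts the stub/skeleton generation time mentioned below.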
It may be worth thinking about some refactoring so that the code is a more manageable size -- it must be a nightmare to assemble and deploy all those beans, and WAS must take forever to generate the stubs and skeletons. We have 7 stateless session beans, and ejbdeploy on WAS 4.0 takes 3-4 minutes.
I hope this helps.
Cheers,
Steve
[ May 25, 2002: Message edited by: Steve Granton ]
Kyle Brown
author
Ranch Hand

Joined: Aug 10, 2001
Posts: 3892
That many beans is insane. This is simply not good J2EE design; I've seen too many projects like this fail miserably before.
You should strongly suggest to your management that they contact IBM directly for assistance in reworking the application -- this project has very little chance of working as designed on this hardware...
Kyle


Kyle Brown, Author of Persistence in the Enterprise and Enterprise Java Programming with IBM Websphere, 2nd Edition
See my homepage at http://www.kyle-brown.com/ for other WebSphere information.
Vignesh Pillai
Greenhorn

Joined: May 24, 2002
Posts: 13

Hi Kyle and Steve,
As Steve said, most of the entities are relationally mapped. There are nearly 200 beans (stateless session beans and BMPs together) for setup data alone.
Kyle, IBM has been working with the team for nearly two months on improving performance and conducting stress tests.
Just a few days back, they asked us to go with this architecture:
Network Dispatcher running on a two-way eServer with Linux. Two HTTP servers running on separate 2-way RS6000 boxes. Three app servers running on separate RS6000s -- two with 4-way processors and one with a 6-way processor.
All of the above with 5 GB RAM, and all three app servers having 2 clones each.
The major constraint is that the project has to go live in just another two months, and management doesn't want to change the existing setup. Functionally, the project is working fine.
So Kyle, what do you feel about the situation?
Bye.
Vignesh
Kyle Brown
author
Ranch Hand

Joined: Aug 10, 2001
Posts: 3892
Well, this hardware setup has a MUCH better chance of running the application than the previously described setup (which was only a single 2-way RS6000!)
However, it really depends on the volume of traffic that the application will have to support. In the end, it's not the size of the app but the number of users that will be the real determining factor here.
Kyle
Vignesh Pillai
Greenhorn

Joined: May 24, 2002
Posts: 13

At any given point, there will be 1,000 to 1,500 users connected to the server. Is this volume OK?
The comfort level is better with the online modules. The major concern has been the End of Day and End of Month services; EOD currently runs for approximately 4 hours.
Now we face difficulty in implementing the earlier-mentioned setup for the following reason:
We have some singletons and a static data cache. How should these be shared across vertically scaled servers with totally independent JVMs?
Vignesh
Kyle Brown
author
Ranch Hand

Joined: Aug 10, 2001
Posts: 3892
If the total number of "logged in" users is in the 1,000 to 1,500 range, you should be OK (though it depends on how the application is coded, naturally). If, however, the number of users making simultaneous requests is 1,000 to 1,500, then you're hosed -- you'd probably need twice the given number of processors to pull that off.
However, your mileage may differ. The ONLY way to find out if this will work is to stress-test it.
Kyle
Kyle Brown
author
Ranch Hand

Joined: Aug 10, 2001
Posts: 3892
Originally posted by Vignesh P:
At any given point, there will be 1,000 to 1,500 users connected to the server. Is this volume OK?
The comfort level is better with the online modules. The major concern has been the End of Day and End of Month services; EOD currently runs for approximately 4 hours.
Now we face difficulty in implementing the earlier-mentioned setup for the following reason:
We have some singletons and a static data cache. How should these be shared across vertically scaled servers with totally independent JVMs?
Vignesh

If the singletons are read-only, you have no problem -- having a copy in each JVM will not be an issue. However, if the data in the cache changes (e.g. it's a write-through cache), then you've got a problem. The only way to handle that in a semi-reliable way is to use JMS messaging to notify all the clones (through publish-subscribe) when cache values change in any clone; that gives the other clones a chance to dump their cache and re-read it from the database the next time.
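The invalidation scheme described above can be sketched in plain Java. An in-process broker stands in for the JMS topic here (a real deployment would publish over MQSeries pub/sub across the clone JVMs), and all class and method names are illustrative assumptions:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Stand-in for a JMS topic: fan out invalidation messages to every subscriber.
class InvalidationBroker {
    private final Set<CloneCache> subscribers = new CopyOnWriteArraySet<>();
    void subscribe(CloneCache c) { subscribers.add(c); }
    void publishInvalidate(String key) {
        for (CloneCache c : subscribers) c.invalidate(key);
    }
}

// One per clone JVM: a local cache that drops entries when told to.
class CloneCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final InvalidationBroker broker;

    CloneCache(InvalidationBroker broker) {
        this.broker = broker;
        broker.subscribe(this);
    }

    // null signals "stale/absent -- reload from the database".
    String get(String key) { return cache.get(key); }
    void putLocal(String key, String value) { cache.put(key, value); }

    // Write-through: tell every clone to dump the stale entry, then keep
    // the fresh value locally. Other clones re-read from the DB on next access.
    void writeThrough(String key, String value) {
        broker.publishInvalidate(key);
        cache.put(key, value);
    }

    void invalidate(String key) { cache.remove(key); }
}
```

The messaging is fire-and-forget, so this is only "semi-reliable" exactly as described: a clone that misses the message serves stale data until its next invalidation or restart.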
Kyle
Vignesh Pillai
Greenhorn

Joined: May 24, 2002
Posts: 13

Hi Kyle,
For your first query: 1,500 is the maximum number of connected clients, with hopefully no more than 50 simultaneous requests at any given point in time. We stress tested the system with just a 2-way processor and 4 GB RAM using Rational Robot, and the system responds fairly well at around 75 simultaneous requests. It should perform even better with the earlier-mentioned hardware setup.
For the second: we have dynamic data caches, so we can't simply replicate them on each JVM.
IBM has also recommended the MQSeries approach of dumping caches into queues, and we are working on it. But is there any way other than JMS to use dynamic data caches while running on vertical clones?
Vignesh
[ June 01, 2002: Message edited by: Vignesh P ]
Kyle Brown
author
Ranch Hand

Joined: Aug 10, 2001
Posts: 3892
Well, it sounds like your second hardware configuration should work. I think you're probably OK there.
And no, short of using JMS there is no way to do distributed caches in WAS 4.0.
Kyle
 