- JDK 1.4.2
- JSF 1.1 (Myfaces 1.1.6)
- Ajax4JSF 1.1.1
- Tomahawk 1.1.8
- Tiles 2.1.0
- Spring 2.5, Spring Security for security layer
- iBatis 2.0
- SQL Server 2000, DB2 8, Sybase
- IBM Websphere 6.0 - JVM memory min - 64MB, max 512MB
- IBM MQ Series 6.0
- IBM AIX UNIX, load balancing on 2 servers (clustered environment); each UNIX box has 2 CPUs
Our application uses MQ for a lot of its business transactions; it depends more on MQ than on the database. Among the databases, SQL Server is the main one; DB2 and Sybase are used for a few transactions.
This application is expected to handle a load of 1000 users in production with a response time of 10 seconds.
As we started load testing we saw poor response times. We profiled with JProfiler and fixed some application bugs that were causing high JVM utilization.
As we reached a 250-user load, MyFaces started eating memory (this was revealed by a heap dump).
We tuned MyFaces based on tips from various websites and did the following:
1) Set the state saving mechanism to "server"
2) Set the number of views kept in session to 3
3) Used the streaming resource handler org.apache.myfaces.component.html.util.StreamingAddResource with t:documentHead
4) Set org.apache.myfaces.SERIALIZE_STATE_IN_SESSION to false
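For reference, these MyFaces settings map to web.xml context parameters along these lines (param values are the ones we used):

```xml
<!-- JSF state saving on the server -->
<context-param>
  <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
  <param-value>server</param-value>
</context-param>
<!-- Limit the number of views kept per session -->
<context-param>
  <param-name>org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION</param-name>
  <param-value>3</param-value>
</context-param>
<!-- Keep view state as live objects rather than serialized blobs -->
<context-param>
  <param-name>org.apache.myfaces.SERIALIZE_STATE_IN_SESSION</param-name>
  <param-value>false</param-value>
</context-param>
```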
These changes improved performance only a little.
I later wrote some filters to cache the images and CSS files, which improved screen load performance a bit.
But the overall response time still missed the benchmark: we were getting 20 seconds at a 250-user load, far from the target for a 1000-user load.
We took another heap dump and saw that MyFaces continued to eat memory. The troublesome object in the heap was JspStateManagerImpl$SerializedViewCollection.
I read on a website that this object saves old view states in a weak hashmap that never gets garbage collected, and thought that could be the problem. I found a fix on that site and replaced the jars with the corrected ones. Now JspStateManagerImpl no longer stores old views in the weak hashmap.
This actually helped a bit and reduced memory utilization.
But when we run with 500 users, the heap dump still shows the JspStateManagerImpl object consuming approximately 1.6MB.
I am not sure whether 1.6MB in the heap is normal for 500 users!
I know the screen size also makes a big difference, so it is hard to conclude anything upfront. Let me provide more information.
On average, each screen uses about 25 components.
Each screen has selectItems drop-downs, and the select items are in turn referenced by a session-scoped managed bean.
Apart from that object, we store only 3 managed beans in session; these carry menu and user information.
JSCookMenu in turn reads the menu object from the session and renders the output on every screen.
I wrote a session-size-calculator JSP to find the size of each of these session objects.
They are hardly 20~30KB each. But the JspStateManagerImpl object in session is easily 150KB at minimum, and sometimes goes above 450KB.
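For anyone curious, the rough idea behind such a size calculator (a sketch, not our exact JSP) is to serialize each session attribute and count the bytes:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Rough estimate of an object's footprint: serialize it and count
// the bytes. Only works for Serializable session attributes, and the
// serialized size is an approximation of the in-memory size.
public class SessionSizeEstimator {
    public static int estimateSize(Serializable obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(obj);
        out.close();
        return bytes.size();
    }

    public static void main(String[] args) throws Exception {
        // In a real JSP you would loop over session.getAttributeNames()
        // instead of a hard-coded sample object.
        System.out.println(estimateSize("a sample session attribute"));
    }
}
```

In the JSP version you would iterate over the session's attribute names and print one line per attribute.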
We use Tomahawk's t:saveState to store some object information.
I should admit that we do use EL expressions on many of our screens.
We have limited usage of dataTables, but wherever we have used them, we bound them to a request-scoped managed bean with preserveDataModel set to true.
Wherever JSF components were not needed, we used plain HTML tags enclosed within f:verbatim.
Each page contains at least 4 command buttons, each of which calls some other managed bean and renders a different screen.
This means that, apart from the main managed bean, the beans referred to by these buttons are instantiated when the screen is rendered.
Of course, all these managed beans are in request scope.
Most of the components on our screens use the "rendered" attribute, driven by business logic.
Now my question is: did I miss anything else in MyFaces? Is there anything that could help me tune JSF further?
Our constraint is that we cannot move to JDK 1.5 (which JSF 1.2 or higher requires), as it would be a big infrastructure cost for our clients.
I know the poor response time could also be due to the database, MQ and other factors. We are working on those in parallel.
But I want to eliminate all JSF related issues from the picture.
10 second response time is fairly generous, by my standards. And no, I don't consider a few megabytes of server RAM as an outrageous price when you're talking hundreds of users.
The JRE restrictions are another matter. The one time when you do NOT want to be cheap or casual is when you intend to support hundreds of users. Hopefully whatever those users are doing is important enough that the cost-per-user is worth it. However, I'm not sure what server you're using that would still be running JDK 1.4 anyway if it can support even JSF 1.1. I suspect that the server's actually running a JDK 1.5, just that the application itself is coded to Java 1.4, and I doubt that recoding to use the extra goodies in JDK 1.5 would help in that case.
It sounds like you've made a good start on analyzing and addressing your issues, and there's not a whole lot I could add to that (at least for free!). But one thing I will recommend is that you look at the actual real-world usage of that webapp and see what can be done to boost the parts that are actually going to see the load as opposed to a general test that abuses the entire site uniformly. One of the great things about JSF is that it's not an exclusive framework. So if you have something that makes heavy demands, you can implement it with an alternative technology without interfering with the JSF parts. For example, a shopping site may have low usage on admin pages, medium usage on checkout and shopping cart and high usage on catalog listing pages. In which case, you might want to redo the catalog listing pages as servlets or JSPs. Or blend in Struts.
It does sound like you're a good candidate for JSF 2.0, when it goes live. I've been told that it's now very solid, but until they publish it to the main Maven repositories I'm not prepared to use it in production myself.
An IDE is no substitute for an Intelligent Developer.
We are on WebSphere 6.0, which supports only JDK 1.4.
WebSphere 6.1 and above support JDK 1.5.
JSF 1.2 relies on APIs available only in JDK 1.5, hence we are forced to use JSF 1.1.
We are using Ajax4jsf for re-rendering within the same screen, i.e. if I am on page-1 and a section of the screen should be updated, we re-render that section using Ajax4jsf. Thanks to AJAX, this works really fast.
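For context, a typical partial re-render in Ajax4jsf looks roughly like this (component ids and bean names here are made up for illustration):

```xml
<h:form>
  <!-- reRender points at the id of the subtree to update via AJAX -->
  <a4j:commandButton value="Refresh section" reRender="detailPanel"/>
  <h:panelGroup id="detailPanel">
    <h:outputText value="#{orderBean.status}"/>
  </h:panelGroup>
</h:form>
```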
Our application screen layout is something like the following:
1) The top portion is formed by header and tabs
2) The left side is formed by menu (rendered by JSCookmenu)
3) The bottom portion forms the footer
Only the body portion which is in the center keeps changing with different contents. The header, footer and the menu remains the same for all screens.
Now my question is: if I have to load a different page, can I use Ajax4jsf? That is, if I move from page-1 to page-2, can we render page-2 entirely using Ajax4jsf? Basically, can I refresh the body portion (the JSF sub-view) alone?
To my knowledge the answer is NO, as the JSF state object for the next screen is not yet created, and Ajax4jsf uses the state object that was already created. Am I right?
The problem is that the entire client-side document object is discarded and a new document is built from scratch when you do a full page request. It's not so much the JSF context that's the issue, it's the whole page document structure.
You might be able to work around this issue by dynamically replacing a panel in the page, although depending on how extensive you get on this, it could get messy to maintain.
Another option would be to see if you can't make some of the framing stuff cacheable. That way you could at least spare the effort of rebuilding them server-side and re-downloading them.
After a lot of research and trial and error, we have resolved the issue in our application. It turns out the issue we were having was caused by a SQL query that loads the menu, not by JSF. Wow, that's good news, isn't it! Someone had posted in this forum: "Do not blame JSF for poor performance". How true that is! Most of the time the framework we use is not the culprit; it is how we use the framework.
We performance-tested our JSF application with 500 users and got a response time of 7 seconds, which is what we wanted. I feel our application can even scale up to 1000 users and still respond well.
I am posting the list of things we did to tune our application; I thought it could be useful for those who are running around thinking JSF performs poorly.
First step: profile your application using JProfiler or some other tool and eliminate the first set of problems. A profiler should reveal issues in your application code.
JSF performance tuning tips:
Things to be taken care during design/coding:
1) Do not use dataTables where they are truly not required. Even for a search results table, you can often manage with t:dataList; go for a dataTable only when you need links or buttons within the table.
2) Do not use panelGrids or other JSF components unnecessarily when you can do the same with regular HTML tags. For example, for a static table, just enclose regular HTML table tags in f:verbatim. This reduces the burden of component tree creation.
3) Use Facelets instead of Tiles.
4) If you have an option to switch to JSF 1.2 or above, please do it.
5) Follow the link http://wiki.apache.org/myfaces/Performance and apply all the tips for the server-side state mechanism. Be careful when setting org.apache.myfaces.SERIALIZE_STATE_IN_SESSION to false: in distributed/clustered applications where the app server must replicate sessions across both servers, it should rather be turned on. For our application, we set it to true. The number of views need not be 20! For our application we set it to 5, which reduces memory consumption in the JVM.
6) Write your own state manager and override the method that stores objects in session. MyFaces 1.1.6 and below has logic that stores old views in a weak hashmap; modify your state manager to remove this behavior.
7) Write your own navigation handler to remove unnecessary views from session. For example, once the user has logged in, it makes no sense to retain the login screen's view in session, so remove it. For our application, we wrote our own state manager and navigation handler and load a static map of view ids that need not be stored in session. When the user navigates from one screen to another, the navigation handler checks whether the view id exists in the map; if it does, it asks the state manager to remove that state from session. Likewise, the state manager does not spend time serializing the state of any view id found in the map. This saves some time and reduces the burden on memory.
8) Use AJAX as much as you can.
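To illustrate the custom state manager and navigation handler described above: the core decision is just a lookup against a set of "transient" view ids. A simplified, JDK 1.4-style sketch (the class and view ids are made up; the real code plugs into the MyFaces StateManager and NavigationHandler):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the "should this view's state be kept in session?" check
// shared by our custom state manager and navigation handler.
public class TransientViewRegistry {
    // View ids whose state is never worth keeping (hypothetical names).
    private static final Set TRANSIENT_VIEWS = new HashSet(Arrays.asList(
            new String[] { "/login.jsp", "/logout.jsp", "/error.jsp" }));

    // The state manager skips serializing these views; the navigation
    // handler evicts their saved state when the user moves on.
    public static boolean shouldSaveState(String viewId) {
        return !TRANSIENT_VIEWS.contains(viewId);
    }
}
```

In the real application this set was loaded from a static map at startup, so adding a new transient view was just a configuration change.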
After doing the above, if you still find screen response to be slow, it is time to look thoroughly at your application code and environmental factors.
Check whether your OS box and app server are good enough to take a high user load. Check the clustering setup for your application and the RAM/CPU capacity of the underlying OS box. If it is not adequate for a high user load, there is no point breaking your head tuning the application.
Whenever you are doing performance testing:
1) Turn on vmstat on the UNIX box (equivalent to perfmon on a Windows box). This should reveal the CPU utilization.
2) Turn on verbose garbage collection logs on the app server.
3) Verify that the JVM memory settings are good for your application; check that the min/max heap sizes fit your application.
4) Turn on database monitoring scripts, e.g. Statspack in Oracle.
5) Get forced heap dumps from the app server.
Do the performance testing and see what is revealed by each of the above.
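As an example for the GC logging and heap sizing items above, the JVM arguments look roughly like this (exact flags vary by JDK vendor, and the heap sizes here are only placeholders; on WebSphere these go into the server's Generic JVM Arguments in the admin console):

```
-verbose:gc -Xms256m -Xmx512m
```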
If vmstat shows high CPU utilization, it could be due to other services on the same box, or your application itself could be the cause.
If garbage collection activity is high, your JVM memory may not be sufficient, or your application may have a bug that creates huge objects.
The garbage collection graph will tell you the minimum and maximum heap used during your load test; if no application code issue exists, it helps you set your JVM heap size to an optimal number.
The DB monitoring scripts should reveal problematic queries.
Analyze the heap dumps and check the object utilization tree; it should reveal problem areas.
If none of the above helps, check the thread pool and connection pool settings in your application. Turn on the Performance Monitoring Infrastructure on your app server (we are using WebSphere, so I am not sure what the equivalent is on other servers) and run the load test again. This should show you long-running threads and connection pool behavior.
If there are long-running threads caused by application code, it may be difficult to spot the exact area. Luckily we were using Spring, so we made use of AOP: we wrote universal before- and after-advice interceptors for all the programs. The interceptor calculates the time taken by each method and writes it to the logs. If you do not use Spring, you would have to visit every method and record the start and end times yourself. During load testing, turn off all logs except the interceptor's (or whatever else records the timings). Also write a custom JSF phase listener and record the time taken for each phase and each user session. Then go through the logs and carefully list the time for each method and each user session, and check the times revealed by the phase listener. This time you should catch the main culprit in your code, or the external component affecting your application.
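If you don't have Spring handy, the same timing idea can be sketched with a plain JDK dynamic proxy (available since JDK 1.3) wrapped around any interface-based service; this is an illustration of the approach, not our actual interceptor:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Wraps any interface-based object and logs how long each method call
// takes, mimicking what our Spring before/after advice did.
public class TimingProxy implements InvocationHandler {
    private final Object target;

    private TimingProxy(Object target) {
        this.target = target;
    }

    // Returns a proxy implementing all interfaces of the target;
    // every call is timed before being delegated.
    public static Object wrap(Object target) {
        return Proxy.newProxyInstance(
                target.getClass().getClassLoader(),
                target.getClass().getInterfaces(),
                new TimingProxy(target));
    }

    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return method.invoke(target, args);
        } finally {
            System.out.println(method.getName() + " took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }
}
```

Replace System.out with your logging framework, and during load tests keep only this timing log enabled, as described above.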