Resultset caching causing heap dump

 
Ranch Hand
Posts: 120
Hi Friends,

We support a B2C public-facing portal. The sites are rendered using a Java-based CMS.

We are currently facing a lot of outages in our production environment, where the JVM exits after printing a core dump/heap dump.

After careful analysis of the log files, we concluded that:

1.) We use result-set caching at the application server level, and this result-set caching is used to improve the performance of our application.

2.) The result sets are up to nearly 300 MB in size, and there are multiple such objects in memory of various sizes, varying from 34 MB to 300 MB.

3.) These large objects are causing allocation failures in our application server's JVM, and the allocation failures occur even when we have 500 MB of free space.

4.) We cannot remove the result-set caching, as the performance of the application would become very slow.

As we are in support mode, we cannot afford to make any major design/code change to the application.

I would like to know what pattern is followed in the industry to handle large result sets.

I happened to speak (for a few minutes) with an expert J2EE architect on the fly, and he suggested that the pattern generally used for handling large result sets is to go for an OR-mapping tool like Hibernate.

I would like to know if that is the right kind of solution for this problem. As we are using an application server cluster, will tools like Hibernate support a clustered environment?

Is an OR-mapping tool the right solution for handling large result sets?

Can OR mapping be used as an alternative to result-set caching?

Is it advisable to refactor our design using OR-mapping tools? I understand that OR mapping will have some performance overhead.

Will OR mapping help in resolving the allocation failures in the JVM?

Can Hibernate execute in a separate JVM, and can it support a cluster? (Most of the data is read-only.)

Thanks in advance.
 
Bartender
Posts: 10336
If your JVM is crashing, and it is because of memory issues, swapping to an ORM solution is unlikely to help. You would just be swapping from a cache implemented one way to a cache implemented another. In fact, I'd disagree with your architect friend - large result sets in an ORM solution will probably consume more resources than using straight JDBC.

Hibernate can be used in a cluster, presuming you use a clusterable second level cache implementation (which is outside the scope of Hibernate).

However, are you sure memory is the issue? Are you actually seeing OutOfMemoryErrors? Does your CPU activity spike before the outage?
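For reference, enabling the second-level cache looks roughly like this. This is a minimal sketch assuming Hibernate 3 with EHCache as the provider; the class and property values are illustrative, and for a cluster you would need a replicated EHCache setup or a genuinely clusterable provider instead:

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class HibernateCacheSetup {
        public static SessionFactory build() {
            Configuration cfg = new Configuration()
                // Turn on the second-level cache; entities still need a
                // cache mapping (e.g. <cache usage="read-only"/>) to use it.
                .setProperty("hibernate.cache.use_second_level_cache", "true")
                // EHCache here is an assumption; swap in a clusterable
                // provider for a multi-node deployment.
                .setProperty("hibernate.cache.provider_class",
                             "org.hibernate.cache.EhCacheProvider")
                .configure(); // reads hibernate.cfg.xml for the rest
            return cfg.buildSessionFactory();
        }
    }

Since most of your data is read-only, a read-only cache usage strategy would fit well and avoids invalidation traffic across the cluster.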
 
Ranch Hand
Posts: 231
Hi Siva,
I believe the result-set cache is stored in memory, which is why you are seeing frequent outages. Why not try using OSCache to cache the result sets on disk rather than in memory? OSCache is an open-source API for caching arbitrary objects. We have done this in one of our applications and the performance seems to be good.

Give it a try.
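For what it's worth, the usual OSCache idiom looks something like the sketch below. The key, refresh period, and loadFromDatabase helper are hypothetical, and disk persistence is switched on in oscache.properties (via the DiskPersistenceListener plugin and cache.path settings) rather than in code:

    import com.opensymphony.oscache.base.NeedsRefreshException;
    import com.opensymphony.oscache.general.GeneralCacheAdministrator;

    public class ResultCache {
        private final GeneralCacheAdministrator admin = new GeneralCacheAdministrator();

        public Object get(String key) {
            try {
                // Return the cached copy if it is younger than one hour.
                return admin.getFromCache(key, 3600);
            } catch (NeedsRefreshException nre) {
                try {
                    Object fresh = loadFromDatabase(key); // hypothetical loader
                    admin.putInCache(key, fresh);
                    return fresh;
                } catch (Exception e) {
                    // Important: release the update lock so other threads
                    // waiting on this key don't block forever.
                    admin.cancelUpdate(key);
                    throw new RuntimeException(e);
                }
            }
        }

        private Object loadFromDatabase(String key) {
            // run the query and build the cached structure here
            return null;
        }
    }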
 
siva kumar
Ranch Hand
Posts: 120
Hi Siva,

Thanks for your inputs. I will look into OSCache; that info was pretty helpful.

Hi Paul,

We are not facing an out-of-memory issue; we are actually facing a memory allocation failure.

Although the JVM heap has 500 MB of free memory, an allocation failure occurs when a contiguous run of free memory (the heap is divided into a number of segments) is not available. This happens because the object size is very large, so the fragmented free space cannot satisfy the allocation.
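That symptom is classic heap fragmentation: plenty of total free space, but no single free segment large enough for a ~300 MB object. One common mitigation, sketched here under the assumption that the cached value is one huge in-memory structure, is to break it into fixed-size pages so each allocation is small enough to fit into the fragmented free space. The PagedResults class and PAGE_SIZE value are illustrative only:

    import java.util.ArrayList;
    import java.util.List;

    public class PagedResults<T> {
        private static final int PAGE_SIZE = 10000; // rows per page; tune to taste
        private final List<List<T>> pages = new ArrayList<List<T>>();

        public void add(T row) {
            List<T> last = pages.isEmpty() ? null : pages.get(pages.size() - 1);
            if (last == null || last.size() == PAGE_SIZE) {
                last = new ArrayList<T>(PAGE_SIZE); // start a new, small page
                pages.add(last);
            }
            last.add(row);
        }

        public List<T> page(int index) {
            return pages.get(index);
        }

        public int pageCount() {
            return pages.size();
        }
    }

Each page's backing array is a separate, much smaller contiguous allocation, so the JVM can satisfy it from scattered free segments instead of needing one giant block.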
 