How to prevent OutOfMemoryErrors when performing XSLT transformations

 
Ranch Hand
Posts: 384
Hello,

We frequently get OutOfMemoryErrors when our XSLT transformations run, depending on the size of our source XML documents.

Here is the code that causes the error:


and the imports:


We cannot increase memory on our servers, nor can we reduce the size of our source documents. Is there a way to prevent these errors from occurring, or to reduce the memory footprint, even if it means the transformation takes longer?
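The original code block was not preserved in the thread, but a minimal sketch of the usual JAXP pattern such code follows might look like this (the class name, stylesheet, and input document are invented for illustration; the identity stylesheet simply copies the input so the example is self-contained):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class SimpleTransform {

    // Runs a stylesheet over an XML document, both given as strings.
    static String transform(String xslt, String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        // Even though a StreamSource is passed here, XSLT processors build an
        // in-memory tree of the whole input document before applying the
        // templates -- for a very large source document that tree is what
        // exhausts the heap.
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Identity stylesheet: copies the input unchanged.
        String xslt =
              "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:template match='/'><xsl:copy-of select='.'/></xsl:template>"
            + "</xsl:stylesheet>";
        String xml = "<root><item>42</item></root>";
        System.out.println(transform(xslt, xml));
    }
}
```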

Any clue welcome,

Thanks,

Julien.
 
Julien Martin
Ranch Hand
Posts: 384
I found something interesting that I am going to quote:

I fortunately discovered them (weak references) long ago thanks to an article at java.com (when it was still called that), and I've since used soft references on a few occasions, like creating smart caches of pre-compiled objects (XSLT sheets in my case) that can be garbage collected if a sudden peak in memory usage occurs.

You simply re-create them the next time you need them, and if the memory usage has gone down, you go back to normal.
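A soft-reference cache of compiled stylesheets along those lines might be sketched like this (the class name and cache key are invented for illustration; `Templates` is the thread-safe compiled form of a stylesheet in JAXP):

```java
import java.io.StringReader;
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;

public class StylesheetCache {

    private static final Map<String, SoftReference<Templates>> CACHE =
            new ConcurrentHashMap<>();

    // Returns the compiled stylesheet for this key, recompiling it if the
    // garbage collector cleared the soft reference under memory pressure.
    static Templates get(String key, String xsltSource) throws Exception {
        SoftReference<Templates> ref = CACHE.get(key);
        Templates t = (ref == null) ? null : ref.get();
        if (t == null) {
            t = TransformerFactory.newInstance()
                    .newTemplates(new StreamSource(new StringReader(xsltSource)));
            CACHE.put(key, new SoftReference<>(t));
        }
        return t;
    }

    public static void main(String[] args) throws Exception {
        String xslt =
              "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:template match='/'><xsl:copy-of select='.'/></xsl:template>"
            + "</xsl:stylesheet>";
        // While memory is plentiful, the same compiled object comes back.
        System.out.println(get("identity", xslt) == get("identity", xslt));
    }
}
```

Note that this only saves the memory and time spent on compiled stylesheets; it does not address the size of the input document itself.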

 
Author and all-around good cowpoke
Posts: 13078

We cannot increase memory on our servers nor can we reduce the size of our source documents. Is there a way to prevent this kind of errors from occurring or reduce memory footprint? (even though if it means having the transformation taking longer...)



XSLT is not particularly memory efficient. You may have to perform the transformation in two or more steps. In one example I had to combine a Java program that performed some selection and re-ordering of elements to create an intermediate XML file which was then run through a simpler XSLT to create the final result.

Bill
 
Ranch Hand
Posts: 2308
I was under the impression that working with streams cannot easily lead to an out of memory error, because the program using the stream processes blocks/chunks of bytes rather than holding all of the bytes at once.

If that assumption is true, then you are not getting this error because of the large input XML size; it might instead be an issue with the transformer you are using for the transformation.
Try replacing the current transformer with some other transformer.
 
Marshal
Posts: 28177

Originally posted by Rahul Bhattacharjee:
Try replacing the the current transformer with some other transformer.

When you are using XSLT, it's almost always necessary to store the entire input XML in memory. That's because your transformation is unlikely to access the input XML sequentially and it's very difficult for a program to identify transformations that do. So you'll find that transformers always store the input XML in memory.

So, if your input XML is too large to fit into memory and you can't get more memory, then you can't use XSLT on it and you will have to do what Bill Brogden suggested.
 
Rahul Bhattacharjee
Ranch Hand
Posts: 2308
Thanks for the explanation, Paul Clapham. I understand that it would be quite difficult to transform by traversing the input XML sequentially, so an in-memory model is better suited for this.

But I am failing to understand how a multi-pass transformation would help in this case. Any pointer would be greatly appreciated.

Thanks in advance.
 
Paul Clapham
Marshal
Posts: 28177
In the first pass you would use a streaming method (such as SAX) to read your bloated XML file and (as Bill said) produce a smaller XML file to work with.
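A first pass along those lines could be sketched with SAX as below (the element name `keep` and the wrapper `trimmed` are invented for illustration; a real filter would also copy attributes and escape character data). Because SAX reports events as it reads, only the elements being kept are ever held in memory:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class FirstPassFilter extends DefaultHandler {

    private final StringWriter out = new StringWriter();
    private int depth = 0; // > 0 while inside an element we want to keep

    @Override public void startDocument() { out.write("<trimmed>"); }
    @Override public void endDocument()   { out.write("</trimmed>"); }

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        // Start copying when a <keep> element begins, or while inside one.
        if (depth > 0 || "keep".equals(qName)) {
            depth++;
            out.write("<" + qName + ">");
        }
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        if (depth > 0) {
            out.write("</" + qName + ">");
            depth--;
        }
    }

    @Override
    public void characters(char[] ch, int start, int len) {
        if (depth > 0) out.write(new String(ch, start, len));
    }

    // Streams the input through the filter and returns the trimmed document,
    // which can then be fed to a simpler XSLT in the second pass.
    public static String filter(String xml) throws Exception {
        FirstPassFilter h = new FirstPassFilter();
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new InputSource(new StringReader(xml)), h);
        return h.out.toString();
    }

    public static void main(String[] args) throws Exception {
        // <noise> stands in for the bulk of the bloated document.
        String big = "<root><noise>lots of data</noise><keep>a</keep><keep>b</keep></root>";
        System.out.println(filter(big));
        // -> <trimmed><keep>a</keep><keep>b</keep></trimmed>
    }
}
```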
 
Rahul Bhattacharjee
Ranch Hand
Posts: 2308

Originally posted by Paul Clapham:
In the first pass you would use a streaming method (such as SAX) to read your bloated XML file and (as Bill said) produce a smaller XML file to work with.



Thanks, Paul. I got it now. ;)
 