It is not clear whether the `protein` table itself has 4 million rows or the related tables do.
You should add filtering to your query. If you are only interested in a subset of the records, use JPQL to select just that subset.
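For example, a JPQL query that restricts the result set and caps its size might look like the following sketch (the `Protein` entity and its `species` property are hypothetical names, and `session` is assumed to be an open Hibernate Session):

```
// Sketch: fetch only a filtered subset instead of the whole table.
// `Protein`, `species`, and the parameter value are invented for illustration.
List<Protein> results = session
        .createQuery("from Protein p where p.species = :species")
        .setParameter("species", "human")
        .setMaxResults(1000)   // additionally cap the number of rows returned
        .list();
```

`setMaxResults` translates to a database-level limit, so only the capped subset is ever materialized as entities.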
You could also try increasing the maximum heap size of your JVM.
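Raising the maximum heap is just a JVM startup flag, e.g. (the jar name is a placeholder):

```
java -Xmx4g -jar myapp.jar
```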
posted 12 years ago
The table which I am querying directly has 4 million rows. I don't want to keep raising the max heap size, because I assume that one day the table may have 10 million rows or more. I would rather have a scalable solution which is lazy. Does Hibernate not provide a lazy iterator over a result set?
I need to iterate through all "n" records, regardless of how large n is, so filtering the query is not an option.
If you are going to be iterating through tons of records, I highly recommend that you evict objects as you finish reading them. This clears them out of the persistence context maps instead of keeping every record in memory.
Since the fetch mode isn't eager or subselect, lazy fetching means the records load as you step through them. So as you loop through the records, hitting the database as you go, evict each object from the session once you are finished with it.
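Put together, that pattern might look like the following sketch, which streams rows with Hibernate's `ScrollableResults` and evicts each entity after processing it. The `Protein` entity class and the `process` method are hypothetical, and `session` is assumed to be an open Hibernate Session:

```
// Sketch: stream a large result set without holding every entity in memory.
// `Protein` and `process(...)` are invented names for illustration.
ScrollableResults results = session
        .createQuery("from Protein")
        .setReadOnly(true)
        .scroll(ScrollMode.FORWARD_ONLY);

int count = 0;
while (results.next()) {
    Protein p = (Protein) results.get(0);
    process(p);           // your per-record work
    session.evict(p);     // drop the entity from the persistence context
    if (++count % 1000 == 0) {
        session.clear();  // periodically clear the whole first-level cache
    }
}
results.close();
```

`FORWARD_ONLY` scrolling lets the JDBC driver stream rows rather than buffer the full result, and evicting (or periodically clearing) keeps the session's first-level cache from growing with the table.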