I have an object called rootNode which represents the base for a fairly large tree structure. Unfortunately, when I perform a simple insert like so:
it's taking nearly an hour to save. During that time I get a steady stream of select statements printing to my console; apparently Hibernate is running a separate select statement for every single child node on down the tree. The best I can figure, this is for some kind of dirty check, which is not at all necessary in my case.
I do wish to save the entire tree in one shot.
I'm very frustrated. Anyone have any ideas? I'm willing to try just about anything.
Well, your mapping will have an effect on how that runs. But first, I would use session.saveOrUpdate() instead of calling insert().
The second question is how deeply you need that object graph populated. You could build a separate instance, only as deep as you need it to be, to be inserted and/or updated. (Cascade options could also have an effect on what you are seeing.)
A lot of what you are seeing has to do with where this object was created. Was it within a session? Is it being managed completely by Hibernate at that point? How are those related objects mapped, and do they already have ids in them or not? If they do have ids already, and the call to insert is the first time Hibernate gets to manage the object, then Hibernate has no idea whether those child objects need to be inserted or updated, and therefore has to run a query against the database to check each one.
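For what it's worth, in a classic hbm.xml mapping the unsaved-value attribute on the id is what lets Hibernate make that insert-vs-update decision without hitting the database. A minimal sketch, assuming a Node class mapped to a NODE table (the names here are my guesses, not your actual mapping):

```xml
<class name="Node" table="NODE">
    <!-- unsaved-value tells Hibernate which id value means "never persisted",
         so saveOrUpdate() can decide insert vs. update without a select -->
    <id name="id" column="NODE_ID" unsaved-value="null">
        <generator class="native"/>
    </id>
    <!-- the self-referencing children collection would be mapped here -->
</class>
```

If the ids are assigned by your own code instead of a generator, no unsaved-value can distinguish new from existing objects, and a select per object is exactly what you'd expect to see.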
This is all conjecture at this point, because I would need to see all the config, mappings, and code that built up this object.
I gathered up all the nodes as a collection and inserted them in a loop, using a stateless session to avoid the 2nd level cache. Still not super-speedy, but it's much faster than what I was originally doing by running a save() on the root node.
flattenNodes() is a recursive method that converts my nodes' tree structure into a list:
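Roughly, the shape of it is this. A minimal sketch, where Node is a stand-in for my actual entity (not its real mapping), and the Hibernate portion is only commented out since it needs a live SessionFactory:

```java
import java.util.ArrayList;
import java.util.List;

public class NodeFlattener {

    // Stand-in for the mapped entity: just a node with children.
    static class Node {
        List<Node> children = new ArrayList<>();
        Node addChild(Node c) { children.add(c); return this; }
    }

    // flattenNodes(): recursively collects a node and all of its
    // descendants into a single list, parents before children, so
    // parents are inserted before the rows that reference them.
    static List<Node> flattenNodes(Node node) {
        List<Node> out = new ArrayList<>();
        out.add(node);
        for (Node child : node.children) {
            out.addAll(flattenNodes(child));
        }
        return out;
    }

    public static void main(String[] args) {
        Node root = new Node();
        Node a = new Node(), b = new Node();
        a.addChild(new Node());
        root.addChild(a).addChild(b);

        List<Node> all = flattenNodes(root);
        System.out.println(all.size()); // root, a, b, and one leaf

        // The batch insert then looks something like this:
        //
        // StatelessSession ss = sessionFactory.openStatelessSession();
        // Transaction tx = ss.beginTransaction();
        // for (Node n : all) {
        //     ss.insert(n);   // no dirty checking, no caches
        // }
        // tx.commit();
        // ss.close();
    }
}
```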
Now I'm trying to wipe all the data before running this. Big problems: the delete typically fails with a deadlock error during execution. I'm pretty sure the hierarchical structure (the node table has a foreign key to itself and a constraint against leaving orphans) is the cause of it.
Turns out that even doing a mass delete in DB2 directly causes the same problem, so I can't blame Hibernate for that. Someone suggested first running an update to clear all of the self-referencing foreign keys before the mass delete, to prevent the deadlock. I'm going to try that.
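If it works, the workaround would look roughly like this. The table and column names are guesses, and it assumes the parent column is nullable:

```sql
-- Clear the self-referencing links first, so the mass delete
-- no longer has to resolve parent/child ordering
UPDATE node SET parent_id = NULL;

-- With no rows pointing at each other, a single mass delete is safe
DELETE FROM node;
```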