An XML file needs to be parsed and its data stored in a database. The existing implementation inserts the XML file into a database table, which fires a trigger on that table; the trigger reads the XML content and stores the data in the appropriate tables.
My idea is to avoid inserting the XML file into the database by using JAXB instead: I will unmarshal the XML file and store the resulting objects in the database through DAOs.
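A minimal sketch of the JAXB-unmarshal-then-DAO idea, assuming Java 8 (where JAXB ships with the JDK; on Java 11+ the `jaxb-api` dependency must be added). The `Order` element and its fields are hypothetical, as is the commented-out DAO call; the original post does not describe the XML schema:

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringReader;

// Hypothetical mapped class; the real XML structure is not given in the post.
@XmlRootElement(name = "order")
@XmlAccessorType(XmlAccessType.FIELD)
class Order {
    String id;
    String customer;
}

public class JaxbDemo {
    public static Order parse(String xml) throws Exception {
        Unmarshaller u = JAXBContext.newInstance(Order.class).createUnmarshaller();
        return (Order) u.unmarshal(new StringReader(xml));
    }

    public static void main(String[] args) throws Exception {
        Order order = parse("<order><id>42</id><customer>Acme</customer></order>");
        // In the real flow a DAO would persist the object here,
        // e.g. orderDao.save(order) -- orderDao is a hypothetical name.
        System.out.println(order.id + " " + order.customer);
    }
}
```

In a real application the unmarshalling would read from a `File` or stream rather than a `String`, and the DAO call would run inside a transaction.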
But I need to prove which approach is better, so I want to measure the CPU % usage and elapsed time in both cases. Can anybody suggest how to measure CPU % and elapsed time for both?
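One way to get both numbers from inside the JVM is `System.nanoTime()` for elapsed (wall-clock) time plus `ThreadMXBean` for CPU time; CPU % is then CPU time divided by elapsed time. This is a minimal sketch, assuming a single-threaded workload; the loop in `main` is only a placeholder, to be replaced with the JAXB+DAO path or the insert-and-trigger path being compared:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class Benchmark {
    /** Runs the task once and returns {elapsedNanos, cpuNanos} for this thread. */
    public static long[] measure(Runnable task) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long wallStart = System.nanoTime();
        long cpuStart = threads.getCurrentThreadCpuTime(); // thread CPU time in ns
        task.run();
        long wall = System.nanoTime() - wallStart;
        long cpu = threads.getCurrentThreadCpuTime() - cpuStart;
        return new long[] { wall, cpu };
    }

    public static void main(String[] args) {
        // Placeholder workload: substitute either approach's entry point here.
        long[] r = measure(() -> {
            double sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += Math.sqrt(i);
        });
        System.out.printf("elapsed: %d ms, cpu: %d ms, cpu%%: %.1f%n",
                r[0] / 1_000_000, r[1] / 1_000_000, 100.0 * r[1] / r[0]);
    }
}
```

For the trigger-based approach the database does most of the work, so JVM-side CPU numbers only tell half the story; the database server's CPU would have to be watched separately (for example with `top` or the database's own monitoring views).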
You are talking about two approaches here:
1. Making the persistence layer do the parsing
2. Making the business layer do the parsing.
There are pros and cons with each approach.
We are doing approach 1.
Our XML is not very big, and we update it quite often.
We chose approach 1 because it produces less network traffic.
In our case ...
we needed to store the XML anyway, whether we went with approach 1 or approach 2.
If we had used approach 2, we would have had to send both the XML and the parsed data, which would have doubled the network usage.
Another consideration was CPU usage on the application servers: with approach 2 it could be higher than with approach 1. But our servers are clustered and we already parse the XML for business needs, so approach 2 would not have added any CPU usage.
Now we are feeling we should have gone with approach 2, because...
We don't have staff to work on stored procedures, so any change needed to the stored procedures or triggers is a big bottleneck for us.
subject: need some suggestions on CPU % usage and elapsed time