When we first started seeing XML parsers in Java there was a LOT of effort put into speeding them up. That was a long time ago. I think you can rely on the standard library parsers as being quite fast.
I suspect that processing time is going to be dominated by other decisions.
I would bet that a SAX or StAX parser with hand-optimized bean creation will be LOTS faster than the java.beans.XMLDecoder API (and use less memory too).
How complex is your target "bean" object?
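To make the "hand-optimized bean creation" idea concrete, here is a minimal StAX sketch. The `Trade` bean and its `symbol`/`price` elements are hypothetical stand-ins for the real 100-property class; with that many fields you'd likely drive the dispatch from a `switch` or a map of setters instead of an if/else chain.

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

// Hypothetical, minimal "trade" bean -- stand-in for the real 100-property class.
class Trade {
    String symbol;
    double price;
}

public class StaxTradeParser {

    // Walks the stream once and sets fields directly -- no reflection,
    // which is where the speed advantage over XMLDecoder comes from.
    static Trade parse(String xml) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        // Coalesce adjacent character events so each text value arrives whole.
        factory.setProperty(XMLInputFactory.IS_COALESCING, true);
        XMLStreamReader r = factory.createXMLStreamReader(new StringReader(xml));

        Trade t = new Trade();
        String current = null;
        while (r.hasNext()) {
            int event = r.next();
            if (event == XMLStreamConstants.START_ELEMENT) {
                current = r.getLocalName();
            } else if (event == XMLStreamConstants.CHARACTERS && !r.isWhiteSpace()) {
                if ("symbol".equals(current)) {
                    t.symbol = r.getText();
                } else if ("price".equals(current)) {
                    t.price = Double.parseDouble(r.getText());
                }
            } else if (event == XMLStreamConstants.END_ELEMENT) {
                current = null;
            }
        }
        r.close();
        return t;
    }

    public static void main(String[] args) throws Exception {
        Trade t = parse("<trade><symbol>IBM</symbol><price>101.5</price></trade>");
        System.out.println(t.symbol + " " + t.price);
    }
}
```

The parser pulls events forward-only and never builds a DOM, so memory stays flat no matter how large the document is.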
I have around 100 properties in my bean. I will get input from an ESB as XML. Before processing the trading XML we need to store it in a database, then load the data from the database and process the trade data.
When trying to optimize things -- let's say speeding them up, since you said "fastest" -- you should normally target the parts that take the most time.
It doesn't appear you have done your basic homework yet. Have you established that of this series of events (database updating, XML parsing, data processing), the XML parsing takes the most time? Or did you just pick on the XML parsing randomly?
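One crude way to do that homework is to time each stage separately before touching any of them. This sketch uses `System.nanoTime()` with hypothetical stand-in stage methods; substitute your real DB-store, parse, and processing calls (and note that for serious microbenchmarks you'd reach for a harness like JMH rather than wall-clock timing).

```java
// Crude stage-level timing -- the three stage methods are placeholders,
// not real work; replace them before drawing any conclusions.
public class StageTimer {

    // Runs one stage and returns its elapsed time in milliseconds.
    static long timeIt(Runnable stage) {
        long start = System.nanoTime();
        stage.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long dbMillis    = timeIt(StageTimer::storeInDb);
        long parseMillis = timeIt(StageTimer::parseXml);
        long procMillis  = timeIt(StageTimer::processTrades);
        System.out.printf("db=%dms parse=%dms process=%dms%n",
                dbMillis, parseMillis, procMillis);
    }

    // Stand-ins so the sketch compiles and runs; replace with real calls.
    static void storeInDb()     { sleep(5); }
    static void parseXml()      { sleep(5); }
    static void processTrades() { sleep(5); }

    static void sleep(int ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

If the printed numbers show the database stage dwarfing the parse stage, then speeding up the parser buys you almost nothing.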
++ to Paul's comment on basic homework. I get the impression that DB updating is the slowest thing by far in applications like this, but you really need to start measuring.
Do you really need the "bean" style interface Java object at all, or is that a requirement?
For fun browsing, google "premature optimization root of all evil"