Here's my situation - I am loading data from an RDBMS and putting it into a HashMap for quick access in later processing. I get a ResultSet, iterate through it, and populate my hash.
I want to init my HashMap so it is efficient... about the right size to avoid rehashing, but not bigger than I need. But, until I get through the ResultSet, I don't know how much stuff I am dealing with... 50 records, 100, 1000...
When you are iterating through the ResultSet and putting entries into the HashMap as you encounter each new row, you should have no problems with the efficiency of HashMap. As far as my knowledge goes, HashMap's size is adjusted dynamically.
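To illustrate that point: the no-arg HashMap constructor starts with a default capacity of 16 and a load factor of 0.75, and the map doubles its internal table automatically whenever the threshold is crossed. A minimal sketch (using a plain loop to stand in for the ResultSet rows):

```java
import java.util.HashMap;
import java.util.Map;

public class DynamicSizingDemo {
    public static void main(String[] args) {
        // Default constructor: capacity 16, load factor 0.75.
        // The map grows transparently as entries are added.
        Map<Integer, String> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(i, "row-" + i); // stands in for one ResultSet row
        }
        System.out.println(map.size()); // prints 1000 -- resizing was transparent
    }
}
```

The rehashes along the way cost something, but for a few hundred or a few thousand entries it is rarely measurable.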
That doesn't sound right. If the poster is thinking of scrolling to the bottom of the ResultSet to count the rows and then back to the top, that will probably take more time than using a forward-only result set and populating the Map in the simplest possible way.
To the original poster: don't sweat the efficiency until you discover it's a problem. How many rows are we talking about anyway? If this really is a concern (which I doubt), I would execute an efficient query written to *estimate* the size of your proper query. Then use that estimate to set the initial capacity of the Map.
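A sketch of that idea, assuming the estimate comes from a hypothetical `SELECT COUNT(*)` query run before the main one. To guarantee no rehash, the initial capacity has to be the expected entry count divided by the load factor (0.75 by default), since HashMap resizes when size exceeds capacity * loadFactor:

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMapDemo {
    // Convert an estimated entry count into an initial capacity that
    // will not trigger a rehash, given HashMap's default 0.75 load factor.
    static int capacityFor(int expectedEntries) {
        return (int) Math.ceil(expectedEntries / 0.75);
    }

    public static void main(String[] args) {
        // In real code this would come from something like
        // "SELECT COUNT(*) FROM my_table" (hypothetical table name)
        // executed before the main query.
        int estimatedRows = 1000;
        Map<Integer, String> map = new HashMap<>(capacityFor(estimatedRows));
        System.out.println(capacityFor(estimatedRows)); // prints 1334
    }
}
```

Note that passing the raw row count as the capacity is not enough - a map sized `new HashMap<>(1000)` still rehashes once it passes 750 entries.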