a). I chose a Hashtable to store millions of quantity values from the DB. The key is a String and the value is a float. I chose Hashtable over HashMap so that null keys and values are not stored, accepting slower retrieval in exchange. I have not set an initial capacity or load factor for the Hashtable, so the table doubles in size as it grows. Is this the right approach for good performance?
b). If I do not implement equals() and hashCode() for the Hashtable keys, might I get wrong values for a particular key?
I chose Hashtable over HashMap so that null keys and values are not stored.
This doesn't sound like a good reason to prefer Hashtable over HashMap. If the map is not supposed to contain null keys or values, that's a rule the application code should enforce (and handle). Note also that Hashtable synchronizes every operation, which is the real reason it is slower than HashMap.
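One way to enforce that rule in application code is a thin wrapper around a plain HashMap that rejects nulls at the boundary. This is only a sketch; the class and method names here (`QuantityStore`, `put`, `get`) are illustrative, not anything from the original post:

```java
import java.util.HashMap;
import java.util.Map;

public class QuantityStore {
    private final Map<String, Float> quantities = new HashMap<>();

    // Reject nulls at the application boundary instead of
    // relying on Hashtable to throw for you.
    public void put(String key, Float value) {
        if (key == null || value == null) {
            throw new IllegalArgumentException("null keys/values are not allowed");
        }
        quantities.put(key, value);
    }

    public Float get(String key) {
        return quantities.get(key);
    }
}
```

This keeps the faster, unsynchronized HashMap underneath while making the no-nulls rule explicit and giving you a clear error message instead of a bare NullPointerException.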
If I do not implement equals() and hashCode() for the Hashtable keys, might I get wrong values for a particular key?
I'm confused. You said the keys are String objects, and String already implements both of these methods correctly, so where do you see a potential problem?
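To make the point concrete, here is a small demonstration (my own example, not from the post): because String implements equals() and hashCode(), a lookup succeeds even with a distinct String object that merely has the same characters as the key used for insertion.

```java
import java.util.Hashtable;
import java.util.Map;

public class StringKeyDemo {
    public static void main(String[] args) {
        Map<String, Float> table = new Hashtable<>();
        table.put("part-42", 3.5f);

        // A different String object with the same characters is equal()
        // and has the same hashCode(), so the lookup finds the value.
        String sameKey = new String("part-42");
        System.out.println(table.get(sameKey)); // prints 3.5
    }
}
```

You would only need to worry about equals()/hashCode() if you used instances of your own class as keys.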
Relying on Hashtable's automatic resizing will cause a real performance hit as the table grows, because every resize rehashes all existing entries. I would certainly try to set an initial capacity that is sufficient up front. I also don't think you are going to fit "millions" of String and Float objects in any reasonable amount of memory.
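Setting a sufficient initial size might look like this. A sketch under my own assumptions: a map resizes once its entry count exceeds capacity × load factor, so to avoid any resize the capacity needs to exceed the expected entry count divided by the load factor (the helper name `withExpectedSize` is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMap {
    // Choose an initial capacity large enough that the map never
    // resizes: capacity must exceed expectedEntries / loadFactor.
    static <K, V> Map<K, V> withExpectedSize(int expectedEntries) {
        float loadFactor = 0.75f; // the default load factor
        int capacity = (int) (expectedEntries / loadFactor) + 1;
        return new HashMap<>(capacity, loadFactor);
    }

    public static void main(String[] args) {
        // Pre-size for the expected number of quantity values.
        Map<String, Float> quantities = withExpectedSize(1_000_000);
        quantities.put("sku-1", 1.0f);
    }
}
```

The same two-argument constructor exists on Hashtable, so the identical calculation applies if you stay with that class.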
subject: HashTable - performance issue for millions of values stored