stackoverflow cloning hashtable

 
Annekee Dufour
Ranch Hand
Posts: 41
My application uses two Hashtables. After running for a while, both contain about 30,000 entries. Both have Strings for keys and CacheItems (containing 3 longs and 1 short) for values.
Every 10 minutes I make a backup of both hashtables using 'clone'. After running for a couple of hours, cloning the second hashtable fails with a StackOverflowError. Stack trace:
java.lang.StackOverflowError
at java.util.Hashtable$Entry.clone(Hashtable.java:860)
at java.util.Hashtable$Entry.clone(Hashtable.java:860)
at java.util.Hashtable$Entry.clone(Hashtable.java:860)
etc...
I use JDK 1.4.1. I know stack overflows are usually caused by runaway recursion, but I cannot find any (though I didn't check the Hashtable clone method). Would it help to increase the stack size when starting Java (and how do I do that)?
annekee
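On the stack-size question, in general terms (nothing here is specific to the poster's app): with Sun's JVM the per-thread stack size can be set VM-wide via the -Xss flag, e.g. `java -Xss1m YourMainClass`, and since JDK 1.4 a single thread can request its own stack size through the `Thread(ThreadGroup, Runnable, String, long stackSize)` constructor. A minimal sketch; the class name and numbers are made up for illustration, and the VM may treat the size only as a hint:

```java
// Sketch: requesting a bigger stack for one thread via the JDK 1.4
// Thread(ThreadGroup, Runnable, String, long stackSize) constructor.
public class BigStackDemo {
    // Deliberately deep recursion to exercise the stack.
    static int depth(int n) {
        return n == 0 ? 0 : 1 + depth(n - 1);
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] result = new int[1];
        // Request a 16 MB stack for this one thread.
        Thread t = new Thread(null, new Runnable() {
            public void run() {
                result[0] = depth(50000);
            }
        }, "big-stack", 16 * 1024 * 1024);
        t.start();
        t.join();
        System.out.println(result[0]); // 50000 if the recursion completed
    }
}
```

That said, a bigger stack only papers over whatever is recursing; it is worth finding out why clone() recurses so deeply first.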
 
Annekee Dufour
Ranch Hand
Posts: 41
This might be relevant: I do my cloning in two TimerTasks attached to one and the same Timer, which means it all happens in one thread. The hashtable that causes the stack overflow is slightly smaller than the one that doesn't.
 
author and iconoclast
Posts: 24207
Is it possible that one of the Hashtables, directly or indirectly, holds itself as a value?
 
Annekee Dufour
Ranch Hand
Posts: 41
That's the first thing I checked. No, only Strings for keys and CacheItems for values.
BTW, my colleague just told me that we use a thread stack size of 256 KB.
 
Annekee Dufour
Ranch Hand
Posts: 41
I explicitly added the hashtable to itself for testing. Before I got to the clone, I got a ClassCastException somewhere else, so there was no StackOverflowError.
 
Annekee Dufour
Ranch Hand
Posts: 41
Does anyone out there have a clue why I keep getting this StackOverflowError? It happens in Hashtable.clone(), and I am positive that the hashtable does not contain any other hashtables. Problem is, I could not replicate the problem in my test environment.
Could it maybe have something to do with the keys I am using (the keys are very similar, except for the social security number in them)?
Would a HashMap help?
Should I increase thread stack size?
Should I switch to another java version?
 
Greenhorn
Posts: 25
You mentioned "CacheItems for values (containing 3 longs and 1 short)."
Is CacheItem a custom class you built? What does it look like, and have you provided your own implementations of equals/hashCode, etc.?
 
John Ipe
Greenhorn
Posts: 25
Another thought: Could it have anything to do with the fact that Hashtable's clone gives you a shallow copy? What do you do with the cloned hashtables?
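To illustrate the shallow-copy point: clone() gives the backup its own bucket array, but both tables keep references to the very same value objects. A sketch, with a made-up CacheItem standing in for the poster's class:

```java
import java.util.Hashtable;

// Hypothetical stand-in for the poster's value class.
class CacheItem {
    long a, b, c;
    short flag;
}

public class ShallowCloneDemo {
    public static void main(String[] args) {
        Hashtable table = new Hashtable();
        CacheItem item = new CacheItem();
        item.a = 1;
        table.put("123-45-6789", item);

        Hashtable backup = (Hashtable) table.clone();

        // The clone has its own table: adding to it leaves the original alone.
        backup.put("other-key", new CacheItem());
        System.out.println(table.size());    // still 1

        // But the two tables share the same CacheItem instances:
        ((CacheItem) backup.get("123-45-6789")).a = 99;
        System.out.println(item.a);          // 99 -- visible through both tables
    }
}
```

So if maintenance mutates the CacheItems held by the clone, the live table sees those changes too; only put/remove operations on the clone are isolated.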
 
David Weitzman
Ranch Hand
Posts: 1365
The culprit is probably this section of the Hashtable implementation:
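(The quoted snippet did not survive in this archive. From memory of the 1.4-era source, Hashtable$Entry.clone copies each bucket's collision chain by recursing along the `next` links, roughly `new Entry(hash, key, value, next == null ? null : (Entry) next.clone())`. Below is a simplified, self-contained reproduction of that pattern, not the actual JDK source:)

```java
// Simplified reproduction of how a recursive Entry.clone blows the stack
// on a long collision chain: one stack frame per chained entry, which is
// exactly the repeating frame in the stack trace above.
public class EntryCloneDemo {
    static class Entry {
        Object key, value;
        Entry next;

        Entry(Object key, Object value, Entry next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }

        protected Object clone() {
            // Recurses once per entry in the chain.
            return new Entry(key, value,
                             next == null ? null : (Entry) next.clone());
        }
    }

    public static void main(String[] args) {
        // Build one very long chain, as if every key landed in the same slot.
        Entry head = null;
        for (int i = 0; i < 1000000; i++) {
            head = new Entry("key", "value", head);
        }
        try {
            head.clone();
            System.out.println("cloned fine");
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError, as in the original post");
        }
    }
}
```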

It will use recursion to make a copy of the list of elements with the same hash code. If that's the source of the problem then you must have an impressively large number of hash collisions. Any hashtable with so many collisions would have to have absolutely terrible performance -- does "get()" tend to return quickly on this hashtable?
Basically the moral of the story is what everyone has been saying for years: Don't use Hashtable and Vector, use HashMap and ArrayList. HashMap uses techniques which decrease the chances of ending up with a hash function and hashtable size that cause a lot of collisions. It also has a clone() method that should avoid any stack overflows.
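A sketch of the suggested switch (the sizes and key format are made up): HashMap's copying is iterative, so chain length cannot overflow the stack. Note that Hashtable's built-in synchronization goes away, so the brief lock during the snapshot becomes the caller's job:

```java
import java.util.HashMap;
import java.util.Map;

// Snapshot-for-backup with HashMap instead of Hashtable. The copy
// constructor iterates over the entries; no recursion is involved.
public class HashMapBackupDemo {
    public static void main(String[] args) {
        Map cache = new HashMap();
        for (int i = 0; i < 30000; i++) {       // roughly the poster's table size
            cache.put("key-" + i, "value-" + i);
        }

        Map backup;
        // Unlike Hashtable, HashMap is unsynchronized: every thread that
        // touches 'cache' must synchronize on the same monitor for this
        // snapshot to be safe.
        synchronized (cache) {
            backup = new HashMap(cache);        // shallow snapshot, like clone()
        }
        System.out.println(backup.size());      // 30000
    }
}
```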
 
Annekee Dufour
Ranch Hand
Posts: 41
Thanks! I already created a HashMap version, but I had trouble convincing people here that that was a good idea.
How is the hash computed? Is it simply the hashCode of the String (my keys are all Strings), or does the capacity of the hashtable also play a role? My hashtable is created with an initial capacity of 80,000.
I sampled 1,000 keys and computed their hashCodes. The values were all negative, but all different.
In answer to John: I have not implemented equals or hashcode in my CacheItem class, since CacheItems are values, not keys. I use cloning when I want to select the items that need to be removed during maintenance, and when I want to write the table to a file. I don't want to lock the original hashtable for too long.
 
David Weitzman
Ranch Hand
Posts: 1365
The way the Hashtable implementation maps a hashCode() onto a slot is simply to compute the hash code modulo the table capacity. Many computer science texts recommend using a prime capacity, since that minimizes the chance of data elements having factors in common with the modulus, which can lead to patterns. For example, with a modulus of 80000, no even hash code can map onto an odd slot, cutting half of the slots off from half of the data. That's not really so bad if your data is uniformly distributed, since the odd slots are still there for the odd hash codes, but real data rarely is uniform. Other odd patterns can pop up because of common factors, so people generally use primes, and then factor patterns aren't a problem. 80021 is a prime in that numeric range. For HashMap, though, don't bother initializing to a prime: it will just ignore you anyway (internally it rounds the capacity up to a power of two).
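The even-capacity effect is easy to check numerically. Assuming the hash-modulo-capacity scheme described above, feeding only even hash codes into a capacity of 80000 reaches just the even slots, while the nearby prime 80021 reaches every slot:

```java
import java.util.HashSet;
import java.util.Set;

// Counts how many distinct slots even hash codes can reach under
// "slot = hash % capacity" for an even capacity vs. a nearby prime.
public class ModuloSlotDemo {
    public static void main(String[] args) {
        int evenCapacity = 80000;
        int primeCapacity = 80021;

        Set<Integer> evenSlots = new HashSet<Integer>();
        Set<Integer> primeSlots = new HashSet<Integer>();
        for (int h = 0; h < 200000; h += 2) {   // even hash codes only
            evenSlots.add(h % evenCapacity);
            primeSlots.add(h % primeCapacity);
        }
        System.out.println(evenSlots.size());   // 40000 -- only the even slots
        System.out.println(primeSlots.size());  // 80021 -- every slot reachable
    }
}
```

Real hash codes can also be negative; the 1.4-era Hashtable masks off the sign bit before taking the modulo, so this sketch sticks to non-negative values for simplicity.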
 
Annekee Dufour
Ranch Hand
Posts: 41
It (HashMap) works! I still think it's strange, though: in my sample of 1,000 keys, only 6 hash values occurred twice, and none occurred three times.
Anyway: No more Hashtables for me. Thanks for all your patience & help.
 