Well, originally I was using a ConcurrentHashMap (actually, originally I was using a plain HashMap.. naturally that led to issues). The ConcurrentHashMap caused a significant performance drop.. I believe the main reason is that the code does lots and lots of puts, and those are blocking calls (reads on a ConcurrentHashMap are non-blocking, but puts acquire a lock).
So my new strategy is to use the ConcurrentHashMap as an index into an array (all reads into the map are non-blocking), where the array is either an int[], an AtomicIntegerArray, or an AtomicInteger[].
That I still need to figure out.
As of right now the array will never shrink (an object added to it is never removed, and an index always references the same object), and it has a max size of 500 (yeah.. atm). 500 AtomicIntegers don't scare me, even if 400 of them never actually get used. So I think I'm still leaning towards AtomicInteger[] at the moment.
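To make the idea concrete, here's a minimal sketch of the map-as-index pattern under my assumptions (class and method names are made up, and the 500 cap comes from the design above). The map resolves a key to a fixed slot once, and every update after that goes straight to the pre-allocated AtomicInteger in that slot:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class CounterIndex {
    private static final int MAX_SLOTS = 500;     // hard cap from the design; slots are never reclaimed
    private final AtomicInteger[] counters = new AtomicInteger[MAX_SLOTS];
    private final ConcurrentHashMap<String, Integer> index = new ConcurrentHashMap<>();
    private final AtomicInteger nextSlot = new AtomicInteger();

    CounterIndex() {
        // Pre-allocate everything up front, even slots that may never be used
        for (int i = 0; i < MAX_SLOTS; i++) counters[i] = new AtomicInteger();
    }

    int increment(String key) {
        Integer slot = index.get(key);            // non-blocking read; the common case after warm-up
        if (slot == null) {
            // Only the very first access of a key pays the (possibly locking) map-write cost
            slot = index.computeIfAbsent(key, k -> nextSlot.getAndIncrement());
        }
        return counters[slot].incrementAndGet();  // lock-free atomic update
    }
}
```

The point of the pattern is that the map is written to at most once per key for the lifetime of the structure; after that, the hot path is a lock-free `get` plus a CAS-based increment.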
AtomicInteger[]:
will ensure that no two threads read the same value and both write the same incremented result, assuming I use incrementAndGet() rather than a separate get()/set().
int[] on the other hand:
would allow two threads to read the same value and both increment to the same value, losing an update (not good, though not disastrous in my case).
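The lost-update difference between the two can be shown with a small demo (names made up). A plain ++ is a read, an add, and a write back, so two threads can read the same value and both store the same result; incrementAndGet() does the whole step as one atomic CAS:

```java
import java.util.concurrent.atomic.AtomicInteger;

class LostUpdateDemo {
    static int plain = 0;                              // racy counter
    static final AtomicInteger atomic = new AtomicInteger();

    static void run(int threads, int perThread) throws InterruptedException {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    plain++;                           // read-modify-write: updates can be lost
                    atomic.incrementAndGet();          // atomic: never loses an update
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
    }
}
```

After a run with several threads, `atomic.get()` is always exactly threads × perThread, while `plain` is usually less because of lost updates.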
But.. i++ is NOT thread safe in any JDK (not just back in the jdk 1.2 days), since it is a read, an add, and a write back rather than a single atomic operation. As for the bits actually getting corrupted: a plain int read or write is itself atomic per the JLS, so a thread will never see a half-written int. It's only non-volatile long and double that the spec allows to be split into two 32-bit writes across word boundaries, so a torn value is possible there. For an int[], the risk is lost updates, not corruption.
I think regardless I will start out with AtomicSomething and perf test it. If I regain the vast majority of the perf drop, then it's good enough, with the added advantage of being fully atomic.
Adding final to an array declaration doesn't make the elements of the array final (just the reference to the array itself).. so that wouldn't help in this case, I wouldn't think.
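That's easy to confirm with a couple of lines (class name made up): final pins the array reference, but the slots stay freely writable:

```java
class FinalArrayDemo {
    static final int[] values = new int[3];  // only the reference is final

    static int mutate() {
        values[0] = 42;          // perfectly legal: elements are still writable
        // values = new int[3];  // would NOT compile: cannot reassign a final field
        return values[0];
    }
}
```

So final on the field buys safe publication of the array reference itself, but nothing about the values inside it.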