# Regarding traversal time in Map

saurabh kuma

Greenhorn

Posts: 15

posted 2 years ago

Hi Friends,


I read somewhere:

Traversal of a map is through one of its collection-views. For an underlying LinkedHashMap, the traversal time is proportional to the size of the map, regardless of its capacity. However, for an underlying HashMap, it is proportional to the capacity of the map.

I didn't get this line. Please help.

Thanks

Saurabh


posted 2 years ago


Do you understand the difference between a collection's capacity and its size?

The capacity is (generally) the number of Objects a collection can hold, while its size is the number of Objects it does hold. What the quote says is that a LinkedHashMap's traversal speed depends on the number of Objects it actually holds, while a HashMap's depends more on the number of Objects it can hold.
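To see the difference concretely, here is a rough sketch (a hypothetical example, not a rigorous benchmark; timings will vary by JVM): give both map types a deliberately huge capacity but only a few entries, then time one traversal of each.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class TraversalDemo {

    // Walk every entry once and return the elapsed time in nanoseconds.
    static long timeTraversal(Map<Integer, Integer> map) {
        long start = System.nanoTime();
        long sum = 0;
        for (Map.Entry<Integer, Integer> e : map.entrySet()) {
            sum += e.getValue(); // touch each entry so the loop does real work
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("(checksum " + sum + ")"); // keep 'sum' observably used
        return elapsed;
    }

    public static void main(String[] args) {
        // Huge capacity (about a million buckets), tiny size (100 entries).
        Map<Integer, Integer> hash = new HashMap<>(1 << 20);
        Map<Integer, Integer> linked = new LinkedHashMap<>(1 << 20);
        for (int i = 0; i < 100; i++) {
            hash.put(i, i);
            linked.put(i, i);
        }
        // HashMap iteration has to scan all the (mostly empty) buckets;
        // LinkedHashMap iteration just follows its 100-entry linked list.
        System.out.println("HashMap:       " + timeTraversal(hash) + " ns");
        System.out.println("LinkedHashMap: " + timeTraversal(linked) + " ns");
    }
}
```

On a typical JVM the HashMap traversal should come out far slower here, which is exactly the capacity-versus-size effect the quote describes.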

Steve

Ivan Jozsef Balazs

Rancher

Posts: 972


posted 2 years ago

A vague guess:

A LinkedHashMap offers a way to wade through the elements, so the traversal time corresponds to the number of elements, that is, the size.

If you have to go through all slots, then the capacity comes into the picture.

Does it matter much?

Well, let us take a map and stuff it with elements so that both the size and the capacity go high.

Then let us delete most of them, so the size goes down - does the capacity remain high?

If the capacity gets adjusted, that is, it also goes down, then there is not much difference.

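As far as java.util.HashMap goes, the capacity is not adjusted downward: removing entries shrinks the size but leaves the backing table at its grown length. A hypothetical sketch to see the effect (again, not a rigorous benchmark):

```java
import java.util.HashMap;
import java.util.Map;

public class ShrinkDemo {

    // Walk every entry once and return the elapsed time in nanoseconds.
    static long timeTraversal(Map<Integer, Integer> map) {
        long start = System.nanoTime();
        long count = 0;
        for (Integer key : map.keySet()) {
            count++; // visit every entry
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("(visited " + count + " entries)");
        return elapsed;
    }

    public static void main(String[] args) {
        // Grow a HashMap to a million entries, then remove all but 100.
        Map<Integer, Integer> grown = new HashMap<>();
        for (int i = 0; i < 1_000_000; i++) grown.put(i, i);
        for (int i = 100; i < 1_000_000; i++) grown.remove(i);

        // A fresh HashMap with the same 100 entries and a small table.
        Map<Integer, Integer> fresh = new HashMap<>();
        for (int i = 0; i < 100; i++) fresh.put(i, i);

        // Both have size 100, but the first still has a huge backing table,
        // so its traversal should be much slower.
        System.out.println("grown then shrunk: " + timeTraversal(grown) + " ns");
        System.out.println("fresh:             " + timeTraversal(fresh) + " ns");
    }
}
```

So for HashMap the difference does matter: once the table has grown, traversal stays proportional to that grown capacity no matter how small the size becomes.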

saurabh kuma

Greenhorn

Posts: 15

posted 2 years ago


Hi Friends,

Thanks for the help

Steve, as you said:

The capacity is (generally) the number of Objects it can hold, while its size is the number of Objects it does hold. What the quote says is the LinkedHashMap's speed depends on the number of Objects it actually holds, while the HashMap is dependent more on the number of Objects it can hold.

I understood that in a LinkedHashMap the traversal time depends on the number of elements, but I have a doubt about HashMap. In a HashMap, if we have, say, 50 buckets (which is the capacity) and 20 elements linked to these buckets, will the traversal time be 50 + 20?

Thanks

Saurabh

posted 2 years ago


saurabh kuma wrote: In a HashMap, if we have, say, 50 buckets (which is the capacity) and 20 elements linked to these buckets, will the traversal time be 50 + 20?

You should first learn how HashMap stores its elements.

Then you will know that your mappings in a HashMap are spread across the backing array at unpredictable indexes, and some Entries even occupy the same index.

This code is from the HashIterator that the entrySet() iterator uses to return the next entry:
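Roughly, as it appears in the JDK 7 source (later JDK versions differ in the details):

```java
// java.util.HashMap.HashIterator.nextEntry() in JDK 7:
final Entry<K,V> nextEntry() {
    if (modCount != expectedModCount)
        throw new ConcurrentModificationException(); // fail fast on concurrent modification
    Entry<K,V> e = next;
    if (e == null)
        throw new NoSuchElementException();

    if ((next = e.next) == null) {
        // The current bucket's chain is exhausted: scan the backing
        // array ("table") forward for the next non-empty bucket.
        Entry[] t = table;
        while (index < t.length && (next = t[index++]) == null)
            ;
    }
    current = e;
    return e;
}
```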

First we check whether there is one more Entry in the same bucket, via (next = e.next) == null; if there is not, we move into the while loop. In that loop we iterate over the backing array (called table) until the current index reaches its length or a new Entry is found at index++. If a new Entry is found in a new bucket, the while loop breaks because of the condition (next = t[index++]) == null.

To know Java collections profoundly and use them confidently, please read my tutorials.

True person is moral, false is right!

posted 2 years ago


saurabh kuma wrote: In a HashMap, if we have, say, 50 buckets (which is the capacity) and 20 elements linked to these buckets, will the traversal time be 50 + 20?

In the worst case scenario, that is about right. To traverse the table you need to hit every bucket, so the traversal will take at least 50 steps. In the worst case all 20 Entries are in the same bucket, so after the bucket traversal touches the first one, you have 19 additional operations, and you get 50 + 19. In the best case scenario (well-dispersed hash codes) each of the 20 elements is in its own bucket, and the traversal takes 50 + 0 additional operations.
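If you want to reproduce the worst case, a hypothetical key class with a constant hashCode forces every entry into one bucket (BadKey here is made up purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {

    // Hypothetical key whose hashCode is constant, so every entry lands in one bucket.
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 0; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
        @Override public String toString() { return "BadKey(" + id + ")"; }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>(64); // 64 buckets (the capacity)
        for (int i = 0; i < 20; i++) {
            map.put(new BadKey(i), i); // size 20, all in one bucket
        }
        // Traversal scans all 64 slots to find the single occupied one, then
        // walks the 20 entries hanging off it: roughly capacity + (size - 1) steps.
        map.forEach((k, v) -> System.out.println(k + " -> " + v));
    }
}
```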

Steve