How does a Set collection manage to remove duplicate objects? If every new object were checked with equals() against all objects already present, that would be a very lengthy process. Is it done that way, or some other way?
HashSet, for example, just uses an internal HashMap to manage its elements. That way it can guarantee that certain operations take only constant time, i.e. they are equally fast no matter how big the set grows. The API documentation of HashSet says the following:
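A quick way to see this de-duplication in action: HashSet.add returns false when the element is already present, so the duplicate is simply rejected. A minimal sketch (the class name HashSetDemo is just for illustration):

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetDemo {
    public static void main(String[] args) {
        Set<String> names = new HashSet<>();
        System.out.println(names.add("Alice")); // true: newly added
        System.out.println(names.add("Bob"));   // true: newly added
        System.out.println(names.add("Alice")); // false: duplicate rejected
        System.out.println(names.size());       // 2 — the duplicate was not stored
    }
}
```

Each add is a constant-time hash lookup on average, not a scan of the whole set.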
This class offers constant time performance for the basic operations (add, remove, contains and size), assuming the hash function disperses the elements properly among the buckets.
Basically, that's possible because HashMap uses the hashCode() method to sort its elements into buckets, which makes it unnecessary (depending on the quality of the produced hashes) to compare a new element against every existing element — only the few elements in the matching bucket need an equals() check.
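For your own classes this means equals() and hashCode() must be consistent: equal objects must produce equal hashes, so the set looks in the right bucket before calling equals(). A minimal sketch with a hypothetical Point class:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical value class illustrating the equals()/hashCode() contract.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // Equal points yield equal hashes, so duplicates land
        // in the same bucket and are caught by equals().
        return Objects.hash(x, y);
    }
}

public class DedupDemo {
    public static void main(String[] args) {
        Set<Point> points = new HashSet<>();
        points.add(new Point(1, 2));
        points.add(new Point(1, 2)); // hashes to the same bucket; equals() says duplicate
        System.out.println(points.size()); // prints 1
    }
}
```

If you override equals() but not hashCode(), the two Point objects would usually land in different buckets and the set would wrongly keep both.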