Well... As I've looked around, I've seen a lot of people with the same problem: they're iterating over and modifying the same collection (some <? extends Collection>) in one loop. For them, using iterator.remove() solves it. I've also seen people suggesting other approaches in another Ranch topic, yet none of them solved my problem... The other way that seemed promising was to use ConcurrentSkipListSet, or some other concurrent collection, which I could not get to work either. So here it is:
I'm getting this exception in the recursive invocation of this method. Trying to get it to work with a different approach, I used the following piece of code:
And yet, I STILL get the same error, on the same lines. I truly don't know where I've gone wrong. Can you guys shine a light? :S
Your lines 8 to 10 can be written shorter and simpler like this:
You don't need the loop to add the elements of set to returnSet element by element.
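Since the original snippet isn't shown here, this is only a sketch of what "shorter and simpler" likely means; the names `set` and `returnSet` are taken from the post, everything else is assumed. Both the copy constructor and `addAll` replace an element-by-element loop:

```java
import java.util.HashSet;
import java.util.Set;

public class CopyExample {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        set.add("http://example.com/a");
        set.add("http://example.com/b");

        // Instead of looping over 'set' and adding element by element,
        // the copy constructor fills the new set in one call:
        Set<String> returnSet = new HashSet<>(set);

        // Or, if returnSet already exists, a single bulk operation does it:
        returnSet.addAll(set);

        System.out.println(returnSet.size());
    }
}
```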
It's hard to see in your code, but I'm sure that somewhere you're modifying a collection while you're looping over it. Maybe you're modifying the collection in a recursive call, which makes it difficult to spot at first sight.
I don't see, however, how you can get a ConcurrentModificationException on line 12. The loop (which goes to line 23) effectively does nothing. Is this your real code?
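To make the failure mode concrete, here is a minimal reproducer (not the poster's code): a for-each loop holds a live iterator over a HashSet, and any structural modification of that set while the loop runs makes the iterator fail fast on its next step:

```java
import java.util.ConcurrentModificationException;
import java.util.HashSet;
import java.util.Set;

public class CmeDemo {
    public static void main(String[] args) {
        Set<Integer> numbers = new HashSet<>();
        numbers.add(1);
        numbers.add(2);

        try {
            for (Integer n : numbers) {
                // Structurally modifying the set while the for-each
                // iterator is live; the iterator detects this on the
                // next call to next() and throws.
                numbers.add(n + 10);
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("ConcurrentModificationException caught");
        }
    }
}
```

The same thing happens if the modification is buried in a method called from the loop body, including a recursive call, which is why it can be hard to see.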
Well... As you've probably gathered from the method's name, it's a web crawler. I'm basically getting all the links from a website, then going through each of the URLs I caught and getting more URLs, until the iteration depth reaches 3. I didn't post that part of the code because it's really not relevant to the error... Oh, and tyvm for the tip on initializing the set. Didn't know that. If you want me to post the full method, here it is:
I would probably use a set and a queue for this. The set would contain all URLs that have been found (regardless of whether they were processed already), and the queue would contain unprocessed URLs. When you find a URL, you'll try to add it to the set, and if it was not already present, add it to the queue as well. Take URLs to process from the queue. When the queue is empty, stop.
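The set-plus-queue idea above can be sketched roughly like this. The `findLinks` method is a hypothetical stand-in for the real link extractor (which isn't shown in the thread); everything else is the pattern described: try to add each found URL to the set, and only enqueue it if it was not already present.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

public class CrawlSketch {
    // Hypothetical stand-in for the real link extractor.
    static List<String> findLinks(String url) {
        if (url.equals("http://site/a")) return List.of("http://site/b", "http://site/c");
        if (url.equals("http://site/b")) return List.of("http://site/a", "http://site/c");
        return List.of();
    }

    public static void main(String[] args) {
        Set<String> seen = new HashSet<>();      // every URL found so far
        Queue<String> todo = new ArrayDeque<>(); // URLs not yet processed

        String start = "http://site/a";
        seen.add(start);
        todo.add(start);

        while (!todo.isEmpty()) {
            String url = todo.remove();
            for (String link : findLinks(url)) {
                // Set.add() returns false if the URL was already present,
                // so each URL is queued at most once.
                if (seen.add(link)) {
                    todo.add(link);
                }
            }
        }
        System.out.println(seen.size());
    }
}
```

Note that the loop only ever iterates over the queue's head, never over a collection that is being modified, so no ConcurrentModificationException can occur. A depth limit (like the "until the iteration gets to 3" mentioned earlier) could be added by queuing URL/depth pairs instead of bare URLs.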
Well... I'm using this to catch news from BIG news websites, which will ALWAYS direct me to other huge websites, ranging from e-commerce sites to other news sites to forums. So basically it'd go on forever, while my focus is just getting the news. As soon as this starts working, I'll add a test that checks that the URLs caught contain some trace of the original query (like... with www.uol.com.br as a starting point, "uol" MUST appear somewhere in the URLs derived from it).
It works! I just used an extra set inside that iterator and it worked. Thank you so very much, Jasper!
It may not be the best solution, but until I make it faster, I'll stick with it. Here's the piece of code:
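The posted code isn't reproduced here, but "an extra set inside that iterator" presumably means the snapshot-copy pattern: iterate over a copy so that additions to the original set can't invalidate a live iterator. A minimal sketch of that idea (all names assumed):

```java
import java.util.HashSet;
import java.util.Set;

public class SnapshotDemo {
    public static void main(String[] args) {
        Set<String> urls = new HashSet<>();
        urls.add("http://site/a");
        urls.add("http://site/b");

        // Iterate over a snapshot copy; the live iterator belongs to
        // the copy, so modifying the original set is safe.
        for (String url : new HashSet<>(urls)) {
            urls.add(url + "/next"); // modifies urls, not the snapshot
        }

        System.out.println(urls.size()); // two originals plus two new URLs
    }
}
```

The trade-off is the extra copy on every pass, which is why it's slower than the set-plus-queue approach, but it does eliminate the ConcurrentModificationException.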