Piotr Nowakowski

Member since May 13, 2013

Recent posts by Piotr Nowakowski

Anayonkar, thanks for your answer.

And what if we make the problem more interesting and assume we have an infinite number of incoming lines (data)?
Let's say we have a BlockingQueue that is constantly being filled as the input, and we want to process the data while preserving the order mentioned in my first post.
One thread that processes the data and splits it into n BlockingQueues (consumers), according to some additional structure (e.g. a map), works, but maybe there is a more efficient solution.
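A minimal sketch of that single-dispatcher idea (class and method names are my own; the routing map and counter mirror the ones described in the question below):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// One dispatcher thread routes each line to a consumer queue chosen by the
// line's type, so all lines of one type keep their relative order.
public class TypeRouter {
    private final Map<String, Integer> assignment = new ConcurrentHashMap<>();
    private final AtomicInteger counter = new AtomicInteger(0);
    private final int queues;

    public TypeRouter(int queues) { this.queues = queues; }

    // Returns the index of the queue that lines of this type must go to.
    // The first time a type is seen it is assigned the next queue index,
    // wrapping around modulo the number of queues.
    public int route(String type) {
        return assignment.computeIfAbsent(
                type, t -> counter.getAndIncrement() % queues);
    }

    public static void main(String[] args) {
        TypeRouter router = new TypeRouter(2);
        for (String line : List.of("A;1", "A;2", "B;10", "C;100")) {
            String type = line.split(";")[0];
            System.out.println(line + " -> queue " + router.route(type));
        }
    }
}
```

Because only a single thread performs the routing, lines of the same type land in the same queue in file order; each consumer then drains its own BlockingQueue independently.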

I have a question about the Producer/Consumer problem.

Let's assume I have a file with lots of data.
Each line of the file is defined as a pair of type and value (e.g. A;1).
The task is to read the file line by line using one or multiple threads (producers) and consume the data with multiple threads (consumers).
The problem is that each line of a given type must be ordered with respect to the other lines of the same type.


For example, suppose the file contains the lines A;1, A;2 and B;10, in that order. We need to ensure that A;1 is processed before A;2.
We can, however, process B;10 before these two lines (A;1 and A;2).

My first solution was to create one thread (a producer). The producer was responsible for reading the file line by line and putting the pairs into a BlockingQueue in the order they occur in the file.
I also created two additional BlockingQueues that provided data to two consumer threads. To achieve the proper order I created a ConcurrentHashMap storing pairs <String, Integer> (meaning that data of type X should go to the Y-th queue), together with an AtomicInteger counter which determines the number of the queue into which the data is put.

For the example above, the scenario could be:
1) the first thread reads "A;1"; the counter is 0 and the mapping is not in the map, so it is added and the counter is incremented (the mapping is A->0)
1.1) the first thread adds "A;1" to the first output queue
2) the second thread reads "A;2"; the counter is 1, but the mapping (A->0) already exists, so the counter is left unchanged
2.1) the second thread adds "A;2" to the first output queue
3) the first thread reads "B;10"; the counter is 1 and the mapping is not in the map, so it is added and the counter wraps back to 0 (the mapping is B->1)
3.1) the first thread adds "B;10" to the second output queue
4) the second thread reads "C;100"; the counter is 0 and the mapping is not in the map, so it is added and the counter is incremented (the mapping is C->0)
4.1) the second thread adds "C;100" to the first output queue

All data of type "A" and "C" will be processed by consumer1, and all data of type "B" by consumer2.

The solution behaves well in the scenario described above, but in the real world the second thread can put the line "A;2" into the queue before the first thread puts the line "A;1".

My second solution was to modify the first one: I created only one thread responsible for splitting the data. The solution works, but I wonder if it is efficient.

How can I implement such a producer/consumer setup and preserve the order for each type?
OK, but can you please provide an example in which we can observe the "happens-before" partial ordering using volatile?
I don't think I quite understand "happens-before", and I cannot find a good explanation of it.
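A standard illustration (class and field names are mine): a plain field published through a volatile flag. The volatile write of `ready` happens-before the volatile read that observes `true`, so everything written before it, including the plain write to `data`, is guaranteed to be visible to the reader:

```java
// Demonstrates the happens-before edge between a volatile write and the
// volatile read that observes it (JLS 17.4.5).
public class HappensBefore {
    static int data = 0;                  // plain (non-volatile) field
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { }            // spin until the volatile read sees true
            // Because the read of ready returned true, the earlier plain
            // write data = 42 is guaranteed visible here.
            System.out.println(data);
        });
        reader.start();
        data = 42;                        // 1. plain write
        ready = true;                     // 2. volatile write publishes it
        reader.join();
    }
}
```

Without `volatile` on `ready`, the reader could spin forever or print 0; with it, the program must print 42.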

I've got another question.
This time it's about volatile.

Here is the code:

Why is it possible to print "NOT OK"?
According to the Java specification, "a volatile write in one thread happens-before any subsequent volatile read in another thread".
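The snippet itself is not shown above, but the usual answer is that the happens-before rule only orders a volatile write with a read that actually returns the written value; it does not force the reader to wait for the write. An exaggerated version (class name is mine) in which the unlucky scheduling is made deterministic:

```java
public class VolatileTiming {
    static volatile boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() ->
                // This volatile read can run before the write below; volatile
                // guarantees visibility and ordering, not that readers block.
                System.out.println(flag ? "OK" : "NOT OK"));
        reader.start();
        reader.join();   // force the read to complete before the write
        flag = true;     // the volatile write comes too late to be observed
    }
}
```

In the original program the same interleaving simply happens by chance: if the read executes first, there is no write for it to "happen after", so "NOT OK" is a legal outcome.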
Thank you for the explanation.
Thanks for the reply, but is there any documentation or specification quote which says that thread death causes all waiting threads to be signalled?
If I used a different object as the lock, say Object lock = new Object(), the problem wouldn't occur and the threads would stay in the WAITING state.
So thread death alone doesn't cause all waiting threads to be notified, except in the case where the object being locked is the thread object itself.

When I execute the following code

the result is:
Waiting for Calc
Waiting for Calc
Waiting for Calc
Waiting for Calc
Waiting for Calc
Result is = 99
Result is = 99
Result is = 99
Result is = 99
Result is = 99

so it surprisingly contains the "TIME OUT" and "Result is = 99" lines although the Reader threads are (or should be) in the WAITING state.
The code wouldn't change its behaviour if I used notifyAll() after the "done = true;" line in the Calc class.

In my opinion, the scenario for the code is:
1) one of the threads acquires the lock on the Calc object
1.1) if it is the Calc thread, it goes to sleep for at least 1 second
1.2) if it is one of the Reader threads, it changes its state to WAITING and releases the lock (some other Reader threads can also change their state before the Calc thread acquires the lock)
2) the other Reader threads change their state to WAITING and release the lock
3) the Calc thread acquires the lock and executes its code
4) the Calc thread releases the lock (without executing any notify method)
5) the Reader threads remain in the WAITING state

but in reality the 5th step is the following:
5*) the Reader threads change their state to READY and then to RUNNING, and they execute the rest of their code

Why does this happen?
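The behaviour follows from how Thread.join() is specified: the Javadoc for join() states that as a thread terminates, this.notifyAll() is invoked on its Thread object, and for that reason recommends never using wait/notify/notifyAll on Thread instances. A minimal demonstration (class name is mine):

```java
// A dying thread invokes notifyAll() on its own Thread object (this is how
// Thread.join() is implemented), so any thread that used the Thread instance
// as a lock and called wait() on it is woken when the thread terminates.
public class ThreadDeathNotify {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
        });
        t.start();
        synchronized (t) {
            while (t.isAlive()) {
                t.wait();   // woken by the implicit notifyAll() on thread death
            }
        }
        System.out.println("woken after thread death");
    }
}
```

With `Object lock = new Object()` instead of `t` as the monitor, no such implicit notification exists, which is why the Reader threads in that variant stay in the WAITING state.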

I've tried the following code:

The result is '1', so it seems that the compiler prefers the method whose parameter list in the bytecode matches the argument list.
In this case the argument list is int[]; the parameter list of the 1st method is int[] and of the 2nd method is int[][], so the first method is more specific.

Please correct me if I'm wrong.
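A self-contained pair of declarations consistent with that description (an assumption, since the original snippet is not shown): in the fixed-arity phases of overload resolution the compiler treats them as compute(int[]) and compute(int[][]), and new int[] {} matches the first exactly:

```java
public class MostSpecific {
    static int compute(int... x) { return 1; }     // fixed-arity form: (int[])
    static int compute(int[]... x) { return 2; }   // fixed-arity form: (int[][])

    public static void main(String[] args) {
        // new int[] {} is an int[]; only the first method is applicable
        // without invoking the variable-arity phase, so it is chosen.
        System.out.println(compute(new int[] {}));
    }
}
```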

And what about these methods:

Executing compute(new int[] {}, new int[] {}), the result is 1.

According to your post, these methods are:
static void compute(int[] x, int... y) {...} -> static void compute(int[], int[])
static void compute(int[] x, int[]... y) {...} -> static void compute(int[], int[][])

The second method is still valid for the above arguments.
Why is the first method more specific than the second one?


Fixed arity takes precedence over variable arity. And with fixed, one method is a perfect match (one parameter with an int array) and the other one doesn't match (two parameters of int array).

Could you please explain why you refer to the variable arity methods as fixed arity methods?

As I understand it, the compiler sees fun(int[] x) and fun(int[] x, int[] y) instead of their variable arity versions... Am I right?
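The answer quoted above can be checked directly with the two methods from earlier in the thread: for the call compute(new int[] {}, new int[] {}), the fixed-arity form compute(int[], int[]) of the first method matches exactly, while the fixed-arity form compute(int[], int[][]) of the second is not applicable at all, so the variable-arity phase never runs and there is no tie to break:

```java
public class FixedArityFirst {
    static int compute(int[] x, int... y) { return 1; }    // fixed form: (int[], int[])
    static int compute(int[] x, int[]... y) { return 2; }  // fixed form: (int[], int[][])

    public static void main(String[] args) {
        // Arguments (int[], int[]): only the first method is applicable
        // as a fixed-arity method, so it wins before varargs are considered.
        System.out.println(compute(new int[] {}, new int[] {}));
    }
}
```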


Could you please explain why the following code:

results in "1"?

When I remove the line marked //1, the result is "2".

Why doesn't the compiler report ambiguous methods?
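The original snippet is not shown; a hypothetical pair of declarations that reproduces the described behaviour is below. With both methods present, the fixed-arity method is applicable in an earlier resolution phase and prints 1; with the line marked //1 removed, only the variable-arity method applies and the result becomes 2. Because the phases are tried in order, the two methods are never in competition, so no ambiguity is reported:

```java
public class NoAmbiguity {
    static int fun(int[] x) { return 1; }     // the line marked //1
    static int fun(int[]... x) { return 2; }

    public static void main(String[] args) {
        // Fixed-arity fun(int[]) is applicable in the first phase, so the
        // variable-arity phase (which would pick fun(int[]...)) never runs.
        System.out.println(fun(new int[] {}));
    }
}
```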