I tried creating 10 files, each containing 100,000 hard-coded lines of text, using a thread pool mechanism and also a sequential execution mechanism, for performance testing.
It looks like sequential execution performs much better than the thread pool. Can somebody tell me whether I'm wrong in implementing the thread pool program, or whether this is simply how it performs? The sequential approach takes 875 milliseconds to write the 10 files; the multi-threaded approach takes 5000 milliseconds. This is too much.
As of now, my program creates the files sequentially, but it is not performing well, so I thought of using a multi-threaded approach with a thread pool. But sequential execution was outstanding compared to the multi-threaded approach with the thread pool.
This is very urgent for me. Can somebody please help me with this?
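For reference, here is a minimal sketch of the kind of benchmark I mean (class and file names are made up, and I've shrunk the line count so it runs quickly): write N files sequentially, then again through a fixed thread pool, and compare wall-clock times.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.concurrent.*;

// Hypothetical reconstruction of the benchmark described above:
// write FILES files sequentially, then via a thread pool, and time both.
public class FileWriteBenchmark {
    static final int FILES = 10;
    static final int LINES = 1_000; // smaller than the 100,000 in the post, for a quick run

    static void writeFile(Path dir, int i) throws IOException {
        StringBuilder sb = new StringBuilder();
        for (int n = 0; n < LINES; n++) sb.append("hardcoded text line ").append(n).append('\n');
        Files.write(dir.resolve("file" + i + ".txt"), sb.toString().getBytes());
    }

    static long sequential(Path dir) throws IOException {
        long t0 = System.nanoTime();
        for (int i = 0; i < FILES; i++) writeFile(dir, i);
        return (System.nanoTime() - t0) / 1_000_000; // elapsed milliseconds
    }

    static long pooled(Path dir) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        long t0 = System.nanoTime();
        for (int i = 0; i < FILES; i++) {
            final int id = i;
            pool.submit(() -> { writeFile(dir, id); return null; });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("sequential ms: " + sequential(Files.createTempDirectory("seq")));
        System.out.println("pooled ms:     " + pooled(Files.createTempDirectory("pool")));
    }
}
```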
Introducing a thread pool is not a magic mantra for increasing performance. In fact, it can degrade performance in many cases because of excessive context switches.
The performance of a multi-threaded program depends on a lot of factors, a few of them being:
Number of processors available and number of threads running
Whether the program logic is inherently sequential or parallel
Whether the work done by the threads is CPU-bound or IO-bound
On top of that, there must be enough work to break even on the cost of creating new threads.
So, I would say that you should take a bigger sample size, i.e. instead of writing 10 files, write, let us say, 100 files (if that is the expected magnitude of work).
Why do you do a Thread.yield() at the end of the task? In a thread pool, as soon as a thread has finished a task, it starts polling the queue like the other threads. I cannot see any use for a Thread.yield() here.
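To illustrate (class name and task body are made up): a pool worker goes straight back to polling the work queue the moment its task returns, so there is nothing for a trailing Thread.yield() to add.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: pool tasks simply return when done; the worker thread
// immediately picks up the next queued task on its own.
public class NoYieldNeeded {
    static int runTasks(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.submit(() -> {
                done.incrementAndGet(); // the unit of work; just return when finished
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed: " + runTasks(8));
    }
}
```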
How many processors does the machine have where you are running this test? It would be nice to see the CPU utilization and context switches during the test. On Linux you can see them using the vmstat and top commands.
Way back in 1985, Commodore came out with the Amiga Personal Computer - the first mass-market machine that came from the factory with full pre-emptive multitasking as a standard core feature. The Mac was using co-operative multitasking at the time and IBM PCs, well, never mind.
Along the way, the original forces at the helm of Commodore (Jack Tramiel and friends/relatives) departed and attempted to rush out a competitor - the Atari ST. Because it was a rush job they had to use off-the-shelf OS components, which basically was the MS-DOS/CP/M architecture and didn't multi-task.
Snipers on the Atari side used these kinds of benchmarks to "prove" the inferiority of the Amiga's OS, but they were missing the point. There [b]is[/b] a penalty for multi-threading. It makes the program more complex, it introduces a whole new universe of possible non-reproducible bugs, and there's the overhead of the task manager.
However, multi-threading begins to shine when you can play these liabilities off against the benefits. Copying multiple files is a case in point.
File copy apps are I/O bound, not CPU bound. So they spend a lot of time sleeping while waiting for I/O to complete. Multi-threading can take advantage of this sleep time to schedule another thread to work while the first one's waiting. And so on and so forth, until the CPU's maxed out.
Or (and this is more likely) the I/O hardware is maxed out. Since you have a fairly small number of data channels to your network and disk drives, once those channels are booked solid, adding additional requests will simply back things up and no further performance gains will be seen.
If you want to optimize a multi-file copy operation, I suggest making a configurable thread count limit and tuning accordingly.
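A minimal sketch of what I mean by a configurable limit (the property name `copy.threads` and class name are made up): read the pool size from a knob, default to one thread per CPU, and tune from there against your actual disk/network channels.

```java
import java.util.concurrent.*;

// Sketch: the copy pool size is a tunable knob, not a hard-coded number.
public class TunableCopyPool {
    public static ExecutorService newCopyPool() {
        int cpus = Runtime.getRuntime().availableProcessors();
        // -Dcopy.threads=N overrides the default of one thread per CPU
        int threads = Integer.getInteger("copy.threads", cpus);
        return Executors.newFixedThreadPool(threads);
    }

    public static void main(String[] args) {
        ExecutorService pool = newCopyPool();
        // submit copy tasks here, measure, then re-run with a different -Dcopy.threads=N
        pool.shutdown();
    }
}
```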
An IDE is no substitute for an Intelligent Developer.
Basically you are sending the hard disk head thrashing all over the place, writing X different files in X different places, so it is hardly surprising it takes longer. Take a look at your CPU utilization; I bet it is mostly waiting on IO.
Years ago I wrote a Java text indexing program that had separate Threads for:
1. Read raw text
3. Merge with existing data
4. Write result
With 2 CPUs, the best it was able to do kept CPU utilization around 70%, so don't expect miracles even if you have a cleanly parallel application.
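The staged design above can be sketched roughly like this: one thread per stage, connected by bounded queues and shut down with a poison pill. The "merge" step here just uppercases each line, purely a stand-in for merging with existing index data.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of a read -> merge -> write pipeline with one thread per stage.
public class IndexPipeline {
    static final String EOF = "\u0000EOF"; // poison pill to shut the pipeline down

    static List<String> run(int lines) throws InterruptedException {
        BlockingQueue<String> rawQ = new ArrayBlockingQueue<>(16);
        BlockingQueue<String> mergedQ = new ArrayBlockingQueue<>(16);
        List<String> out = Collections.synchronizedList(new ArrayList<>());

        Thread reader = new Thread(() -> {           // stage: read raw text
            try {
                for (int i = 0; i < lines; i++) rawQ.put("line " + i);
                rawQ.put(EOF);
            } catch (InterruptedException ignored) {}
        });
        Thread merger = new Thread(() -> {           // stage: merge with existing data
            try {
                for (String s; !(s = rawQ.take()).equals(EOF); ) mergedQ.put(s.toUpperCase());
                mergedQ.put(EOF);
            } catch (InterruptedException ignored) {}
        });
        Thread writer = new Thread(() -> {           // stage: write result
            try {
                for (String s; !(s = mergedQ.take()).equals(EOF); ) out.add(s);
            } catch (InterruptedException ignored) {}
        });

        reader.start(); merger.start(); writer.start();
        reader.join(); merger.join(); writer.join();
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(5)); // [LINE 0, LINE 1, LINE 2, LINE 3, LINE 4]
    }
}
```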
I took your program* and profiled it using JRockit Mission Control. See the attached picture.
The brown parts are where the JVM is waiting for files to be written to disk, the pink parts are where the JVM is waiting for the memory system to fetch a TLA, and the green parts are where the JVM is running. When running with a thread pool size of 4, the CPU utilization goes up to almost 100% (see the graph at the top), and your problem doesn't seem to be IO-bound, at least not on my machine. Even though the brown parts look quite large in the picture, they are really small if you zoom in on them.
Looking at which methods were running the most, I found that the JVM was copying/allocating objects a lot. The application also spent a fair amount of time in the method isTerminated() in ThreadPoolExecutor.
* I changed your program so that the amount of data written was 10 times as much. I wanted to get profiling data for at least a couple of seconds.