Ranganathan Kaliyur Mannar wrote:Hi,
You have to use Callable in place of Runnable in this case.
API quote:
"The Callable interface is similar to Runnable, in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception"
Callable is ideal for your situation because there is the possibility of an exception being thrown by the task. The signature of the 'call()' method has a throws clause. Your code could look like this:
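A minimal sketch along those lines, assuming the task returns an Integer and can throw a checked exception (the class name and the task body are illustrative, not from the original post):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Unlike Runnable.run(), Callable.call() returns a value and is
        // declared to throw Exception, so checked exceptions are allowed.
        Callable<Integer> task = () -> {
            // real work that might throw goes here
            return 42;
        };

        Future<Integer> future = executor.submit(task);
        try {
            Integer result = future.get(); // waits for the task to finish
            System.out.println("Result: " + result);
        } catch (ExecutionException e) {
            // A checked exception thrown inside call() arrives here,
            // wrapped as the cause of an ExecutionException.
            e.getCause().printStackTrace();
        } finally {
            executor.shutdown();
        }
    }
}
```

Calling `future.get()` is also where you find out about failure: the exception thrown by the task is rethrown, wrapped in an ExecutionException.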
Winston Gutkowski wrote:
pravin gate wrote:This exception I am getting when I am trying to connect 4000 users to single port (means 4000 users will be logged in after successful login) .
Port, schmort - shouldn't make a whit of difference (except that you may be taxing a particular bit of inetd (or whatever the port listener du jour is these days)).
Also: nothing to do with 'flow memory'; or so it would seem.
The only thing I see is that there are an awful lot of what look like Swing or AWT methods involved. Any particular reason for that? It may be perfectly reasonable, but is it possible that you're retaining more than just the login credentials from your process?
Fraid I'm no expert when it comes to GUI apps, but 4,000 logins sounds pretty reasonable to me, even if they aren't doing anything. I've worked on some medium size servers that didn't allow that many user logins.
Apart from twiddling with the heap size, I can't suggest much more; if a 10% increase allows you 10% (or roundabout) more logins, I suspect you've found your bottleneck.
Winston
Winston Gutkowski wrote:
pravin gate wrote:So is there any way where I can provide a small amount of jvm (thread) memory for small tasks and a good amount of memory for large tasks.
You can certainly allocate more memory for a JVM (in several ways), but I'm not sure whether that includes limits per Thread (never tried).
However, I'm not sure that either task you list is going to be affected much by throwing memory at it. As you say, the login task is small, and the file transfer task is likely to be limited by the speed of the connection rather than the amount of available memory. In fact, if it was me, I think I might be looking at limiting the amount of memory available to a file transfer process rather than expanding it, simply because it might be hanging around for a long time; however, I think I'd probably use a mechanism like a blocking queue to do it, rather than mucking about with startup params.
My advice would be to do profiling to find out
(a) Whether you really do have a problem, and
(b) Exactly where it is.
Winston
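The blocking-queue idea Winston mentions could be sketched with a ThreadPoolExecutor fed by a bounded queue (the pool size of 4, queue capacity of 100, and CallerRunsPolicy are assumptions for illustration, not the poster's design):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TransferThrottle {
    public static void main(String[] args) throws InterruptedException {
        // At most 4 transfers run at once; at most 100 wait in the
        // bounded queue. When both are full, CallerRunsPolicy runs the
        // task in the submitting thread, which naturally slows submitters.
        ThreadPoolExecutor transfers = new ThreadPoolExecutor(
                4, 4,                       // fixed pool of 4 workers
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 10; i++) {
            final int id = i;
            transfers.submit(() -> {
                // a real transferFile(id) call would go here
                System.out.println("transferring file " + id);
            });
        }

        transfers.shutdown();
        transfers.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

The point is the bound: memory used by pending transfers is capped by the queue capacity, rather than growing with every incoming request.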
Jayesh A Lalwani wrote:Yes, starting an infinite number of threads will perform much better than using a thread pool when all your threads are doing is sleeping.
However, realistically, your threads won't be sleeping all the time. They will be using memory and CPU. Change your demo to make your Runnables do some work that simulates the real work being done.
Also, a ThreadPool doesn't really give you performance. What it gives you is better behaviour under overload. Let's consider a scenario where your task is CPU-bound.
Let's assume that in a real life scenario, each "task" runs for 10 seconds and spends 50% of the time using the CPU and 50% in IO. Let's say you have 4 cores. Now, let's say you have 200 tasks. Using your first solution, you will start 200 threads, all of which will be trying to fight for the CPU. Since you have 4 cores, it comes to 50 threads per core. The core can only execute one thread at a time. So basically, it will be constantly switching contexts between threads. So, the time taken to execute 200 tasks is
t = threads per core × execution time per task × fraction of CPU time + context-switching overhead for 200 threads
  = 50 × 10s × 0.5 + contextSwitchingOverhead(200)
  = 250s + contextSwitchingOverhead(200)
Now, let's see how long it takes with a thread pool. Let's say you put 8 threads in your thread pool, so each thread handles 25 tasks:
t = tasks per thread × execution time per task + context-switching overhead for 8 threads
  = 25 × 10s + contextSwitchingOverhead(8)
  = 250s + contextSwitchingOverhead(8)
Note that context-switching overhead gets dramatically worse as the number of threads increases. Context switching between 200 threads is going to be much, much worse than context switching between 8.
The thread pool is not really giving you performance; i.e., when you have a low number of tasks, it gives you the same performance as starting threads directly, but when the number of tasks is much greater than the resources in the system, the thread pool guarantees that you don't overwhelm the CPU with too many threads.
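The fixed-pool version of the scenario above can be sketched like this (the 8-thread pool matches the example; the busy-loop and sleep are just stand-ins for the 50% CPU / 50% IO split, not measurements):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // 8 worker threads for 200 tasks: excess tasks wait in the
        // pool's internal queue instead of each getting its own thread
        // and fighting for the 4 cores.
        ExecutorService pool = Executors.newFixedThreadPool(8);

        for (int i = 0; i < 200; i++) {
            pool.submit(() -> {
                // Simulated work: burn some CPU, then "wait on IO".
                long sum = 0;
                for (int j = 0; j < 1_000_000; j++) sum += j;
                try {
                    Thread.sleep(10); // stand-in for the IO half
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        pool.shutdown();                        // accept no new tasks
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}
```

Swapping `newFixedThreadPool(8)` for `new Thread(...).start()` per task is the one-line change that recreates the 200-thread version for comparison.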
Jeff Verdegan wrote:Hi, Pravin, and welcome to the Ranch!
If you look at the constructors for the ServerSocket class, you'll see that a couple of them take a backlog parameter. This specifies the queue depth for incoming connection requests. If you continue to look at that documentation, you'll see that the default value is 50.
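A sketch of passing the backlog explicitly (port 0, which asks the OS for any free port, and the backlog value 500 are illustrative choices, not recommendations):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class BacklogDemo {
    public static void main(String[] args) throws IOException {
        // ServerSocket(int port, int backlog): 'backlog' is the requested
        // maximum queue length for incoming connection requests; requests
        // arriving while the queue is full may be refused. With the
        // single-argument constructor the default backlog is 50.
        ServerSocket server = new ServerSocket(0, 500);
        System.out.println("Listening on port " + server.getLocalPort());
        server.close();
    }
}
```

Note the backlog is a hint to the OS; the underlying implementation may clamp it, so a huge value isn't a substitute for accepting connections quickly.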