pravin gate

Greenhorn
since Feb 20, 2012

Recent posts by pravin gate

I am trying your way, but this

is forcing me to either throw the exception or handle it. Can you explain why?

Ranganathan Kaliyur Mannar wrote:Hi,
You have to use Callable in place of Runnable in this case.

API quote:
"The Callable interface is similar to Runnable, in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception"

Callable is ideal for your situation because there is a possibility of an exception being thrown by the task. The signature of the call() method has a throws clause. Your code could look like this:
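(The code sample that originally followed is not preserved in this listing. Below is a minimal sketch of what a Callable-based version might look like; it is an illustration only, not the code from the thread, and the task body is a placeholder.)

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Unlike Runnable.run(), Callable.call() may throw a checked exception.
        Callable<String> task = new Callable<String>() {
            @Override
            public String call() throws Exception {
                // placeholder for the real work, which is allowed to throw
                return "done";
            }
        };

        Future<String> future = executor.submit(task);
        try {
            System.out.println(future.get()); // rethrows the task's exception, wrapped
        } catch (ExecutionException e) {
            System.out.println("Task failed: " + e.getCause());
        } finally {
            executor.shutdown();
        }
    }
}

Note that future.get() declares checked exceptions, which is why the compiler forces you to catch them or declare them; that is the behaviour asked about above.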

OK. Thanks, Winston, for your helpful suggestions.

I also tried increasing the JVM heap size (-Xms and -Xmx), but it's not having any effect on the output. I am still not able to log in 4,000 users.


Winston Gutkowski wrote:

pravin gate wrote:I am getting this exception when I am trying to connect 4000 users to a single port (meaning 4000 users will be logged in after a successful login).


Port, schmort - shouldn't make a whit of difference (except that you may be taxing a particular bit of inetd (or whatever the port listener du jour is these days)).

Also: nothing to do with 'flow memory'; or so it would seem.

The only thing I see is that there are an awful lot of what look like Swing or AWT methods involved. Any particular reason for that? It may be perfectly reasonable, but is it possible that you're retaining more than just the login credentials from your process?

Fraid I'm no expert when it comes to GUI apps, but 4,000 logins sounds pretty reasonable to me, even if they aren't doing anything. I've worked on some medium size servers that didn't allow that many user logins.

Apart from twiddling with the heap size, I can't suggest much more; if a 10% increase allows you 10% (or roundabout) more logins, I suspect you've found your bottleneck.

Winston

12 years ago
I am trying to create a simple thread program using newSingleThreadExecutor().

So here is my sample code.
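(The code itself did not survive in this listing; the following is a minimal sketch of what the description below implies, assuming a trivial Runnable task.)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadDemo {
    public static void main(String[] args) {
        // Single-threaded executor: submitted tasks run one after another on one worker thread.
        ExecutorService executor = Executors.newSingleThreadExecutor();

        executor.execute(new Runnable() {
            @Override
            public void run() {
                System.out.println("Task running on " + Thread.currentThread().getName());
            }
        });

        executor.shutdown();
    }
}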





As we can see, here I have just created an object of ExecutorService and I am trying to execute a task. Up to this point it's clear.

But if I get any exceptions, I just want to restart my task. In this scenario, how can I do that?

Any suggestions will be helpful.
I am getting this exception when I am trying to connect 4000 users to a single port (meaning 4000 users will be logged in after a successful login).


And I am also printing the Runtime free memory of the heap, which shows a last value of 5328 before the exception.
I am able to connect (log in) around 3000 users without any exception.
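(For reference, a minimal sketch of how such a figure is typically printed with Runtime; this is an assumption about how the value above was obtained, since that code is not shown.)

public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // All values are in bytes.
        System.out.println("free:  " + rt.freeMemory());
        System.out.println("total: " + rt.totalMemory());
        System.out.println("max:   " + rt.maxMemory());
    }
}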

So what do I need to do to log in more users?

12 years ago
OK. So here I have created a test application from which I am able to log in (that is, create connections to a single port) a number of users.
But after connecting 3000+ users, the program gives me 'Outofflowmemory' heap space exceptions (i.e. OutOfMemoryError: Java heap space).

I was trying to increase the memory heap size this way in the Java runtime settings: -Xms200m -Xmx500m.
But I am still not able to connect more than 3000+ users.

Why is that? What do I need to do?

Winston Gutkowski wrote:

pravin gate wrote:So is there any way I can provide a small amount of JVM (thread) memory for small tasks and a good amount of memory for large tasks?


You can certainly allocate more memory for a JVM (in several ways), but I'm not sure whether that includes limits per Thread (never tried).

However, I'm not sure that either task you list is going to be affected much by throwing memory at it. As you say, the login task is small, and the file transfer task is likely to be limited by the speed of the connection rather than the amount of available memory. In fact, if it was me, I think I might be looking at limiting the amount of memory available to a file transfer process rather than expanding it, simply because it might be hanging around for a long time; however, I think I'd probably use a mechanism like a blocking queue to do it, rather than mucking about with startup params.

My advice would be to do profiling to find out
(a) Whether you really do have a problem, and
(b) Exactly where it is.

Winston
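(A minimal sketch of the kind of blocking-queue arrangement Winston alludes to, written as an illustration rather than code from the thread; the pool and queue sizes are arbitrary. A bounded queue plus CallerRunsPolicy caps how many file-transfer tasks can be running or waiting at once.)

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedTransferPool {
    public static void main(String[] args) {
        // At most 4 transfers run concurrently and at most 16 wait in the queue;
        // when the queue is full, the submitting thread runs the task itself.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(16),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 100; i++) {
            final int id = i;
            pool.execute(new Runnable() {
                @Override
                public void run() {
                    System.out.println("transferring file " + id); // placeholder for the real transfer
                }
            });
        }
        pool.shutdown();
    }
}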

12 years ago
I have 2 tasks for which I am using multithreading: one task is short, like login; the other is longer, such as file transfer.

So is there any way I can provide a small amount of JVM (thread) memory for small tasks and a good amount of memory for large tasks?

For both tasks I have separate servers.

I have heard that we can increase the size of the Java heap space.

Suppose I have 2 GB of RAM: can I allocate 2 GB or 1.5 GB of it to my heap memory? Is that a good approach? What are the positive and negative aspects of it?

When and how should I use the JVM command-line options -Xms and -Xmx, and for which kinds of applications?
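(For reference, these flags are passed on the java launch command; the values and class name below are purely illustrative.)

java -Xms256m -Xmx1024m com.example.MyApp

-Xms sets the initial heap size and -Xmx sets the maximum heap size the JVM may grow to.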


Any suggestions will be helpful.
12 years ago
Thanks, Jayesh, for providing such useful guidance.

Jayesh A Lalwani wrote:Yes, starting an infinite number of threads will perform much better than using a thread pool when all your threads are doing is sleeping.

However, realistically, your threads won't be sleeping all the time. They will be using memory and CPU. Change your demo to make your Runnables do some work that simulates the real work being done.

Also, a ThreadPool doesn't really give you performance. What it gives you is better failover. Let's consider a scenario where your task is CPU bound.

Let's assume that in a real life scenario, each "task" runs for 10 seconds and spends 50% of the time using the CPU and 50% in IO. Let's say you have 4 cores. Now, let's say you have 200 tasks. Using your first solution, you will start 200 threads, all of which will be trying to fight for the CPU. Since you have 4 cores, it comes to 50 threads per core. The core can only execute one thread at a time. So basically, it will be constantly switching contexts between threads. So, the time taken to execute 200 tasks is

t = threads per core * execution time per thread * percentage of CPU time + context-switching overhead for 200 threads
  = 50 * 10s * 0.5 + contextSwitchingOverhead(200)
  = 250s + contextSwitchingOverhead(200)


Now, let's see how long it takes with a thread pool. Let's say you put 8 threads in your thread pool. Each thread gets 25 tasks.

t = tasks per thread * execution time per thread + context-switching overhead for 8 threads
  = 25 * 10s + contextSwitchingOverhead(8)
  = 250s + contextSwitchingOverhead(8)

Note that context switching becomes exponentially worse as the number of threads increases. Context switching between 200 threads is going to be much, much worse than context switching between 8.

The thread pool is not really giving you performance; i.e., when you have a low number of tasks, it will give you the same performance as starting threads directly, but when your number of tasks is much greater than the resources in the system, the thread pool guarantees that you don't overwhelm the CPU with too many threads.

12 years ago
I am trying to find out about the performance difference between normal multithreading and multithreading using an executor (to maintain a thread pool).

Below are code examples for both.

Without Executor Code (with multithreading):
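(The original snippet is not preserved in this listing; a minimal sketch of what the description suggests, assuming each task just sleeps briefly to stand in for real work.)

public class PlainThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();

        Thread[] threads = new Thread[1000];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(10); // stand-in for real work
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }

        System.out.println("Plain threads took " + (System.currentTimeMillis() - start) + " ms");
    }
}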



With executor (multithreading):
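(Again a sketch only; a fixed pool of 10 threads is assumed here, since the pool type originally used is not shown.)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();

        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            pool.execute(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(10); // same stand-in work as above
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        System.out.println("Executor took " + (System.currentTimeMillis() - start) + " ms");
    }
}

With sleep-only tasks like these, the fixed pool will indeed look slower than starting 1000 threads, which is the effect discussed in the replies above.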



For sample output please see attached image.


When I run both programs, it turns out the executor is more expensive than normal multithreading. Why is this so?

And given this, what exactly is the use of the executor? We use the executor to manage thread pools.

I would have expected the executor to give better results than normal multithreading.

Basically I'm doing this as I need to handle millions of clients using socket programming with multithreading.

Any suggestions will be helpful.
12 years ago
Thanks for showing interest.

If the default value of the queue depth for incoming connection requests is 50, does it mean that my port (say, 5000) can handle only 50 requests at a time?


Jeff Verdegan wrote:Hi, Pravin, and welcome to the Ranch!

If you look at the constructors for the ServerSocket class, you'll see that a couple of them take a backlog parameter. This specifies the queue depth for incoming connection requests. If you continue to look at that documentation, you'll see that the default value is 50.
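(For illustration, a minimal sketch of the constructor Jeff refers to; the port and backlog values are arbitrary.)

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {
    public static void main(String[] args) throws IOException {
        // The second argument is the backlog: how many pending connection
        // requests the OS will queue for this socket before refusing new ones.
        ServerSocket server = new ServerSocket(5000, 100);

        while (true) {
            Socket client = server.accept(); // takes the next queued connection
            System.out.println("accepted " + client.getRemoteSocketAddress());
            client.close();
        }
    }
}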

I am creating a web application with a login page, where a number of users may try to log in at the same time, so I need to handle a number of requests at once.

I know this is already implemented for a number of popular sites, like Gtalk.

So I have some questions in my mind.

"How many requests can a port handle at a time ?"

For example, as we know, when we implement client-server communication using socket programming (TCP), we pass a port number (an unreserved port number) to the server for creating a socket.

So I mean to say: if 100,000 requests come in at the same time, what will the port's approach to all these requests be?

Does it maintain some queue for all these requests, or does it just accept as many requests as its limit allows? If so, what is the request-handling limit of a port?

Summary: I want to know how a server serves multiple requests simultaneously. I don't know anything about it. I know we connect to a server via its IP address and port number, and that's it. So I thought there is only one port, and many requests come to that one port from different clients, so how does the server manage all the requests?

This is all I want to know. If you could explain this concept in detail, it would be very helpful. Thanks anyway.
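(To illustrate the usual answer to the question above: a single listening port hands each accepted connection its own socket, and a pool of worker threads services them. The sketch below is an illustration only; the class name, port, and sizes are made up.)

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SimpleLoginServer {
    public static void main(String[] args) throws IOException {
        ExecutorService workers = Executors.newFixedThreadPool(50);

        // One listening port; accept() returns a new Socket per client,
        // so the port itself is never "used up" by a single connection.
        ServerSocket server = new ServerSocket(5000, 200);
        while (true) {
            final Socket client = server.accept();
            workers.execute(new Runnable() {
                @Override
                public void run() {
                    try {
                        // placeholder for reading the login request and replying
                        client.close();
                    } catch (IOException ignored) {
                    }
                }
            });
        }
    }
}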