We have a web app running in stress test with Silk Performer. When we use co-located test agents it scales nicely to hundreds of users. When we add even a few test agents located across town in our other office it slows down all agents horribly and scales only to a fraction of the users. We have ten T1 lines to the other office, but all the agents pile onto one of them for some unknown reason.
My working theory is: over a slow network, every HTTP request holds its connection and server process open longer. As request times stretch, fewer connections are free at any given moment, until every agent is queued up waiting for a connection and even the co-located agents slow down.
I'm shooting in the dark here - not a network or Apache expert. Does that idea ring true to anyone? [ April 07, 2005: Message edited by: Stan James ]
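The theory above is really just Little's Law: average connections in flight = request rate x average request duration. A quick back-of-the-envelope sketch (the rates and latencies below are illustrative assumptions, not measurements from the actual test):

```python
def in_flight(requests_per_sec, ms_per_request):
    """Average simultaneously open connections (Little's Law)."""
    return requests_per_sec * ms_per_request / 1000

# Co-located agents: assume ~55 ms total per request at 400 req/s.
local = in_flight(400, 55)    # 22 server processes busy on average

# Remote agents over a congested T1: assume ~500 ms per request.
remote = in_flight(400, 500)  # 200 processes busy at the same rate

print(local, remote)
```

If Apache's MaxClients (or the equivalent worker limit) sits anywhere between those two numbers, the slow remote requests alone can pin the whole pool, which would explain why the co-located agents degrade too.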
Another potential bottleneck: Are you holding some shared resource from beginning to end of a request, so that requests can only be handled sequentially?
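To make that failure mode concrete, here's a hypothetical sketch (not taken from the poster's app): if every request holds one global lock for its full duration, twenty "concurrent" requests degrade to sequential throughput no matter how many threads or processes you run.

```python
import threading
import time

LOCK = threading.Lock()  # stand-in for any request-long shared resource

def handle_request(work_seconds=0.01):
    with LOCK:                    # held from start to end of the request
        time.sleep(work_seconds)  # simulated request work

def run(n_threads):
    start = time.perf_counter()
    threads = [threading.Thread(target=handle_request) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# 20 requests of 10 ms each take ~200 ms when serialized, not ~10 ms.
elapsed = run(20)
print(f"{elapsed:.2f}s")
```

With the lock in place the wall-clock time grows linearly with the request count, which is the signature of a serialized shared resource.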
Certainly not on purpose. This scales well with as many local agents as we can throw at it, so I think the design must be pretty fair. It goes all to heck in a hurry with remote agents, though. I'm struggling to think of how they might be different, except for something related to network latency.
Last night the test scaled to many hundreds of users nicely, then ran up to 100% database CPU when we added another user group into the mix. That's going to be one group-specific query somewhere. A much easier hit than the remote agent mystery, I'm betting.