adam spline

Greenhorn
since Sep 29, 2010

Recent posts by adam spline

I am trying out the PersistentManager with the FileStore. I frequently get the following message in my log:

org.apache.catalina.session.PersistentManagerBase swapIn
SEVERE: persistentManager.swapInInvalid


The message seems to appear once a session has timed out, but not in response to any request. It would make sense for a message to be logged if a request tried to access an invalid session, which could lead to an invalid swap (though why that would be SEVERE I do not know). In this case, however, the message appears simply when the session times out.

[1] Why is Tomcat even trying to swap in a session when there was no request?
[2] Is this message harmless (see this discussion)?
[3] If it is harmless, how can I suppress this message?

Here is my setup:

Thanks for your help!

-Adam
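If it does turn out to be harmless, one way to suppress it (a sketch, assuming Tomcat's default JULI logging and a standard conf/logging.properties) is to raise the log level for just that class:

```properties
# conf/logging.properties: silence messages from this one class only,
# leaving the rest of org.apache.catalina logging untouched
org.apache.catalina.session.PersistentManagerBase.level = OFF
```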
11 years ago
I am using Apache Commons FileUpload with Tomcat 6. I have a standard requirement in which users can upload images via an html web form. I am trying to prevent users from uploading huge files. I would like to be able to cut off the files if they are too big without having the entire large file be uploaded before it is rejected. This is to prevent abuse. I know I can use some html5 validation on the client, but I need a solution on the server to prevent abuse.

Here is what I have tried and does not work for me:

1) Setting setFileSizeMax and setSizeMax on ServletFileUpload does not help, because the entire file must be uploaded before the exception is thrown. Thus, a 1 GB file will completely upload, and only when it is parsed is the exception thrown.

2) Checking the request's content length does not prevent the entire file from being uploaded

3) Setting maxPostSize in server.xml does not work for content-type multipart/form-data; the value is ignored in that situation. See this post http://tomcat.10.n6.nabble.com/The-purpose-of-maxPostSize-td2158253.html

4) The best I have so far is the FileUpload Streaming API http://commons.apache.org/fileupload/streaming.html. With this method we can process the bytes as they come in, and if the count gets too big, we simply stop writing them to disk. However, I am not able to stop the upload itself. When I try to close the client's input stream, the stream does not close; it blocks and waits until the entire file is uploaded. Similarly, if I do not close the input stream (which seems to be bad practice) and just return out of the servlet, the client still continues to upload the file, and Tomcat appears to happily accept the bytes even though they are never written to a file.
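The byte-counting cutoff in (4) can be sketched independently of the Servlet API. This is a minimal version of the technique, not a fix for the underlying problem that the client keeps sending; the class and method names are made up:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SizeLimitedCopy {
    /**
     * Copies in to out, but throws IOException as soon as more than
     * limit bytes have been read, so at most limit bytes hit the disk.
     */
    public static long copyWithLimit(InputStream in, OutputStream out, long limit)
            throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
            if (total > limit) {
                // stop writing; the request itself may still be streaming in
                throw new IOException("upload exceeds " + limit + " bytes; aborting");
            }
            out.write(buf, 0, n);
        }
        return total;
    }
}
```

Inside the streaming loop, `item.openStream()` from FileItemStream would be the `in` here. As described above, though, throwing out of the servlet does not stop the client from continuing to send the remaining bytes.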

This seems like a standard situation, but I am not able to find a clear solution. Any help will be appreciated. Thanks!

12 years ago
Thanks for all of your suggestions.

To answer some of your questions,

Will you be able to implement your application faster in PHP than in Java? What's the cost of implementing a solution in PHP vs Java?



While Java is my primary language, I was dabbling in PHP for this project, because some of the web services I was using seemed to have PHP clients that were easier to integrate. So I started taking a shot at a PHP solution. I was able to start development really fast because the PHP client functions were so easy to drop into the code. But I noticed my memory usage per connection was high with PHP (as I indicated in my original post). So I took a little time to re-examine it, and was able to get everything working fine in Java with a bit more effort. Thrashing it with JMeter, the Java solution is faster, and I can handle a larger number of connections on a smaller server (using standalone Tomcat). Once I get through a few of the setup hurdles, I think the development cost with Java will not be too much higher than with PHP. So, I am going forward with the Java solution. (I do not want to make this a flame war on languages; I am just sharing my experience and how I decided to deal with it.)

Although modern webservers generally boost performance using a "keep-alive" mechanism, the core HTTP protocol determines the number of connections by the number of active requests being processed and not by the number of users logged in.



I was using the term "users" rather loosely. Yes, HTTP is a request/response protocol. However, I do not think it is terribly incorrect to think of webserver load in terms of "users", depending on how we define the term. For example, we can think of users as those who are actively clicking around (as opposed to those who are merely logged in). With keep-alives on (within, say, Apache), an entire process is held up until the keep-alive is done. And those processes are often heavy with mod_php, so each keep-alive process consumes quite a bit of memory. So, with Apache keep-alives and 100 MaxClients... if a system has 100 users clicking around every 15 seconds, the system is pretty much maxed out, regardless of how fast an individual request/response time is. Of course we can make the keep-alives shorter to improve the situation (which I think is a pretty good idea), but the defaults for Apache and Tomcat are 15 and 20 seconds... This is more along the lines of what I was thinking about when I spoke of "users."


12 years ago
Hi,

My understanding is that PermGen (in some sense) holds class code in memory. Often we have lots of jar files in our servlet apps, which themselves contain lots of classes. When a jar file is on the classpath (say, in Tomcat's lib directory), are all the classes in all those jars automatically loaded into PermGen?

A similar question: once one class from a jar file is used, does the JVM load all the classes in that jar into PermGen, or just the class that is used (loading the rest later when necessary)?
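For what it's worth, a quick way to observe that classes are initialized lazily rather than eagerly (a minimal sketch; the class names are made up):

```java
public class LazyLoadDemo {
    static boolean heavyLoaded = false;

    // Heavy's static initializer runs only when the class is first used,
    // not merely because the class is on the classpath.
    static class Heavy {
        static { heavyLoaded = true; }
        static int answer() { return 42; }
    }

    public static void main(String[] args) {
        System.out.println("Heavy loaded yet? " + heavyLoaded); // false
        Heavy.answer();                                         // triggers loading/initialization
        System.out.println("Heavy loaded now? " + heavyLoaded); // true
    }
}
```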

Thanks! -Adam
12 years ago
Hi everyone,

I looked around a bit and really could not get a solid answer on this (and perhaps there is no solid answer without profiling). I am just wondering, in general, how much memory an extra concurrent servlet connection will take. I know there are many factors that affect this (I am using Tomcat 7, BIO, on Ubuntu 32-bit, Sun JVM). But ignoring code that I might add, around how much memory does each thread take?

I do not want to make this into a flame war, but the reason I ask is that I am starting a new project and was considering PHP. However, I was noticing that each PHP process was taking about 8-10 MB (which kind of makes sense considering how PHP does some things). This seemed a bit high for our server and the number of users we expect. So, I am just wondering if anyone has a good guesstimate for this within Tomcat.
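One measurable piece of the per-thread cost is the stack, which with the BIO connector you get once per concurrent connection. The JVM default stack size is tunable with -Xss, and a stack size can also be requested per thread (a sketch; the size and names here are illustrative, and the VM is free to round the hint):

```java
public class StackSizeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Request a 256 KB stack for this one thread instead of the JVM
        // default; -Xss changes the default for all threads instead.
        Thread worker = new Thread(null, () -> {
            // a BIO connector thread mostly sits here blocked on a socket read
        }, "small-stack-worker", 256 * 1024);
        worker.start();
        worker.join();
    }
}
```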

Thanks,

-Adam
12 years ago
Hi everyone,

I am working on a project that reduces (in some sense) to a chat room. My prototype application uses Tomcat 7, NIO connector, and Async Servlet 3.0.

To receive updates from the server, the client opens a connection to the servlet. The servlet does not close the connection, and streams data (when appropriate) to the client. Periodic bits are sent to keep the connection alive.

I based the server (in some sense) on this article: http://www.javaworld.com/javaworld/jw-02-2009/jw-02-servlet3.html?page=3

But the general idea is that when a new client joins, I create an AsyncContext and add that to a List. When new events occur on the server, I loop through the list and print data to that client using the AsyncContext.

My questions are the following:

1) I can't find a clean way of detecting that a client has left (i.e., moved to another page). I currently handle this by catching an exception when trying to write to that client's AsyncContext; if the exception is caught, I assume the client left and remove its AsyncContext from the list of clients who need to receive data. Is there a better way of doing this?

2) I am wondering how well the solution at the above link will really scale. It seems that async works best when there is some processing to do, which frees up the thread to service other requests... but in this case lots of clients would be listening and likely receiving data back every few seconds. Does the above link present a scenario that would scale well?

3) If one could expect the client to be receiving *new* data every few seconds, would it be better to simply poll using a standard blocking IO HTTP connector? Since keep-alive should keep the connection persistent (in some sense), could this be better? It seems like the approach at the link works well when the client rarely needs new data, but would traditional polling work well when new data is needed often (i.e., at every poll new data is likely)?

Any other thoughts or insights would be great. Thanks.

12 years ago
Karthik,

Thanks again for helping me understand this. I think Apache with mod_jk in front of external Tomcat servers will be a fine design for now, and we can just add more Tomcats as needed. If it starts taking some hits, I will also explore some of the other solutions you mentioned. My only concern with those is that I will need sticky sessions; mod_jk seems to handle this well out of the box, and I could not find quick info on this feature for the other load balancers you mentioned. But I am sure I could write some kind of rule to handle it. Anyway, I think it will be a while before something like that is necessary.

Thanks again. -Adam
13 years ago
Thanks for your answer, it was very helpful. To fill in some of the info I left out: I am using Ubuntu Server, and I will likely serve some static content from Apache, so it would be convenient to use Apache as a load balancer and static content server (as opposed to a mere reverse proxy).

Let me get one more clarification if possible:

So, Apache sends the request to a Tomcat instance. The Apache process blocks and waits until it receives the response from Tomcat, which it then sends to the client. While waiting, Apache is in a loop. I assume this loop does not take much CPU (correct?), and this is one of the reasons it can scale: we can have many Apache processes because each of them is doing very little (simply waiting for responses from the Tomcat servers and then sending them back to the client)... is this correct?

thanks.

-Adam
13 years ago
Hi everyone,

Up until now, I have only dealt with systems that use a single server, but I am now considering a system that is load balanced with Apache mod_jk to several servers that run Tomcat.

I need some help understanding how this works, so please let me know if I have the right idea. Thanks!

So I have three machines; let's call them apache0, tomcat0 and tomcat1. apache0 runs mod_jk and is the only one accessible to the public via HTTP requests. A request comes into apache0 and is forwarded to one of the Tomcat servers to be serviced. (And here is where I am a little unsure...) Does the response then pipe back through apache0 to be sent to the client? If so, what happens to that Apache service thread during this whole process? Since Apache needs to wait for Tomcat to send back the response, does Apache block on IO in that thread (or does it do something similar to Tomcat's NIO connector)? Does this waiting take a lot of resources for Apache (my assumption is no)?
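For reference, a minimal workers.properties for this topology might look roughly like the following (hostnames and ports are assumptions, using mod_jk's default AJP port):

```properties
# two backend Tomcats, reached over AJP
worker.tomcat0.type=ajp13
worker.tomcat0.host=tomcat0
worker.tomcat0.port=8009
worker.tomcat1.type=ajp13
worker.tomcat1.host=tomcat1
worker.tomcat1.port=8009

# the load-balancer worker that apache0 actually routes to
worker.lb.type=lb
worker.lb.balance_workers=tomcat0,tomcat1
worker.lb.sticky_session=1

worker.list=lb
```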

Thanks for any help you can give me.

-Adam
13 years ago
Hi all,

I am just writing to make sure that I understand how the keepAliveTimeout works in Tomcat.

I am using Tomcat 7. The client (which happens to be a Silverlight app) polls the server every few seconds for new data. Since a few seconds is below the keepAliveTimeout, I assume the connection is being kept alive on the server, and thus TCP will not have to be renegotiated. I installed an app to view the HTTP headers, and it does look like keep-alive is on, along with content lengths. Would I see something in the HTTP headers if the connection were being closed somewhere?

According to the Tomcat docs, the default for keepAliveTimeout is the value of connectionTimeout (which is 60 secs here). So, as I understand it, the server will keep that TCP connection open with the client for 60 secs (as long as the client does not send a close request). Right?
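For reference, the relevant Connector attributes in server.xml look roughly like this (the values here are illustrative, not recommendations):

```xml
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="60000"
           keepAliveTimeout="60000"
           maxKeepAliveRequests="100" />
```

If keepAliveTimeout is omitted, it falls back to connectionTimeout, which matches the behavior described above.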

If the number of threads gets too high, does Tomcat first close out these open connections before rejecting new ones?

We will likely be changing our model soon to Comet, but I just want to make sure I understand this portion.

Thanks,

-Adam
13 years ago
thanks everyone for the clarification.

-Adam
13 years ago
Hi all,

This might be a silly question, but I could not find an exact answer.

When you place an object into a session (session.setAttribute), is that object stored in memory on the server or is it serialized to disk?

Same question for objects placed into application scope. Thanks for all your Java help!

-Adam
13 years ago
Hi all,

I am working on an application (and applet) that needs to be signed (not just self-signed). I am looking at using GoDaddy.

Unlike most things in the Java world, with code signing it does not seem easy to test things without putting out the cash first. So I would like to ask some quick questions.

[1] For Java signing, I imagine you can use any computer to sign the code (i.e., it does not need to be signed on the computer that is the webserver). Correct?

[2] Is there any relationship between the computer I use to sign the code and the signed code itself? For example, let's say I buy a code signing cert from GoDaddy, but then I change my development computer. Can I still use that same cert on a different computer, or is it somehow "linked" to the machine that was used? In other words, if my dev machine crashes, will I need to buy a new cert from GoDaddy (or wherever)?

[3] Has anyone used GoDaddy for code signing? Are there any gotchas?

[4] I assume that once code is properly signed, it should work on any platform with a proper JVM... correct?

Anyway, just thought I would ask these questions before I put out the cash with GoDaddy.

Thanks,

-Adam
14 years ago
Well, I tried it with both 0 and 1 for the acceptCount.

I get the feeling that acceptCount does not work, and there seem to be some other posts that say the same thing. So I am not quite sure why it is buffering my requests when it should send back a connection refused. Of course, buffering requests is not such a bad thing; I am just trying to explore how the app behaves under high stress.

If anyone has an idea.. please pass it on.

Thanks,

-Adam
14 years ago
To clarify a bit what I meant by "simultaneously":

I have a client app that launches a bunch of threads. Those threads make a url connection to my servlet. When the servlet receives the request it does a bunch of fake work (for about 5 seconds) before returning a response. So, if there are 30 client threads which have sent a request to the servlet... there would be 30 servlet threads that are occupied (for around 5 seconds each).

So, if I have 30 client threads and at least 30 maxThreads, then everything works fine: each client thread has a corresponding servlet thread. But if I drop maxThreads to 10, then only 10 clients can be serviced at a time. This works out as expected: in the first 5 seconds, the first 10 clients get serviced, then the next 10, and then the next 10, all in roughly 5-second intervals.

This is as expected, EXCEPT for the fact that I have acceptCount set to 0, which means that after the first 10 servlet threads start working, the other 20 should get a rejection from the server (because all the threads are used up and the accept queue has no room).

But this is not what happens. Instead, Tomcat seems to buffer the other requests and service them in due time. The documentation would lead me to think that those clients should have gotten a connection refused. Any idea what is going on?
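The connector configuration for the experiment described above would look roughly like this (port and protocol assumed; acceptCount is documented as the queue length for incoming connections when all request-processing threads are busy):

```xml
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="10"
           acceptCount="0" />
```

One thing worth noting when interpreting the results: acceptCount is passed to the operating system as the TCP listen backlog, and the OS is free to treat it only as a hint, so very small values may not be honored exactly.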

-Adam



14 years ago