I am trying to set up a cluster of Tomcat servers to serve my dynamic content with session replication. Although failover seems to be working, with each node aware of the other, session replication does not work quite as expected: I get the warning "WARNING: Context manager doesn't exist". For testing I use the Sessions example in the examples web application of the Tomcat distribution. Although I can see session variables created through one node, the other node never receives them.
Here is how I have set up the nodes:
Zither: NFS server serving Apache/Tomcat binaries and config files
Z2: node1 running Apache 2.2 and Tomcat 6
Z3: node2 running Apache 2.2 and Tomcat 6
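For context, each node enables clustering in conf/server.xml. A minimal setup just nests the default cluster implementation inside the Engine element (this is the stock Tomcat 6 one-liner, with defaults for membership and replication; anything beyond the class name here is illustrative):

```xml
<Engine name="Catalina" defaultHost="localhost">
  <!-- Default all-to-all session replication via multicast membership -->
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
  ...
</Engine>
```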
I was having a similar problem, and in reading through posts (by the same user as the original note on this post), found that he fixed his problem by changing the '&lt;Host name=...' entry in his config to 'localhost'... Well, I had a similar problem, except my Host entry was already 'localhost', and my error was:
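For anyone checking the same thing, the Host name in question is the one in conf/server.xml, and it should match on every node in the cluster (the attributes other than name below are just the Tomcat defaults):

```xml
<!-- conf/server.xml: the Host name must be consistent across nodes,
     since replicated contexts are addressed as "<host>/<webapp>" -->
<Host name="localhost" appBase="webapps"
      unpackWARs="true" autoDeploy="true">
</Host>
```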
After verifying that the web.xml in the webapp has the &lt;distributable/&gt; tag (it does), and that we deploy the same '.war' file on each host in the cluster, I was stumped.
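In case it helps anyone doing the same verification: the &lt;distributable/&gt; element goes directly under &lt;web-app&gt; in WEB-INF/web.xml (the schema version shown is just an example):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         version="2.5">
  <!-- Marks this webapp's sessions as eligible for replication -->
  <distributable/>
</web-app>
```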
Also, a note about my setup - I was only getting the WARNING on _1_ host out of my 2-host cluster.
Anyway, I think I figured out the problem - and thought I would document here in case anyone else is searching for a solution.
1) Our setup deploys multiple webapps across our cluster of Tomcats. However, only one webapp uses session replication, so the rest are not set up to use the cluster manager.
2) If one of the hosts starts up first, it deploys the 'clusterable' webapp before the other host does. The other host's cluster manager then receives a request for the session state of "localhost/&lt;webapp&gt;"... So it throws the WARNING message to alert the local Tomcat that it's being asked for session state on a webapp it doesn't have (yet). After the server completes startup, the problem goes away, because the webapp eventually gets deployed, and the session state can be replicated normally. (Doh!)
To eliminate the spurious warning messages (for our production environment, no errors/warnings is "good" when they aren't worth worrying about), we added the following two lines to our 'logging.properties' file (we're using the JULI logger):
org.apache.catalina.ha.session.ClusterSessionListener.level = SEVERE
org.apache.catalina.ha.session.ClusterSessionListener.handlers = 2localhost.org.apache.juli.FileHandler
This sets the ClusterSessionListener class to report only SEVERE errors instead of a flood of useless WARNING-level messages.