Clustered Environment

nitin pokhriyal
Ranch Hand

Joined: May 19, 2005
Posts: 263
I don't know if this is the right forum to ask this question; if not, please move it to the right one.

I have an application with a link to download the logs as a zip file, and everything is configured using Spring.
Now the system has moved to a clustered environment. I have no idea how to download the log files from each server, since the application is deployed on every machine and each one keeps its own logs separately.

Any suggestion or design will be appreciated.

Thanks in advance
Nathan Pruett
Bartender

Joined: Oct 18, 2000
Posts: 4121

I'm assuming you're using log4j to generate the logs and you're saving the logs as files...

You'll have to change how you handle logs to make them work in a clustered environment. Instead of each server saving log files, you'll need to log to a shared database or use something like Syslog. Log4J has appenders for either. You'll also want to change your log format to include the machine name / IP address so you'll know which machine in the cluster generated the log message.
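For the machine name part, one lightweight option is a servlet filter that puts the node's hostname into the Log4J MDC, so a %X{host} token in the appender's ConversionPattern shows which server produced each line. Just a sketch, assuming log4j 1.x in a servlet container - the class name and the 'host' key are whatever you want to call them:

import java.io.IOException;
import java.net.InetAddress;
import javax.servlet.*;
import org.apache.log4j.MDC;

// Puts this node's hostname into the log4j MDC for every request, so a
// %X{host} token in an appender's ConversionPattern identifies the machine.
public class HostNameLoggingFilter implements Filter {

    private String hostName;

    public void init(FilterConfig config) throws ServletException {
        try {
            hostName = InetAddress.getLocalHost().getHostName();
        } catch (Exception e) {
            hostName = "unknown-host";
        }
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        MDC.put("host", hostName);
        try {
            chain.doFilter(req, res);
        } finally {
            MDC.remove("host");
        }
    }

    public void destroy() {
    }
}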

As for downloading them as zip files - you'll have to create zips on the fly from whatever centralized approach you choose - you're probably doing something similar already though - I don't think there's a built-in job that zips log files in Log4J.
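For illustration, a minimal sketch of zipping files on the fly into the servlet response with java.util.zip - the class and method names are made up, and logDir would be wherever the logs you want to bundle actually live:

import java.io.*;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import javax.servlet.http.HttpServletResponse;

// Streams the files in a directory to the browser as a zip, built on the fly.
public class LogZipWriter {

    public void writeZip(File logDir, HttpServletResponse response) throws IOException {
        response.setContentType("application/zip");
        response.setHeader("Content-Disposition", "attachment; filename=\"logs.zip\"");

        ZipOutputStream zos = new ZipOutputStream(response.getOutputStream());
        try {
            byte[] buffer = new byte[8192];
            File[] files = logDir.listFiles();
            if (files == null) {
                files = new File[0];
            }
            for (File logFile : files) {
                if (!logFile.isFile()) {
                    continue;
                }
                zos.putNextEntry(new ZipEntry(logFile.getName()));
                InputStream in = new FileInputStream(logFile);
                try {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        zos.write(buffer, 0, read);
                    }
                } finally {
                    in.close();
                }
                zos.closeEntry();
            }
        } finally {
            zos.close();
        }
    }
}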


-Nate
Write once, run anywhere, because there's nowhere to hide! - /. A.C.
nitin pokhriyal
Ranch Hand

Joined: May 19, 2005
Posts: 263
Thanks, Nathan, for your reply. I will start looking in this direction, but before that I have two questions in mind:
1. If machine A writes to machine B's log, don't you think it will be a performance issue?
2. I haven't gone through the Log4J API so far; I will do that, but in case you know: if machine B is configured for centralized logging and it goes down, can a backup machine IP be given in log4j.xml?


Thanks
Nathan Pruett
Bartender

Joined: Oct 18, 2000
Posts: 4121

Yes - there is network overhead with writing logs to a centralized location - but if you want *one* log from *multiple* servers, you're going to have to do some kind of centralized logging. You can configure appenders to only write certain messages to certain logs - for example, send only ERROR-level messages to the centralized log while still logging DEBUG-level messages in the local logger - something like this would reduce network traffic. Failover wouldn't be handled in the log4j configuration... it would need to be handled by whatever medium you're using to log (i.e. - if you're logging to a database, some databases can be configured for failover) or possibly your network configuration.
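One possible log4j.properties sketch along those lines (log4j 1.x assumed; the file path and syslog host are placeholders) - full DEBUG output stays in each node's local file, and only ERROR and above is sent to the central syslog appender via its Threshold setting. The %X{host} in the pattern is the MDC key from the filter idea above:

# local rolling file keeps full DEBUG output on each node
log4j.rootLogger=DEBUG, local, central

log4j.appender.local=org.apache.log4j.RollingFileAppender
log4j.appender.local.File=logs/app.log
log4j.appender.local.MaxFileSize=10MB
log4j.appender.local.MaxBackupIndex=5
log4j.appender.local.layout=org.apache.log4j.PatternLayout
log4j.appender.local.layout.ConversionPattern=%d %-5p [%X{host}] %c - %m%n

# central syslog appender only receives ERROR and above
log4j.appender.central=org.apache.log4j.net.SyslogAppender
log4j.appender.central.SyslogHost=central-log-host
log4j.appender.central.Facility=LOCAL0
log4j.appender.central.Threshold=ERROR
log4j.appender.central.layout=org.apache.log4j.PatternLayout
log4j.appender.central.layout.ConversionPattern=%d %-5p [%X{host}] %c - %m%n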
nitin pokhriyal
Ranch Hand

Joined: May 19, 2005
Posts: 263
Thanks a lot, Nathan, for such a helpful reply. I appreciate it. There is another concern here: we have a cache mechanism that was not written with clustering in mind. We have a button that, when clicked, reloads the dropdown data held in the cache. Now that clustering is coming into the picture, is there a way to do the same thing on the other JVMs in the cluster? (We are not using EJB, as I mentioned earlier.)

Thanks again
Nathan Pruett
Bartender

Joined: Oct 18, 2000
Posts: 4121

What 'cache mechanism' are you using? Is it a specific project/product, or something your team built?

Also, where is it caching data to? Files? Memory? Application/Session scope of the web application?

nitin pokhriyal
Ranch Hand

Joined: May 19, 2005
Posts: 263
Sorry about the incomplete details. The cache is nothing but a static LinkedHashMap which contains the id and name of several kinds of basic data - organization names with ids, program names with ids, status names with ids, etc. There is a button which, when clicked, drops the whole cache and reloads it from the database.
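Roughly, it looks something like this (the names are made up for illustration):

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the kind of cache described above: a static map of id -> name
// pairs, cleared and refilled from the database when the refresh button is hit.
public class LookupCache {

    private static final Map<Long, String> ORGANIZATIONS = new LinkedHashMap<Long, String>();

    public static Map<Long, String> getOrganizations() {
        return ORGANIZATIONS;
    }

    // Called from the controller behind the refresh button; freshValues would
    // come from a DAO query such as "SELECT id, name FROM organization".
    public static void reload(Map<Long, String> freshValues) {
        ORGANIZATIONS.clear();
        ORGANIZATIONS.putAll(freshValues);
    }
}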

Thanks
Nitin
Nathan Pruett
Bartender

Joined: Oct 18, 2000
Posts: 4121

It sounds like this static LinkedHashMap is being used as a 'read-only cache' - i.e. instead of making a round-trip to the database for these values every time, they're just being read from the LinkedHashMap - the button just refreshes the values from the database. If this is the case, and you leave it like this, then each server in the cluster will have its own LinkedHashMap, and pressing the button will only update the LinkedHashMap on *that* server.

You could make some kind of RMI/distributed programming approach to this, but that would probably be overkill in this case - and I'm not sure about performance of something like this as opposed to just taking out the cache and accessing the database every time.

A better way may be to create a database field like 'last_cache_update' that holds a timestamp. Have the button just write the current time into this database field. Then, create a kind of Singleton/Proxy that protects access to the Map rather than accessing it directly as a static. Whenever the map is requested, check the value of the 'last_cache_update' column in the database to see if you need to refresh the cached values. This does mean that every time the map is requested, one read to the database will be performed, so it's less performant than just accessing it in memory, *but* it's still more performant than doing away with the cache and performing *n* database reads every time.
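A rough sketch of that idea - the table/column names and the LookupDao interface are hypothetical placeholders, not a prescription:

import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

// Guard around the cached map: every request first checks a 'last_cache_update'
// timestamp in the database and reloads the map if any node has requested a
// refresh since this node last loaded it.
public class CachedLookupService {

    // Hypothetical data-access interface - implement with JDBC/Spring as you prefer.
    public interface LookupDao {
        Date readLastCacheUpdate();            // e.g. SELECT last_cache_update FROM app_settings
        void writeLastCacheUpdate(Date when);  // e.g. UPDATE app_settings SET last_cache_update = ?
        Map<Long, String> loadOrganizationNamesById();
    }

    private final LookupDao dao;
    private Map<Long, String> organizations = new LinkedHashMap<Long, String>();
    private Date loadedAt = new Date(0);

    public CachedLookupService(LookupDao dao) {
        this.dao = dao;
    }

    public synchronized Map<Long, String> getOrganizations() {
        Date lastUpdate = dao.readLastCacheUpdate();
        if (lastUpdate != null && lastUpdate.after(loadedAt)) {
            organizations = dao.loadOrganizationNamesById();
            loadedAt = lastUpdate;
        }
        return organizations;
    }

    // Bound to the refresh button; pressing it on any node bumps the timestamp,
    // so every other node reloads on its next getOrganizations() call.
    public void requestRefresh() {
        dao.writeLastCacheUpdate(new Date());
    }
}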

This would work if the cache really is 'read-only' - if it's 'read/write' - i.e. you are both reading and writing to the cache to temporarily save values to be written to the database - then this approach would not work.
nitin pokhriyal
Ranch Hand

Joined: May 19, 2005
Posts: 263
Thanks, Nathan. Due to time constraints we are leaving the current implementation as it is, but I will keep these things in mind for other projects where we have a scenario that can use them.
 