I would like to take some load off the database. I have an IP-range table (ip_start, ip_end, blocked) against which I check whether a visitor should be allowed to view my page. I run the check just once, when the visitor arrives and his session bean is created, but I believe it still hurts performance. It would be nice if I could load the IP-range table into memory and do the check there, right? But it is very difficult for me to design this solution properly.
That's not enough of a basis to make modifications for the sake of performance. You need to be certain where the delay originates. This sounds like a simple check in a simple table, which should not take long at all. What is the query you're running? What size is the table? Are you using indexes?
I've taken several thread dumps, and I often see JDBC waiting to run the query from my IP-check method.
I use indexes; the table has more than 80k rows (4 columns), but I only want to cache the allowed ranges (about 40k rows). The query looks like this:
You might want to use the DB's query analyzer (or EXPLAIN option, or whatever it's called for the DB you're using) to make sure that the index is used for this particular query, and that there's not a table scan going on.
Is the query slow as well if you run it directly from the DB's command-line tool?
Originally posted by rajesh bala: Can't the IP range checks be set up as a bunch of rules in Apache or your web server itself?
Extending rajesh's idea: if the JDBC check matters and is slow, don't use JDBC. I'm not sure I understand why you're doing this, but a typical IP-address blacklist is fairly compact; you could load a suitable structure into memory once, when the application/web service starts, and do the check entirely in memory.
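To make the in-memory idea concrete, here is a minimal sketch of such a structure. It assumes IPv4 addresses, non-overlapping ranges, and illustrative names (`IpRangeChecker`, `addRange`, `contains` are not from this thread); a `TreeMap` keyed on each range's start gives an O(log n) lookup via `floorEntry`, which is plenty fast for 40k rows:

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical in-memory IP-range check, loaded once at application startup
// (e.g. from a SELECT over the ip_start/ip_end columns).
// Assumes IPv4 and non-overlapping ranges.
public class IpRangeChecker {

    // Maps each range's start address to its end address,
    // both stored as unsigned 32-bit values in a long.
    private final TreeMap<Long, Long> ranges = new TreeMap<>();

    public void addRange(String startIp, String endIp) {
        ranges.put(toLong(startIp), toLong(endIp));
    }

    /** Returns true if the address falls inside any loaded range. */
    public boolean contains(String ip) {
        long value = toLong(ip);
        // Find the range with the largest start <= value; if the address
        // is inside any range, it can only be this one.
        Map.Entry<Long, Long> entry = ranges.floorEntry(value);
        return entry != null && value <= entry.getValue();
    }

    // Converts a dotted-quad IPv4 string to an unsigned 32-bit value.
    private static long toLong(String ip) {
        long result = 0;
        for (String part : ip.split("\\.")) {
            result = (result << 8) | Integer.parseInt(part);
        }
        return result;
    }

    public static void main(String[] args) {
        IpRangeChecker checker = new IpRangeChecker();
        checker.addRange("10.0.0.0", "10.255.255.255");
        checker.addRange("192.168.1.0", "192.168.1.255");
        System.out.println(checker.contains("10.1.2.3"));    // true
        System.out.println(checker.contains("192.168.2.1")); // false
    }
}
```

You would populate it once from the database at startup (and perhaps refresh it on a timer if the table changes); after that, the per-request check never touches JDBC at all.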