
Mattias Arthursson

Ranch Hand
since Jul 26, 2004

Recent posts by Mattias Arthursson

There are many possible reasons why locks might be held in the database. First of all, are you absolutely sure that all resources are properly closed, no matter what happens? If an error is encountered, do you still close all resources? Always?

Another possible reason might be that you are running queries that take a long time to execute. For a reasonable amount of data with a fairly complicated query it's quite possible that the queries run for a couple of minutes (if your indexes are not set up properly). Unless explicitly stated otherwise, the query will hold a lock on the table until it's finished, preventing updates in the meantime. In SQL Server you can specify that no locks should be held by a query using the "WITH (NOLOCK)" option.
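For SQL Server, such a query might be built like this (the table and column names are made up for illustration; bear in mind that NOLOCK reads can return uncommitted data):

```java
public class NoLockQuery {
    /** Builds a SQL Server query that reads without taking shared locks.
        Table and column names here are illustrative only. */
    public static String buildQuery(String table) {
        // WITH (NOLOCK) tells SQL Server not to hold shared locks for this read
        return "SELECT id, status FROM " + table + " WITH (NOLOCK) WHERE status = ?";
    }
}
```

The resulting string would then be run through a normal PreparedStatement.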
One serious problem with static methods is that using them significantly reduces testability, since they can't be mocked in any simple way. As Ulf noted, there are situations when they are useful, but IMO they should be avoided as far as possible.
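A minimal sketch of the testability difference (the class names are made up): a class that calls a static method directly can't easily be given a fake collaborator, while one that depends on an interface can.

```java
// Illustrative only: static call vs. injected dependency.
interface RateSource {
    double currentRate();
}

// Hard to test: the static call can't be replaced with a fake.
class StaticPriceCalculator {
    static double lookupRate() { return 1.25; } // imagine this hits a remote service
    double price(double amount) {
        return amount * lookupRate();
    }
}

// Easy to test: the collaborator is injected and can be faked.
class InjectedPriceCalculator {
    private final RateSource rates;
    InjectedPriceCalculator(RateSource rates) { this.rates = rates; }
    double price(double amount) {
        return amount * rates.currentRate();
    }
}
```

In a test you can pass a fake `RateSource` returning a fixed rate and get predictable results; the static version always goes through the real `lookupRate()`.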
15 years ago
It all depends on what you are going to use it for. There are a number of different LDAP servers out there, commercial as well as free.

Have a look at OpenLDAP, that's a good, free alternative. As for the commercial alternatives, Sun has a Directory Server and there's also the option of using Microsoft's Active Directory (AD) for some applications.
15 years ago
It seems it's not the TestRunner that is to blame for this but rather TestSuite. It creates each test instance (i.e. one for each test method in each test class) on initialization. I guess that TestSuite being dynamic requires it to have references to each actual Test instance.

Anyway, the result is the same - there are references to each actual test instance throughout the full test run, preventing any objects referenced by them from being garbage collected.

An interesting side effect is that this becomes even worse if you create objects in the constructor of your TestCase (or in the declaration of a field) - that will potentially allocate huge amounts of memory before test execution has even started, which means that it doesn't even matter if you clean up in tearDown().

Conclusion: all references should be created in setUp() and released in tearDown().

I have no idea whether this applies to JUnit4 though.
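The effect can be sketched without JUnit itself (the suite and test classes below are simplified stand-ins, not the real JUnit 3 classes): a runner that instantiates every test up front keeps every field assigned outside setUp()/tearDown() alive for the whole run, whereas fields created in setUp() and nulled in tearDown() are only reachable during their own test.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for JUnit 3's TestSuite/TestCase behaviour.
class FakeTestCase {
    byte[] data; // memory-hungry fixture

    void setUp()    { data = new byte[1024 * 1024]; } // allocate per test
    void runTest()  { /* exercise data */ }
    void tearDown() { data = null; }                  // release so it can be collected
}

class FakeSuite {
    // Like JUnit 3's TestSuite: all instances are created up front
    // and referenced for the entire run.
    final List<FakeTestCase> tests = new ArrayList<>();

    void add(FakeTestCase t) { tests.add(t); }

    void run() {
        for (FakeTestCase t : tests) {
            t.setUp();
            t.runTest();
            t.tearDown(); // after this, t.data is null, so the suite's
                          // reference to t no longer pins the fixture
        }
    }
}
```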
15 years ago

Originally posted by Mike Himstead:
Pardon me for being so noisy, but why would they keep the reference? For displaying the results? Shouldn't it be possible to show them without a reference to the test? Or for a quick re-run?


It's a fair question, and one I have asked myself on several occasions. It should indeed be possible to show the results without an actual reference. The fact remains, though.
15 years ago
To clarify: the test runner keeping a reference to the test instance prevents any objects allocated by it from being garbage collected until it's finished.

Of course, StrutsTestCase and JWebUnit aren't really used for unit testing, but rather for integration or functional testing. It is, however, possible to imagine the same problem arising in an extremely large suite of unit tests. It all depends on the memory constraints.
15 years ago
Some JUnit test runners keep a reference to each test that has been run, so for very large test suites where each test creates memory-intensive objects in setUp() (e.g. using StrutsTestCase, JWebUnit, etc.) it is indeed necessary to set your instance variables to null in tearDown() in order to avoid OutOfMemoryErrors.

For plain JUnit tests it's probably not necessary, but I still find it good practice to clean up in tearDown().
15 years ago

Originally posted by Jeanne Boyarsky:
For example, shuffle the deck check the cards aren't in the original order. While it is possible for the shuffled deck to be in the same order, you can reduce this possibility even further by shuffling multiple times.


Well, if you check that the cards aren't in the original order you verify that you did something to the collection, which might be considered better than nothing, but you're in no way checking that the correct method is called. IMO the benefit of that is very limited (as is - admittedly - just checking that the flag is set correctly).
15 years ago
Using the Collections.shuffle() method you should rely on that method being properly tested by its implementor; thus you really shouldn't need to test for randomness in your unit test. What you might want to make sure is that Collections.shuffle() is actually called. Then again, that's not all that easy to do, since static methods generally can't be mocked...

I'd probably be happy with testing that the flag is set correctly and then assuming that this is too simple to break.
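A hedged sketch of what such a test could check: that shuffling preserves the elements, and that repeated shuffles produce at least one new order (made deterministic here by seeding the Randoms; the helper and its name are made up):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffleCheck {
    /** Returns true if at least one of several seeded shuffles changes the
        order, while the elements themselves are always preserved. */
    public static boolean shufflesLookSane(List<Integer> original) {
        List<Integer> sortedOriginal = new ArrayList<>(original);
        Collections.sort(sortedOriginal);

        boolean orderChanged = false;
        for (int seed = 1; seed <= 5; seed++) {
            List<Integer> copy = new ArrayList<>(original);
            Collections.shuffle(copy, new Random(seed));

            // The shuffled list must contain exactly the same elements.
            List<Integer> sortedCopy = new ArrayList<>(copy);
            Collections.sort(sortedCopy);
            if (!sortedCopy.equals(sortedOriginal)) {
                return false;
            }
            if (!copy.equals(original)) {
                orderChanged = true;
            }
        }
        return orderChanged;
    }
}
```

Note that even this only verifies "something was rearranged", not statistical randomness - which is the point above about trusting Collections.shuffle()'s own tests.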
15 years ago
Another common solution to avoid writing the same dull JDBC code over and over is to use Spring JDBC, part of the Spring Framework. It neatly encapsulates all of the plumbing (connection management, exception handling, looping through result sets, resource cleanup, etc.) and lets you focus on the important stuff.
In your code you risk leaving the connection hanging if any exception is encountered during commit() or rollback(). I'd probably put the rollback() in the catch block and the commit() last in the original try block, like so:
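The code this referred to wasn't preserved in the archive, but the pattern described would look roughly like this (the method name and SQL are illustrative):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TransactionExample {
    /** Runs the given updates in one transaction: commit() last in the
        try block, rollback() in the catch block, and the connection is
        closed in finally no matter what happens. */
    public static void runInTransaction(Connection con, String... updates)
            throws SQLException {
        try {
            con.setAutoCommit(false);
            Statement stmt = con.createStatement();
            try {
                for (String sql : updates) {
                    stmt.executeUpdate(sql);
                }
            } finally {
                stmt.close();
            }
            con.commit();      // commit last in the try block
        } catch (SQLException e) {
            con.rollback();    // rollback only when something went wrong
            throw e;
        } finally {
            con.close();       // never leave the connection hanging
        }
    }
}
```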
It looks fine at a quick glance. The error message basically means that you need an authenticated context to perform your search. Since Principal and Credentials are supplied in the environment (which should normally result in an authenticated Context), this leads me to think that one of them is incorrect (i.e. an invalid login).

One would expect to get an error message when creating the DirContext instance, but Active Directory has been known to be less than friendly when it comes to error messages.
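For reference, the environment being discussed is typically set up like this (the URL, user DN and password here are placeholders, not values from the original post):

```java
import java.util.Hashtable;
import javax.naming.Context;

public class AdContextSetup {
    /** Builds the JNDI environment for an authenticated LDAP bind.
        The URL, principal and credentials are placeholders. */
    public static Hashtable<String, String> buildEnvironment() {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "CN=someuser,CN=Users,DC=example,DC=com");
        env.put(Context.SECURITY_CREDENTIALS, "secret");
        return env;
    }
    // new InitialDirContext(env) would then attempt the authenticated bind;
    // with Active Directory, bad credentials may only surface on the first search.
}
```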
You should use sleep(), but notice that the argument to that method is the time to sleep in milliseconds. In order to sleep for a second, you need to call sleep(1000).
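A minimal sketch:

```java
public class OneSecondPause {
    public static void main(String[] args) throws InterruptedException {
        System.out.println("pausing...");
        Thread.sleep(1000); // the argument is in milliseconds: 1000 ms == 1 second
        System.out.println("done");
    }
}
```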
15 years ago

Originally posted by Roger Chung-Wee:
If there is little point in such object pooling, then why do EJB vendors provide this facility and explain how to configure pooling in order to tune applications?


It's all due to historical reasons - in the old days, before generational garbage collection, it was expensive to create and throw away objects. This is no longer an issue. If you don't have a pool of objects, there will be no need to tune it.

Also, I suspect some (misguided) fear of threading issues as a reason for the object pooling. Normally there would be no problem whatsoever in using the same stateless session bean instance over and over, and concurrently between clients, since it's not supposed to hold any state. Compare this to the lightweight design using e.g. Spring: there you usually have only one instance of each service and DAO object, and they are all used concurrently by the clients. As long as the service objects don't have any internal state (which they shouldn't have) this is not a problem.
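The point about a single stateless instance being safe to share can be sketched without any framework (the service here is made up):

```java
// A made-up stateless service: no instance fields, so a single shared
// instance is safe to use concurrently - no pooling needed.
public class TaxService {
    public double addTax(double amount) {
        return amount * 1.25; // no state read or written between calls
    }

    public static void main(String[] args) throws InterruptedException {
        final TaxService shared = new TaxService(); // one instance for everyone
        Runnable client = new Runnable() {
            public void run() {
                // Every "client" uses the very same instance concurrently.
                System.out.println(shared.addTax(100.0));
            }
        };
        Thread t1 = new Thread(client);
        Thread t2 = new Thread(client);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```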

Originally posted by Roger Chung-Wee:

What you are missing is connection pooling as a DBMS runs faster with dedicated connections than if it has to handle incoming connection attempts at run time. Furthermore, depending on the DBMS persistence option, pools may be mandatory so that the server controls the JDBC connection. This ensures your EJB transactions are committed or rolled back correctly and completely.


Connection pooling and object pooling are two completely different things and should not be confused. Object pooling (as in the EJB world: keeping a pool of EJB instances alive in order to avoid creating and garbage collecting objects) is virtually useless nowadays, whereas connection pooling is extremely useful, as you describe above.