Joe Boxer

Greenhorn
since Oct 23, 2003

Recent posts by Joe Boxer

Why bother implementing?
Take a look at third-party open source projects. I am happy with ehCache at the moment.
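For instance, the classic net.sf.ehcache API comes down to a few calls; the cache name here is made up, and in practice the cache would be configured in ehcache.xml:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class CacheExample {
    public static void main(String[] args) {
        // Create a manager (picks up ehcache.xml from the classpath, or the failsafe defaults).
        CacheManager manager = CacheManager.create();
        manager.addCache("quotes");               // or define the cache in ehcache.xml
        Cache cache = manager.getCache("quotes");

        cache.put(new Element("greeting", "hello"));  // key/value pair
        Element hit = cache.get("greeting");          // null on a cache miss
        System.out.println(hit != null ? hit.getValue() : "miss");

        manager.shutdown();
    }
}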

Joe
17 years ago
I am having a similar problem on "some" JSPs. The debugger stops on, e.g., abc.jsp but not on xyz.jsp. It stops correctly on my colleague's machine!

Is anyone aware of any temp files I could delete to reset the debugger?

Cheers
Joe
18 years ago
Warren,

Thanks for your explanation about the optimization; it stands to reason. I can imagine that, due to the caching of variables, a null object could be returned if the fetch of "created" is executed later than "return _me".

But why would the first line be skipped?

Under what circumstances could it happen? I mean, we write lines of code to be "executed", not in the hope that they "might be executed". Is there a link/book you would recommend for reading a bit more about that issue?

Thanks again
Joe
Guys,

What about this?


Why is this still broken?

There is an excellent article on the double-checked locking issue:
http://www-106.ibm.com/developerworks/java/library/j-dcl.html?loc=j
and the classic one:
http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html


The trouble with out-of-order writes (allocating the memory and then executing the constructor) is that the normal double-checked locking checks the object against null, i.e. if (instance == null). That can cause timing issues and return an object that has not been correctly initialized.
But here we check a boolean flag (boolean assignments are atomic).
In this scenario the boolean "created" is set to true INSIDE the sync block.
The first thread exiting the sync block will properly flush "created" (setting it to true). So if the second thread passed the first check and got blocked because the first one was in the sync block, it will not pass the second check.
I don't see how a compiler could optimize the code above.
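The original listing is not preserved here, but the flag-based variant described above would look roughly like this (the names _me and created are taken from the discussion; the rest is guessed):

public class Singleton {
    private static Singleton _me;            // hypothetical singleton field
    private static boolean created = false;  // the flag discussed above

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (!created) {                        // first check, outside the lock
            synchronized (Singleton.class) {
                if (!created) {                // second check, inside the lock
                    _me = new Singleton();
                    created = true;            // set only INSIDE the sync block
                }
            }
        }
        return _me;
    }
}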
In listing 7 of the first link I mentioned above there is a "fix" for the double-checked idiom that is broken because of JIT compiler optimization. But the solutions are different. Compare:



It seems obvious that the compiler would optimize it (move line 5 inside the sync block).


Where is the trouble in my snippet?

Any suggestions?

Thanks folks,

Joe
[ May 13, 2005: Message edited by: Joe Boxer ]

hi,

I have two enterprise applications. Each of them is deployed on a different app server within the same node.
A session EJB from application A, via a method doSomething(), accesses an entity EJB in application B.

Transaction attributes on doSomething are "transaction required".
Transaction attributes on the entity EJB are "transaction required".

The call to doSomething rolls back the transaction.
If I change the transaction attributes on the EJB to "not supported" or "required new" it works fine.

I am wondering if I am violating any transaction rules.
Is it not possible to share/carry a transaction across two app servers?
Do the transaction settings only apply to transactions within one app server?
If it is not possible to carry the transaction initiated on app server A over to app server B, how would I automatically roll back the transaction if it failed on the second app server? Is the only way to manage the transaction programmatically?
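If it does come down to managing it programmatically, a bean-managed transaction would look roughly like this (a sketch only; the facade call is a placeholder):

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class DoSomethingWithBmt {
    public void callFacade() throws Exception {
        // Standard JNDI name for the container's UserTransaction.
        UserTransaction tx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
        tx.begin();
        try {
            // facade.doSomething();  // the call that spans both app servers (placeholder)
            tx.commit();
        } catch (Exception e) {
            tx.rollback();            // roll back explicitly if the remote call failed
            throw e;
        }
    }
}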


Cheers

Joe
19 years ago
well,

shouldn't you first create the Hashtable (env) with the proper values and then create the InitialContext, passing in the env?
How would the InitialContext know about the properties otherwise?

check the port number for the RMI connector, too (just an idea)
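Something along these lines (the factory class and port are just placeholders here):

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ContextHelper {
    public static InitialContext createContext() throws NamingException {
        // 1. Fill the environment first ...
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.ibm.websphere.naming.WsnInitialContextFactory"); // placeholder factory
        env.put(Context.PROVIDER_URL, "iiop://localhost:2809");       // check the RMI/IIOP port

        // 2. ... then pass it in; otherwise the InitialContext falls back to defaults.
        return new InitialContext(env);
    }
}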


cheers
Joe
19 years ago
Within WSAD (could be WAS as well) I have two app servers created to run independently (the same machine, no clustering, just two servers): Server1 with an enterprise application A, Server2 with an enterprise application B. B uses EJBs from A.

Server1: the port for the naming service is 2809
Server2: the port for the naming service is 2810

The JNDI name of one of the EJBs running in A is, e.g., A_abc.

Within application B, in order to resolve A_abc, I do the following:

Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");

env.put(Context.PROVIDER_URL, "iiop://localhost:2809");

Trying to look up the bean, I fail. I am not quite sure what to pass to the lookup method.

dumpNameSpace.bat -port 2809 for Server1 gives back

(top)/nodes/localhost/servers/server1/A_abc

How do I look up the bean on Server1, within its name space, from Server2?

initialContext.lookup("nodes/localhost/servers/server1/A_abc�);
initialContext.lookup("cell/nodes/localhost/servers/server1/A_abc�);

do not work.
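Pulled together, the client side amounts to roughly this (a reconstruction from the snippets above, not the exact code):

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class LookupFromServer2 {
    public static Object lookupAbc() throws Exception {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.ibm.websphere.naming.WsnInitialContextFactory");
        env.put(Context.PROVIDER_URL, "iiop://localhost:2809"); // Server1's naming port

        InitialContext ctx = new InitialContext(env);

        // Names tried so far, both without success:
        // return ctx.lookup("nodes/localhost/servers/server1/A_abc");
        return ctx.lookup("cell/nodes/localhost/servers/server1/A_abc");
    }
}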



Funnily enough, the beans bound to the name space of Server2 have the same prefix:

35 (top)/nodes/localhost/servers/server1/B_xyz

Why can the lookup not resolve the name, given that I initialised the InitialContext correctly (I guess I did)?

What am I doing wrong?

thanks for any help
Joe
19 years ago
yep, it helped. Your explanation makes sense.

thanks man!
It is more of an academic question than a code one.
XDE generated pretty much the same class diagram as the one at
http://home.earthlink.net/~huston2/dp/observer.html

This should help to understand my question.
Why do we have the ConcreteSubject?
Why not make the Subject class non-abstract and merge the ConcreteSubject class into it?

I know that a pattern is only a suggestion and we need to tweak it, but "kicking out" the ConcreteSubject seems to me like a violation of the pattern idea and a loss of some flexibility. But I fail to see the impact of it.

Cheers
Joe
Using the observer pattern code from XDE's repository (I guess that is the proper pattern implementation), I was wondering about the need for the ConcreteSubject class.

There is the abstract class Subject and the public class ConcreteSubject, which implements two additional methods, getState() and setState(..).

Why wouldn't we put getState() and setState() in the Subject class and make it non-abstract? The ConcreteObserver would then use the Subject class instead of ConcreteSubject.

What flexibility does the ConcreteSubject give us here?
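For reference, the shape of the generated classes, roughly (illustrative code, not the exact XDE output):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Abstract subject: only knows how to manage and notify observers.
abstract class Subject {
    private final List observers = new ArrayList();

    public void attach(Observer o) { observers.add(o); }
    public void detach(Observer o) { observers.remove(o); }

    protected void notifyObservers() {
        for (Iterator it = observers.iterator(); it.hasNext();) {
            ((Observer) it.next()).update(this);
        }
    }
}

// Concrete subject: adds the actual state plus getState()/setState().
class ConcreteSubject extends Subject {
    private int state;

    public int getState() { return state; }
    public void setState(int s) { state = s; notifyObservers(); }
}

interface Observer {
    void update(Subject changed);
}

class ConcreteObserver implements Observer {
    public void update(Subject changed) {
        System.out.println("new state: " + ((ConcreteSubject) changed).getState());
    }
}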

Cheers
Joe
hi,
just wondering if there is a maximum number of characters we can assign to a variable of type String?
cheers
Joe
19 years ago
I have a question about using interfaces in sequence diagrams.

Scenario:
A client works with objects via interfaces, so the client only invokes methods of an object's interface. What is the proper way to model this in a sequence diagram? Do I describe the interaction using only the interfaces, or should I use the impl objects instead? Or both? I could still describe the interaction as the main actor sending a message to an interface and the interface forwarding the message to the impl class, but that would increase the maintenance and complexity of the sequence diagrams.
Is using interfaces in sequence diagrams recommended or bad practice?
cheers
Joe

Originally posted by Lewin Chan:
base64decoder stream is part of javamail-1.2, try sticking that in the classpath to your task.


It helped; found the class in mail.jar.
cheers
20 years ago
Hi,
A quick one. What is the best practice on the following scenario:
we have a client, a session façade, and an entity EJB, let's say Abc. The client passes an AbcDTO to the session façade. Now, knowing that co-located EJBs (here the session and the Abc entity) use local interfaces, the DTO is not a big help in terms of performance. But is it not better to still pass the DTO on to the entity bean instead of invoking a number of set methods? Would it not increase readability and keep the number of methods in the entity bean's interface minimal?
Any pros and cons on that?
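As a sketch, the two styles I am comparing (all names are hypothetical):

// Hypothetical types standing in for the real DTO and the entity bean's local interface.
class AbcDTO {
    String name;
    double price;
}

interface AbcLocal {
    void setName(String name);
    void setPrice(double price);
    void setData(AbcDTO dto);   // one coarse-grained setter taking the whole DTO
}

class AbcFacadeSketch {
    // Style 1: the session facade copies the DTO into the entity field by field.
    void updateFieldByField(AbcLocal abc, AbcDTO dto) {
        abc.setName(dto.name);
        abc.setPrice(dto.price);
    }

    // Style 2: the facade hands the whole DTO to the entity in a single call.
    void updateWithDto(AbcLocal abc, AbcDTO dto) {
        abc.setData(dto);
    }
}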
Cheers
Joe
I am wondering if anybody has faced a similar problem trying to install an EAR application into WebSphere using Ant.
I am running Ant outside WSAD (headless), using the CustomTags from wsadmin.jar.
I can start a server, I can't stop it, but trying to install an application I get
[24/10/03 09:39:40:749 EST] 5ff4e810 AbstractShell A WASX7093I: Issuing message: "WASX7023E: Error creating "SOAP" connection to host "localhost"; exception information: java.lang.NoClassDefFoundError: com/sun/mail/util/BASE64DecoderStream"
Strange that the script needs Sun's class to connect over SOAP to the configuration service.
I checked all *.jars in $WAS_HOME/libs; there is nothing from Sun.
Any ideas?
I tried every possible combination of parameters to invoke the custom tag com.ibm.websphere.ant.tasks.StartApplication.
No effect. Must be the classpath...
cheers
Joe
20 years ago