
Ben Ethridge

Ranch Hand, since Jul 28, 2003

Recent posts by Ben Ethridge

Assuming that it is a "well-designed" MVC Struts app, is it easy to expose that Struts app as a web service?

Ben
13 years ago
Thanks. Yes, essentially I'm trying to "simulate a whole request", i.e. mock a (pretty much) complete request.

If you were to run, say, a Struts app in Rational/Eclipse (WebSphere) or Eclipse/Tomcat, set a breakpoint in the RequestProcessor class at the start of the process method, and then look over the contents of the HttpServletRequest object, you would see that it contains much, much more than the simple "request" that came from the client. The container and Struts have now "decorated" the HttpServletRequest object with many other objects from the container (as opposed to objects contributed by the client that originally sent the HttpServletRequest). (I personally think this is bad design, since it's hard to separate the actual client "request" goodies from the container decorations, but that's another topic.)

The essential problem is that if you want to mock this HttpServletRequest, you have to somehow mock the Struts- and container-contributed "decorations" as well, I believe, and each container (WebSphere, BEA, Sun, etc.) apparently decorates it slightly differently.

If you don't mock the HttpServletRequest pretty much as the container would, your app doesn't run as a mocked POJO app.

Hopefully I'm just missing something simple here, because if your app doesn't run "mocked", i.e. run just as it would if it were in the container, what's the point of mocking it at all?

Ben
13 years ago
So, how does it know how to (or how do you tell it to) mock all the websphere-specific objects, i.e. the objects that the container adds to the HttpServletRequest object?

Ben
13 years ago
Hi.

Have any of you successfully mocked an HttpServletRequest in Rational/Eclipse (or even Eclipse/Tomcat)?

I'm seeing all kinds of objects in a "real" HttpServletRequest, once the WebSphere container gets hold of it. (WebGroup, for example).

Anyone know how to go about doing this, with, say, Spring or EasyMock, or whatever?

Ben
13 years ago
This may help explain:

http://builder.com.com/5100-6370-5144546.html

...and this:

https://coderanch.com/t/233581/threads/java/kill-child-thread-its-parent

...though the latter may be more debate than you were looking for.

However, it is somewhat interesting to note that both of the above contain that one little keyword, the word I've learned to hate seeing, the word that always seems to signal a potential design or coding flaw. And the word is:

"unfortunately".

Ben
Good points, Chris.

I agree that J2EE programmers tend to have weaker threading skills (myself included, though I am getting better at threading now that I'm not relying on J2EE for everything).

Sebastian, why wouldn't J2EE solve your problem, kind of to Chris's point?

Ben
Yes, your questions/answers are clear enough, but I personally have no experience with multi-node (multi-computer) JVM setups that are both multi-node AND multi-threaded.

Perhaps someone else on this forum?

RMI sounds like it would also solve the problem, but it comes with its own set of baggage (learning curve, advantages/disadvantages).

However, based on what you say, I don't see why the multi-cpu (single computer) would not handle your concurrent threads as you desire. Am I missing something on this? Are the servers in distant locations? Why the need for the multi-computer? And if you go multi-computer, why do you then also need multi-thread? Why not just one thread per computer (per JVM)?
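For what it's worth, here's a minimal sketch of what I mean by letting a single JVM use all the cpus: one task per available processor, run through the standard ExecutorService. (Class and task names are just mine for illustration; on a multi-cpu box these tasks genuinely run in parallel.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MultiCpuSketch {

    // Run one Callable per available CPU and collect the results.
    public static List<Integer> runOnAllCpus() throws Exception {
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        List<Future<Integer>> futures = new ArrayList<Future<Integer>>();
        for (int i = 0; i < cpus; i++) {
            final int taskId = i;
            futures.add(pool.submit(new Callable<Integer>() {
                public Integer call() {
                    // Stand-in for real work; each task runs on its own
                    // thread in the pool, one thread per cpu.
                    return taskId * 2;
                }
            }));
        }
        List<Integer> results = new ArrayList<Integer>();
        for (Future<Integer> f : futures) {
            results.add(f.get()); // blocks until that task finishes
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runOnAllCpus());
    }
}
```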

Kind of to Ernest's point above, we would need more details to help with a good solution.


Ben
Hi, Sebastian.

Assuming you mean what I think you mean, do you really need your app to be multi-node (i.e. a cluster)? ...or would multi-cpu (on a single "node") perform for you just as well:

https://coderanch.com/t/233607/threads/java/Multi-threading

If not, can you tell us a bit about what you mean by "node" and "cluster" (do you mean something like a WebSphere node?)...and why you think you need a cluster of nodes as opposed to a "cluster" of cpus?

Ben
No problem. We'll agree to disagree.

As for the garbage collector, yes, I think it has a severe design flaw, for essentially the same reason. You can "construct" an object whenever YOU say so, i.e. you have full control over that, but you cannot "destruct" the object whenever YOU say so. The garbage collector decides when that will occur, i.e. once again the "start" and the "stop" are not controllable AT THE SAME LEVEL.

So, imagine you decide to use java for your real-time app, say your new X-fighter (airplane), to control the ailerons, elevator and rudder. (I used to be a flight instructor, so I know a bit about this particular "app".) You, the pilot, decide to input some left aileron to miss that radio antenna that someone suddenly put up in front of you. Oh, S*^#!

Right at that same second, the garbage collector, knowing more about what needs to be done and when than you, the pilot (you who are now frantically moving the unresponsive yoke back and forth), and knowing more than the poor Java app programmer, who did the best she could by giving the almighty GC "hints" in the form of System.gc() calls that the GC, by design, is free to ignore... (Right? I was taught that System.gc() is not a guarantee to collect at that point, right?)

The GC, in its infinite wisdom...wisdom that Gosling or someone at his level of knowledge coded into it (maybe he was in a hurry and didn't have time to write a really good GC? If so, that's forgivable. Been there myself many times.)...anyway, the GC decides that NOW's the time to collect the 100,000 objects that have been filling memory, objects it didn't have time to collect before, because so many objects were being quickly created as the pilot desperately attempted to maneuver through that bad thunderstorm a couple of minutes ago.

This notion, that the GC (the JVM programmer) knows more than the pilot (the user), is essentially one of Alan Cooper's points in his book, and yeah, I'd call that a "design fault". Once again, this isn't just me saying this. See "Garbage Collection":

http://www.cs.virginia.edu/~mpw7t/cs655/nojava.html

Note the date in the article above, because the good news is that it looks like, on this one, they've decided to fix the flaw:

http://java.sun.com/developer/technicalArticles/Interviews/Bollella_qa2.html

http://www.onjava.com/pub/a/onjava/2006/05/10/real-time-java-introduction.html

(Again, note the article dates above.)

Now, if they'd undeprecate .stop(), I'd be a happy coder, and I'd love to know the technical reason why they can't, because then maybe I could work around the crashes it's causing me in Eclipse.

Now, why should YOU (the java community) care about any of this?

Because Mother Nature loves efficiency, i.e. because if you don't, then some other language, a new, more elegantly designed language with less "scar tissue" (see the Alan Cooper book above), a language that was not initially designed to run your toaster oven, a language written by some team with no significant investment in Java, is going to come along and kick Java's a**. This would make me very sad, because I've now invested so many years in it.

So, now that Java's been open-sourced by Sun, will someone please, please, please undeprecate .stop()...and make it work as it was originally intended?

Ben
Sorry, but I don't agree. I see no effective difference between killing a thread from the parent thread (or from the Java virtual machine itself, i.e. one of the child's ancestor threads), and killing a unix child process from its parent process (or from the unix "machine" itself, i.e. one of the child's ancestor processes).

The unix authors could just as easily have deprecated the kill command, and forced everyone to write self-kill logic into each app, to protect them from killing apps that should not be killed. I don't see how unix processes are any more or less "self-contained" than java threads. Both are dependent on the skill of the developer. By that I mean, unix processes are simply running "threads" of code: the app you coded, third-party libs, or the unix kernel itself. I see no essential difference. Sure, if you kill the wrong thread, you can lock up your app or the entire operating system, but that would be your fault for not knowing your app's behavior well enough. For the most part, the unix authors protected unix from your app. I've very, very seldom seen the kill of an app process take down unix, and I've had to kill many apps I didn't write...although sometimes, I admit, I was holding my breath when I executed the kill command.

The jvm should protect itself from your app to an equal degree, in my opinion. Apps should be interruptible and stoppable from OUTSIDE the app, without special custom logic INSIDE the app. Otherwise, we're all moving backwards, not forward, in terms of state-of-the-art operating systems. (I consider the jvm to be an operating system within an operating system - thus the term "virtual machine".)

Also, it is not real-world to think that everyone "owns their own program" these days, if that's what you meant. In fact, I'll hazard a guess that about 95% of the time programmers are asked to hook into and communicate with code that is out of their control, from a code maintenance point of view (i.e. other apps in the company and third-party libs and frameworks)...even if they did have the time to refactor the app to gracefully accept a "stop" command.

To me, this just boils down to common sense: if you have an exterior start mechanism, you should have an exterior stop mechanism AT THE SAME LEVEL as the start mechanism.

To me, you both sound as if you are simply making excuses for a design flaw in Java. Read the article linked above (near the beginning of the article). This isn't just me saying this.

However, if you can give me a good technical reason why this cannot be done in java, a reason that does not simply expose another design flaw in it, then I'm all ears.

Ben
I was speaking more conceptually and philosophically than technically. It's the "less" in the "more or less" to which the analogy applies, i.e. you can do a lot of damage with a kill -9 if "care" is not taken (and if I remember my unix syntax). You have to know enough about the technical details of the process (or thread) you are killing to know whether it will or won't cause harm.

The problem with Java, unlike unix, is that the designers decided that they know better than the users what they should and should not be allowed to kill easily, so they shut off the stop switch because it didn't work.

If you read Alan Cooper's book "The Inmates Are Running the Asylum" (Cooper is the father of VB), you'll see this is one of the main points of the book. Essentially, if YOU are able to "start" something, YOU should have the power to "stop" it.

With java, it's like a car that has an ignition switch into which you can put a key, and start the car. No problem there. But if you try to turn off the car with the key, the engine catches on fire. So, the manufacturer recalls all the cars and disables the engine-stop functionality. They send an addendum to the instruction manual to all the owners, that if they ever want to turn off the engine, they have to open the hood, pull off all the spark plug wires, and then disconnect the battery.

Mr. Cooper's point, as I understand it, is that in other engineering disciplines (like, say, automobiles), such "workarounds" are completely unacceptable. Who would buy a car like that? In the world of software engineering, such workarounds are considered the norm and are "accepted practice". (Just look at the responses to this forum thread.)

Why is this important to you? Because the software tools you use (including languages such as Java) would be much more powerful in your hands if all software engineers kept his points in mind when designing and writing software...including MS Windows, and including Java.

Ben

Originally posted by Peter Chase:

...the thread doing the killing often has no idea what the thread being killed might be in the middle of.



How is this any different than killing a unix process?

Ben
This was a pretty good article explaining the issue:

http://www.forward.com.au/javaProgramming/HowToStopAThread.html

Notice under the "Suggested Methods for Stopping a Thread" heading, it shows the suggested code. I imagine this is what you meant by "periodically".

The potential issue (question/problem) I have with this is:

What if the code is executing down in the repaint() method? If that method has not somehow been "custom-coded" to handle the thread-stopping logic, will interrupt() signal that method to abort? I'm betting it won't, because it's not in the try/catch for the InterruptedException.
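To illustrate what I mean about the check being required: interrupt() just sets a flag, so a loop that polls the flag does stop, while a method that never looks at the flag just keeps running. A minimal sketch (class and method names are mine, just for illustration):

```java
public class InterruptCheckSketch {

    // A busy loop that cooperates: it polls the interrupt flag
    // on every iteration, which is the only reason it ever stops.
    public static long countUntilInterrupted() {
        long count = 0;
        while (!Thread.currentThread().isInterrupted()) {
            count++; // stand-in for real work
        }
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                countUntilInterrupted();
                System.out.println("worker saw the interrupt and exited");
            }
        });
        worker.start();
        Thread.sleep(100);   // let it spin for a bit
        worker.interrupt();  // sets the flag; the loop notices on its next check
        worker.join(1000);
        System.out.println("worker alive? " + worker.isAlive());
    }
}
```

If the loop body were instead a long call into something like repaint(), which never consults the flag, the interrupt would just sit there unnoticed until control came back out.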

After reading the article, this appears to me to be a major design flaw in the java language, that everyone's just working around as best they can, yes?

Ben

Originally posted by Henry Wong:
Unfortunately, the recommended way to cause a thread to exit is to "arrange for it to exit" -- meaning setting some kind of "exit" flag that will be checked by the thread periodically.

The interrupt() does indeed cause certain methods to exit with an InterruptedException, but since it is a checked exception, I am willing to bet that you probably just ignore it and tried again. Furthermore, it is not guaranteed to work. For example, on Windows, I/O is not interruptable.

The stop() method is deprecated because of the way it works, it causes the thread to throw a throwable that is not checked (or even caught, as most developers didn't know about it). This had the effect of causing threads to exit in states that were not usable -- and in rare cases, even messing up some JVM internals.


So... no short cut to do this. You have to design an exit strategy for your child thread.

Henry



Not sure what you mean by "periodically". Do you mean that I should scatter if-conditions throughout my code at various strategic points? or do you mean somehow(?) send a signal after x number of seconds?

How would I have ignored the interrupt? I don't think I added code to intentionally ignore it.

Could you possibly provide a short "exit strategy pattern" example?
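For the record, here's my best guess at the shape of the pattern being suggested, a minimal sketch with names of my own invention: the parent flips a volatile flag, and the child checks it once per loop pass (which I take to be the "periodically"):

```java
public class StoppableWorker implements Runnable {

    // volatile so the write from the parent thread is
    // guaranteed to be visible to the worker thread.
    private volatile boolean exitRequested = false;
    private int iterations = 0;

    public void requestExit() {
        exitRequested = true;
    }

    public int getIterations() {
        return iterations;
    }

    public void run() {
        // The "periodic" check: every pass through the loop looks at the flag.
        while (!exitRequested) {
            iterations++;
            try {
                Thread.sleep(10); // stand-in for a unit of real work
            } catch (InterruptedException e) {
                // Treat an interrupt as an exit request too.
                exitRequested = true;
            }
        }
        // Clean-up goes here; the thread exits in a known-good state.
    }

    public static void main(String[] args) throws InterruptedException {
        StoppableWorker worker = new StoppableWorker();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(100);
        worker.requestExit(); // "arrange for it to exit"
        t.join(1000);
        System.out.println("iterations: " + worker.getIterations());
    }
}
```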

Ben
Hi, everyone.

I do a lot of Java/J2EE work, but not much with the Thread class. I wrote a simple thread manager, which spawns two threads. Each of those threads then goes into an infinite loop, with a Thread.sleep(3000) in the loop.

I'd like to know if there's a way to kill one of the child threads (but leave the other one running) from the parent thread that spawned them.

In Eclipse, I first tried to simply 'Terminate' one of the child threads, but alas, that terminates ALL the threads, children and the main parent.

I've tried a child2.stop() in the parent thread (which has a reference to the Thread child2), but that crashed my Eclipse IDE pretty badly, so I'm not going to try that deprecated method again. I guess they MEANT it when they deprecated it.

I also tried child2.destroy(), but that threw an error...and didn't kill the child anyway.

I tried child2.interrupt(), hoping that would force child2 to throw some kind of interrupt exception, but no luck there.

In unix, it's quite simple to kill a given process. What's the trick to killing child threads in java?

One caveat: child2 can have a deep call stack, i.e. lots of method calls nested pretty deeply, so it may be, say, off in a jdbc call at the time I want to kill it.
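For the sleep-loop version described above, interrupt() actually can do the job, as long as the child's catch for InterruptedException exits the loop rather than swallowing the exception and looping again (if the catch is inside the loop and just ignores it, the thread keeps running, which may be what happened when I tried it). A minimal sketch, names mine; note it won't help the jdbc caveat, since a thread blocked deep in a driver call may not respond to interrupt at all:

```java
public class TwoChildrenSketch {

    // Child loop: sleeps repeatedly, exits when its sleep is interrupted.
    static Thread spawnChild(final String name) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        Thread.sleep(3000); // same shape as the loop in the question
                    }
                } catch (InterruptedException e) {
                    // Returning here is what actually ends the thread;
                    // catching inside the loop and ignoring it would not.
                    System.out.println(name + " interrupted, exiting");
                }
            }
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread child1 = spawnChild("child1");
        Thread child2 = spawnChild("child2");
        child2.interrupt();  // "kill" only child2
        child2.join(1000);
        System.out.println("child2 alive? " + child2.isAlive());
        System.out.println("child1 alive? " + child1.isAlive());
        child1.interrupt();  // tidy up child1 as well
        child1.join(1000);
    }
}
```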

Ben