Tim Holloway

Saloon Keeper
since Jun 25, 2001
Tim likes: Android, Eclipse IDE, Java, Linux, Redhat, Tomcat Server
Long-time moderator for the Tomcat and JavaServer Faces forums. Designer and manager for the mousetech.com enterprise server farm, which runs VMs, a private cloud and a whole raft of Docker containers.
These days, doing a lot of IoT stuff with Arduinos and Raspberry Pi's.
Jacksonville, Florida USA
Cows: 143 received (2 in last 30 days), 24 given
Likes: 1766 received (31 in last 30 days), 114 given (2 in last 30 days)

Recent posts by Tim Holloway

You're not likely to find my definition of optimization in the official Java docs. It's more of an axiom in general compiler design, and thus implied rather than explicitly stated.

While some of my older mainframe language manuals did stress that optimization was likely to affect resource usage - back then optimization was more likely to be an extra-cost option than the rule - these days I think we more or less take it for granted.
14 hours ago
Well! Small world! Are you pursuing a new career path here or just personal enrichment?

As you can see, I've been on the Ranch for a long time now. My actual primary focus is more Internet of Things than Java these days - designing and programming evil little gadgets - but I try to keep up.
15 hours ago

Himai Minh wrote:Hi,
I don't know much details about the code. But I believe Hibernate will automatically commit the changes to the DB after a transaction is completed successfully.
But we may also use Session.flush() to make sure changes are committed.
Reference about flush:
https://stackoverflow.com/questions/3220336/whats-the-use-of-session-flush-in-hibernate



It is important to realize that Hibernate operates in two flavours:

1. Legacy (proprietary) Hibernate, which is based on Session
2. Hibernate JPA, which implements the Java EE standard Java Persistence API (JPA), originally defined as part of the Enterprise JavaBeans subsystem. JPA uses an EntityManager.

I recommend using Hibernate JPA, not legacy Hibernate. Legacy Hibernate may go away some day and probably will not be as well maintained as Hibernate JPA. Plus, JPA is a portable standard API. I have had occasion to switch between Apache OpenJPA and Hibernate JPA in order to gain features and/or dodge bugs, and switching between two implementations of the standard is a lot simpler than switching in and out of a proprietary interface.

The flush() method pushes the contents of the Hibernate buffers out to the database server. JPA EntityManager flush() and proprietary Hibernate Session flush() are pretty much identical on that point. But that doesn't mean that fetching data from the database will return the flushed-out data. Here you have to distinguish between the local Transaction and database transaction contexts. If the local Spring Transaction context being used initiates a database Transaction context, then only when the database Transaction is committed will any other database-reading processes see the changes. If Spring initiates a database Transaction automatically, then it will automatically do a database Transaction commit when the Spring framework Transaction is committed.

Spring does offer several different types of Transaction contexts, however, and some of them may allow multiple transactions/commits to be done within a single Spring Transaction. It all depends on how you set things up.
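
To make the distinction concrete, here is a minimal sketch of the JPA (EntityManager) flavour. The persistence unit name "demoUnit" and the Customer entity are made up for illustration; in newer releases the imports are jakarta.persistence rather than javax.persistence.

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;

public class FlushDemo {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demoUnit");
        EntityManager em = emf.createEntityManager();
        EntityTransaction tx = em.getTransaction();
        try {
            tx.begin();
            Customer c = new Customer("Alice"); // hypothetical mapped entity
            em.persist(c);
            em.flush();   // pushes the pending INSERT out over the connection...
            tx.commit();  // ...but other sessions only see it once this transaction commits
        } finally {
            em.close();
            emf.close();
        }
    }
}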

16 hours ago
You're not likely to. Compiler optimizations are almost entirely at the discretion of the compiler implementer, not part of the language definition. By definition, an "optimization" that does not exactly replicate the functionality (though not necessarily the timing and memory usage) of the code it replaces is a defective optimization (a/k/a bug) and not a true implementation of the language itself.

Also note that in many compilers, the aggressiveness of the optimizer and the techniques it will employ can be fine-tuned - for example, the "client" and "server" optimization levels on the Sun JVM.

But remember that trying to out-think the compiler is premature optimization. And if you try to optimize instruction sequences manually, you're actually likely to short-circuit optimization options that the compiler could otherwise take advantage of. It's best to write straightforward code and trust the compiler, and only AFTER a performance problem is seen should you start worrying about instruction-level optimization. Which leads to the 2 most fundamental rules of optimization:

1. The bottleneck is never where you "know" it's going to be (trust me, I've spent years observing this!)

2. Picking a more appropriate algorithm will speed things up much more than instruction twiddling 99.9997% of the time. As I frequently mention, I once optimized a process by using a "less efficient" sort algorithm because it performed optimally for the data it would work with whereas the "more efficient" sorts would all see that data as worst-case.
18 hours ago

Ankit Garg wrote:
As much as I know about memory model, when only a single thread is involved, any caching of values cannot break documented behavior. If multiple threads are involved obviously things get much more complicated.



And you should never assume that a JVM is only running one thread. In some JVMs, I think, the garbage collector has its own thread, and other internal housekeeping threads are also possible. I would venture to suspect that even in a totally single-threaded environment, the memory management system's use of internal unused-block links would be subject to random variation, thus putting some "fuzz" into the memory subsystem.

Ankit Garg wrote:
My problem with my colleagues explanation is, by his logic any loop can become infinite if the expression value is cached. For example this code can be an infinite loop if the return value of isEmpty call is cached:



The expression value cannot be cached unless the compiler can definitely determine that the code is 100% idempotent. Just because you unroll a loop doesn't mean that you can ignore side-effects. Also, recall that I said that loops can only be unrolled if the compiler can determine that a fixed number of iterations will always be made. Once you add additional loop termination conditions, unrolling is no longer really viable - the whole point of loop unrolling is that you can delete the increment/decrement and test, and keeping the test even if you could avoid the increment would not save nearly as much. Indeed, on some machines, the test is more overhead than the incrementing.
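
To illustrate (my example, not from the thread): the canonical case where a cached read really does bite you is a busy-wait on a plain field. Without volatile, the JIT is permitted to hoist the read out of the loop and spin forever; marking the flag volatile forces a fresh read each iteration.

public class StopFlag {
    private static volatile boolean running = true; // remove volatile and the loop may never see the update

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work
            }
            System.out.println("worker stopped");
        });
        worker.start();
        Thread.sleep(100);
        running = false; // visible to the worker because the field is volatile
        worker.join();
    }
}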
18 hours ago

Angus Ferguson wrote:Hi all,

I created a project in Eclipse with name digital_marine22 (I pushed it to Github) then I renamed it in local to digitalMarineHS.

Now I would like to rename it also in Github.



There is room for confusion there, but chances are high that the Eclipse project name/Eclipse project folder name (not necessarily the same thing) was digital_marine22, and that it was then published to GitHub as digital_marine22.

I'm making the (possibly incorrect) assumption that the project name/project folder name was then changed in Eclipse to digitalMarineHS, and that what you want now is to rename the GitHub project digital_marine22 to digitalMarineHS.

That would actually require two operations. One would be to use the GitHub "Danger Zone" to rename the project on GitHub. But if you left it there, then future push/pull requests from the Eclipse project digitalMarineHS would still be trying to work with the now-undefined GitHub digital_marine22.

So in addition to renaming on GitHub, you would also have to alter the "git remote" URLs for the local copy of the project to reference GitHub digitalMarineHS.

At least that's how it works on Gogs, where the Git repository name is part of the URL. And I'd expect that GitHub works the same, simply because it's simpler that way.
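
For the record, the local side of that second operation is a one-liner (the user name here is just a placeholder):

# point the existing local clone at the renamed repository
git remote set-url origin git@github.com:YOUR_USER/digitalMarineHS.git

# verify
git remote -v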

Campbell Ritchie wrote:Doesn't that mean, after the superclass' constructor has completed and before any code in the current object's constructor(s) is executed?



That would seem logical. Otherwise the anonymous initialiser code wouldn't be able to reference superclass properties, and that would be rather untidy.
19 hours ago
I think Angus actually wants to rename the project on GitHub itself. I haven't used GitHub lately, since I have my own GitHub-like server (Gogs), but Gogs certainly does have an administrative function to do that.

It's worth noting that while the project name in Git typically matches the pulled project directory name, it does not have to. There's a "git clone" option for using a different name locally. The project name on the server isn't literally a "directory name"; it is, in fact, a project name.
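
For example, something like this clones the server-side project into a differently-named local directory (user name again a placeholder):

git clone https://github.com/YOUR_USER/digital_marine22.git digitalMarineHS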
You should never code with the expectation of an OOM other than to avoid one. Certainly you should not expect an OOM to be totally predictable, since it's the result of the sum total of everything running in the JVM plus the JVM's memory settings.

An OutOfMemoryError is generally NOT considered a mark of "valid code". It's more of an admission that you don't have things under control, and it's really not something to trap and recover from, since the overhead of the recovery process can itself trigger further OOMs. So an OOM is almost universally instantly fatal.

There are several forms of loop optimization, but I think the one you're referring to is "loop unrolling". Since an increment/decrement and test take up a small, but finite amount of overhead, for small loops it's more efficient to simply replicate the loop body code and eliminate those parts. Note that in order to do that you have to be able to predict at compile time (or at least before execution) just how many iterations would be required, since you're generating a fixed block of code. Also, there's a limit on the count, since the amount of code generated is going to be "n" times the code in the loop body. For small loop bodies and small iteration counts (less than 10 or 20), the percentage of extra code versus true iteration is small, but as things get bigger, that advantage is offset by extra memory usage.
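
As a purely illustrative sketch (the JIT does this for you; don't write the unrolled form by hand), the transformation amounts to something like this:

public class UnrollDemo {

    // rolled form: increment and test on every iteration
    static int sumRolled(int[] a) {
        int sum = 0;
        for (int i = 0; i < 4; i++) {   // iteration count known before execution
            sum += a[i];
        }
        return sum;
    }

    // unrolled form: the loop bookkeeping is gone, at the cost of more code
    static int sumUnrolled(int[] a) {
        return a[0] + a[1] + a[2] + a[3];
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sumRolled(data) + " == " + sumUnrolled(data));
    }
}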
20 hours ago
Nameless initializer blocks allow complex initialization of classes and class instances independent of constructors.

Effectively, nameless static initializers are executed in the order that they appear when the class is first loaded. Nameless non-static initializers are processed similarly, except that in effect they are all bundled up into an invisible pre-constructor method that gets invoked before the actual constructors (if any) start processing - or, if there are no explicit constructors, simply whenever an object is instantiated.
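
A small sketch of both kinds (my own example, not from the thread):

import java.util.ArrayList;
import java.util.List;

public class InitDemo {
    static final List<String> DEFAULTS = new ArrayList<>();

    // static initializer: runs once, when the class is first loaded
    static {
        DEFAULTS.add("alpha");
        DEFAULTS.add("beta");
    }

    private final long createdAt;

    // instance initializer: runs for every new object, before the constructor body
    {
        createdAt = System.nanoTime();
    }

    public InitDemo() {
        // createdAt has already been assigned by the time we get here
        System.out.println("created at " + createdAt);
    }
}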
20 hours ago

Sam Peterson wrote:On page 202 in the book 'Certified Associate Java SE 8 Programmer I' by Jeanne Boyarsky and Scott Selikoff, there is a code sample that uses final fields in a constructor:
Since when can you pass parameters into a constructor without declaring them as instance variables first?



Since always. Modern compilers will generally whine if you never actually use parameters, whether directly or indirectly, but never in any programming language that I've ever heard of were parameters required to be assigned to fields in any context.

Note that the difference between "const" (like in C/C++) and "final" is that a const must be assigned at the point of definition, whereas a final is only required to be assigned before its first use - or, in the case of a constructor setting a final field, by the end of the constructor, whichever comes first.
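
A minimal sketch of the pattern the book is showing (the names here are mine, not the book's):

public class Animal {
    private final String species; // blank final: must be assigned exactly once

    public Animal(String speciesName) {   // the parameter is never stored anywhere...
        this.species = speciesName;       // ...until we choose to assign it, before the constructor ends
    }

    public String getSpecies() {
        return species;
    }
}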

20 hours ago
The Class.forName() method invokes the classloader to cause a class to be brought into memory. It's sometimes used, for example, to determine if a particular named class is actually in the classpath, but it can be used any time you want to fetch a class and prep it for use and/or introspection. It used to be used by JDBC before JDBC got smarter.
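
For example, a common idiom for the "is this class on the classpath?" check (the class name below is just an arbitrary example):

public class ClassPresenceCheck {
    public static boolean isPresent(String className) {
        try {
            Class.forName(className);   // asks the classloader to load the named class
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isPresent("com.fasterxml.jackson.databind.ObjectMapper"));
    }
}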

I don't see much point in letting users guess screen size, though. Pretty much any graphics system has a method that can be used to query screen sizes. Trusting users can get you into big trouble.
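
In AWT, for instance, the query looks roughly like this (other toolkits have their own equivalents):

import java.awt.Dimension;
import java.awt.Toolkit;

public class ScreenSizeDemo {
    public static void main(String[] args) {
        // throws HeadlessException in a truly headless environment
        Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
        System.out.println(screen.width + " x " + screen.height);
    }
}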

20 hours ago
Welcome to the Ranch, Bikasit!

I believe that transaction propagation (joining an existing transaction rather than starting a new one) is the default in Spring, although I could be wrong.

And I'm not sure if I understood the question, but are you attempting to query the database while a transaction is in progress and expecting the changes you've already made to be returned? A transaction isn't "real" until it's committed. Work in progress cannot be trusted as it may actually be sitting somewhere waiting for the commit and not yet in the database at all.
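
For illustration only - the repository and entity names below are hypothetical - Spring's default propagation is REQUIRED, and nothing becomes visible to other connections until the outermost @Transactional method commits:

import java.math.BigDecimal;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    private final AccountRepository accounts; // hypothetical Spring Data repository

    public AccountService(AccountRepository accounts) {
        this.accounts = accounts;
    }

    @Transactional  // propagation = REQUIRED by default: join the caller's transaction or start a new one
    public void transfer(long fromId, long toId, BigDecimal amount) {
        Account from = accounts.findById(fromId).orElseThrow();
        Account to = accounts.findById(toId).orElseThrow();
        from.setBalance(from.getBalance().subtract(amount));
        to.setBalance(to.getBalance().add(amount));
        // reads inside this method see the pending changes; external readers only see them after commit
    }
}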
1 day ago
Thank you for your detailed analysis.

Actually, I'm surprised that you didn't get pushed to log4j2 long before. There are a lot of other ways that log4j has made its obsolescence known to me besides performance - some of them very annoying indeed.

Moral of the story, as in so many cases, is that software may not "wear out", but that doesn't mean it lasts forever. People who think that the up-front development cost is the total cost are simply wrong.
1 day ago
Here's something that everyone looking for page performance should be aware of. Most modern web clients do not make their requests serially - in other words, fetch CSS #1, then JS #1, then CSS #2, and so forth. Instead, they run multiple requests in parallel. Last time I checked Firefox, for example, it would have up to 10 requests in progress by default.

And, since network processes are indeterminate in duration, that means that the exact order in which they complete is also indeterminate.

So when you absolutely positively must assure that some things are there when other things need them, you have two choices:

1. Have the need-er include logic that causes it to wait on the need-ee.

2. Set up a dependency chain as Stephan demonstrated, so that the need-er doesn't begin to get fetched until the need-ee is ready for it.

Either option is good. Which one is best for your situation may vary.

This sort of thing dates way back. Nothing is more annoying than trying to manipulate a DOM that hasn't been fully built yet.