Jesse Silverman

Ranch Foreman
Been programming "since forever" but Java was always a second (or third) language. It's moving to the front seat. I'm mostly harmless.
New York

Recent posts by Jesse Silverman

On page 133, we find, in the description of .replace() and .replaceAll() on the Map interface:

These methods are similar to the Collection version except a key is involved.



Seemingly minor, but a major objective of this chapter is remembering which interfaces and classes covered on the exam contain which methods, so not so minor:
The Collections helper class does have a replaceAll() method.

Additionally, the List interface has a .replaceAll() method, and its way to replace the element at an index is called .set() (which you could forget and refer to as .replace(), ouch!!).

These are all things in the core of what I would want to make sure I won't be tripped up on at exam-time regarding Collection types/interfaces and method calls.

I would make that sentence more specific or remove it entirely, as it is too easy to "remember" methods on classes or interfaces where they don't exist as it is.
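A quick sketch of the kind of thing I mean (the map and list contents are just made up for illustration), showing which type actually owns which method:

    import java.util.*;

    public class ReplaceMethodsDemo {
        public static void main(String[] args) {
            // Map: replace(K, V) and replaceAll(BiFunction) both involve a key
            Map<String, Integer> scores = new HashMap<>(Map.of("a", 1, "b", 2));
            scores.replace("a", 10);                      // only replaces if "a" is already present
            scores.replaceAll((key, val) -> val * 100);   // the BiFunction sees the key too

            // List: replaceAll(UnaryOperator) has no key, and positional replacement is set(index, element)
            List<String> names = new ArrayList<>(List.of("ann", "bob"));
            names.replaceAll(String::toUpperCase);
            names.set(1, "BEA");                          // there is no List.replace(int, E)

            // Collections helper class: replaceAll(List, oldVal, newVal)
            Collections.replaceAll(names, "ANN", "AMY");

            System.out.println(scores + " " + names);
        }
    }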

As both the book and earlier replies in this thread have stated, there is generally a huge, quite dramatic difference between buffered and non-buffered I/O performance.

If enough is known about the hardware, drivers, OS, the rest of the environment and the data to be encountered, further optimization can be obtained by tuning the buffer size.
That level of optimization may not be portable across environments.

The idea that once one figures out an optimum buffer size the work is done, and the Buffered* classes become unnecessary because one will then only read or write that much data at a time, really amounts to taking on the responsibility of buffering oneself, as I described earlier in the thread.

The buffered streams and readers and writers allow one to write code in a style natural to the logic of the problem space, rather than focusing the logic flow on a buffer size which is purely an artifact of the solution space, and not even a portable one at that.
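A minimal sketch of what I mean (the file name is just a placeholder): the loop is written in terms of lines, which is what the problem cares about, while the BufferedReader quietly handles its default-sized transfers underneath.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class NaturalLoop {
        public static void main(String[] args) throws IOException {
            // The code is organized around lines (the problem space),
            // not around whatever buffer size happens to be optimal (the solution space).
            try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line.length());
                }
            }
        }
    }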

I came from a lifetime of having to "do everything myself" (not in Java!!)
Having tried it more than enough, I don't wish to go back to that lifestyle.

The ratio of effort to results is just too high.
I will add that I certainly see why adding this to Java while maintaining full backward compatibility and working for all corner cases seems to be in-between "Very Difficult" and "Impossible":
http://openjdk.java.net/jeps/218

I won't hold my breath, but it is something I am reflecting on while learning Functional Interfaces and Streams to the standards of the OCJP 819.
There is a lot of "stuff" involved in doing everything efficiently while avoiding auto-boxing: stuff to learn, stuff to remember, stuff not to screw up while coding...
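A trivial sketch of the kind of detail I mean: the same computation can either box every value or stay in primitives, depending on which interface and stream flavor you reach for.

    import java.util.function.Function;
    import java.util.function.IntUnaryOperator;
    import java.util.stream.IntStream;
    import java.util.stream.Stream;

    public class BoxingDemo {
        public static void main(String[] args) {
            // Boxes: every value becomes an Integer object on the way through
            Function<Integer, Integer> boxedDouble = i -> i * 2;
            int boxedSum = Stream.of(1, 2, 3).map(boxedDouble).mapToInt(Integer::intValue).sum();

            // Stays primitive: IntUnaryOperator and IntStream avoid the boxing entirely
            IntUnaryOperator primitiveDouble = i -> i * 2;
            int primitiveSum = IntStream.of(1, 2, 3).map(primitiveDouble).sum();

            System.out.println(boxedSum + " " + primitiveSum); // 12 12
        }
    }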

I don't feel that having no primitives at all would be a better solution, as common as that approach is.
Among other things, I see that Oracle has extended the functionality of basic arrays to ease the pain...

Whatever, back to all the stuff.
3 days ago
While this formally violates the rule of "has nothing to do with Java", it does relate to "nothing of use to effectively programming in Java", so I'd rather bend this rule than pollute the Java forum.

As I review all the (many) primitive specializations of the standard functional interfaces in java.util.function, which, while consistent, have a taxonomy more commonly seen in biology than in practical computing...

I remember that these were expected to be able to evaporate when Project Valhalla arrived.

I also remember that we were originally expecting it some number of Java releases back.

I further recall that unless I remember incorrectly, one could only get to Valhalla as a fallen hero, i.e. dead.

This seems to me to be as unfortunate a nickname as if they had code-named the slowest release of Java/JVM ever seen "Project Turtle" or "Project Snail".

I'd still much rather be working in Java than in PHP, assembly or C, and probably JavaScript, so I am not dissing Java.

But we have been waiting to be able to use primitives in ways we could use reference variables, without just a torrent of auto-boxing and unboxing for some time now, haven't we?

Valhalla indeed.
3 days ago
Thank you Stephan:

This is useful for me, because there are several different nomenclatures floating around for what should apparently simply be referred to as:
formal type
and
actual type

I am afraid I used several of these terms, possibly mixing and matching.
I do prefer to be using the same terms as colleagues and those I ask questions of.
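A tiny sketch of how I will try to use those two terms from now on (the class and its names are invented):

    // In the declaration below, E is the formal type parameter:
    class Box<E> {
        private E contents;
        void put(E value) { contents = value; }
        E get() { return contents; }
    }

    class Usage {
        // Here String is the actual type (argument) supplied for the formal type E:
        Box<String> box = new Box<>();
    }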

Campbell Ritchie wrote:

Paul Clapham wrote:. . . the Oracle Swing tutorials are rather old-fashioned . . .

If only they would bring the Java™ Tutorials into the 2020s, all of them.



Or they could open-source them like the .NET documentation, which, if you need to, you can legally fork and maintain on your own... or just issue pull requests with the changes you'd like to see, the ones they acknowledged 18 years ago and never got around to fixing...
5 days ago

Campbell Ritchie wrote: It isn't principally abstract methods that it catches, since there already is a mechanism for forcing their overriding/implementation.



Sure, but if you wanted your class to implement three interfaces, and forgot one when defining it, @Override would remind you of that case as well.
That is, you did provide an implementation for the abstract method from some interface, but never declared your class to implement that interface.
I believe I have done that in the heat of coding!
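A sketch of that scenario (the interface and class names are invented for illustration): the method body is there, but the implements clause was forgotten, and @Override turns the mistake into a compile-time error instead of a silently unimplemented interface.

    interface Greeter {
        String greet(String name);
    }

    // Oops: "implements Greeter" was forgotten in the heat of coding.
    class PoliteGreeter /* implements Greeter */ {
        @Override  // compile-time error: greet() does not override or implement anything
        public String greet(String name) {
            return "Good day, " + name;
        }
    }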
5 days ago
Hint: the @Override annotation never does anything useful for currently working code.

But boy does it do wonders for code that contains any errors regarding implementation of abstract methods in interfaces or either abstract or concrete methods in subclasses!

Did you intend for your class Demo to implement interface A?  It does not.

When it does, you will still need an object of your class Demo to call your method; it is an instance method, not a static method.
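Sketching, since I can't see your exact code, something along these lines is what I mean:

    interface A {
        void show();
    }

    class Demo implements A {   // the implements clause has to be declared explicitly
        @Override
        public void show() {
            System.out.println("Demo.show()");
        }

        public static void main(String[] args) {
            // show() is an instance method, so an object of Demo is needed to call it
            new Demo().show();
        }
    }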
5 days ago
The quote that led us to suspect you still had residual confusion was your re-written, un-numbered version of Rule 2:

When an instance method of a class gets overridden and a (reference) identifier, which is declared to be of the class type, is used to access/reference the method, the compiler will construe that the overridden method in the class (not the overriding method in the subclass) is the method to be invoked at runtime; this rule applies whether the reference assigned to the identifier is for an object of the class or an object of the subclass.



That sounded/sounds like you believed that your example shown most recently, i.e.:



Would yield the output:
Method implemented in MyClass

Which it does not.  If you further updated your understanding/wording of that rule, we either failed to see it, or it did not post.
As it appears you are now no longer confused about this essential point of polymorphism, that is good and to be celebrated; but for those reading the thread in the future, what I repeated above, which was interpreted to be your understanding of the situation, was not correct.

Secondarily, but still importantly, the words:

the compiler will construe that the overridden method in the class (not the overriding method in the subclass) is the method to be invoked at runtime


are neither right nor, properly speaking, wrong; the compiler doesn't concern itself with which implementation of a non-final instance method will be invoked at all, as that decision only gets made at runtime.
We could perhaps say that the compiler "knows that whatever the most-overridden implementation of the method turns out to be for the particular instance involved in a given execution of that invocation, that is what will be invoked at runtime, but doesn't particularly care, as long as it has found a matching method declaration in the declared class of the reference variable."
This would be an exceptionally verbose way to describe the magic of Runtime Polymorphism, but would be accurate.

EDIT -- I think I see where the problem lies in the interpretation of your re-formulated Rule 2:

When an instance method of a class gets overridden and a (reference) identifier, which is declared to be of the class type, is used to access/reference the method, the compiler will construe that the overridden method in the class (not the overriding method in the subclass) is the method to be invoked at runtime;



If your understanding of what actually occurs in this case is correct, I would change the wording to be consistent with the descriptions of the mechanisms of polymorphism in the materials I have learned it from, at least some of which we have in common.
For the process known as "overload resolution", or more pompously, "compile-time polymorphism", the compiler during compilation finds exactly one overload, matched against the declared static type of the reference variable, that is the best fit to be called.  If it cannot find any valid match, or if it finds more than one that are "equally good" because none is a perfect match and it has no rule by which to select one, the compilation fails.  Otherwise, the compiler at that moment knows the signature of the method that will be called, and the compiler's work is done.

Later at runtime, each time that line runs, the most derived override of that previously chosen overload that is associated with the actual object instance will be the implementation that executes.  The actual runtime class of the object may very well not have been known to the compiler at the time the call was compiled; in fact, it may not even have existed yet at that point because it hadn't yet been written.
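A compact sketch of that two-phase story (the class names are invented):

    class Animal {
        void speak() { System.out.println("some animal noise"); }
    }

    class Dog extends Animal {
        @Override
        void speak() { System.out.println("woof"); }
    }

    public class TwoPhases {
        public static void main(String[] args) {
            Animal pet = new Dog();
            // Compile time: the compiler resolves speak() against the declared type Animal,
            // is satisfied that Animal has a matching method, and its work is done.
            // Run time: the most derived override for the actual object (Dog) executes.
            pet.speak();   // prints "woof"
        }
    }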

This is the magic of polymorphism described to death in an apparent attempt to take all the magic away, in hopes of better coding and exam scores.
You have some very long physical lines making the code harder to read.

One of those is:


Which, if you are using Java 7 or newer, could be written:
6 days ago
All of the stuff that I mentioned (except how C++ or C# work somewhat differently) is in scope for the 819 exam, which, judging from his materials, he is apparently preparing for.

I agree that the excessively complex rules aren't the best strategy for everyone, and I don't normally use them; but on the other hand, if someone asked for them, and you know how everything works, you should be able to write them out with some thought.  There are loads of questions where you have to answer "will this compile, and if so, what output gets produced?" that require the same knowledge as would be needed to write those things out (which nobody will ask you to do).

Both the Enthuware materials and the Sybex book break down the two-phase process of which things happen at compile time and which happen only at runtime; there is no way someone could pass the much earlier end-of-chapter questions without having all that clear in their heads, at least at that time.

Rob Spoor wrote:Note that I never said that compare and compareTo are interchangeable. compare belongs to Comparator and takes two arguments. compareTo belongs to Comparable and takes an instance to compare and another argument. What I said is that you can easily create a Comparable based on a Comparable.



What I think Rob meant there was:

Rob corrected wrote:What I said is that you can easily create a Comparator based on a Comparable.



To continue to remember which is which even when jumping around from language to language and not doing any comparisons for a long time, I always remembered Comparator as being short for "Custom Comparator".

When there are multiple different ways to possibly compare two objects of some class T, and one is considered natural or normal enough to implement Comparable<T>, but you may have good reason to sometimes sort them in other ways, you can use "Custom Comparators" to obtain those variant orderings where you need them.

I used to sometimes forget the difference between Comparable<T> and Comparator before that; now I never do.

Having more experience now, I think I wouldn't forget either anymore, because I recognize that Comparable, like Serializable or Cloneable or Closeable, denotes an inherent property of a class that implements it.  I still say "Custom Comparator" in my head though, and I haven't confused them for a very long time.
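A small sketch of that natural-ordering-plus-custom-Comparators idea (the Person class is made up):

    import java.util.*;

    class Person implements Comparable<Person> {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }

        // The natural ordering, an inherent property of the class
        @Override
        public int compareTo(Person other) {
            return name.compareTo(other.name);
        }

        @Override
        public String toString() { return name + "(" + age + ")"; }
    }

    public class SortDemo {
        public static void main(String[] args) {
            List<Person> people = new ArrayList<>(List.of(new Person("Bea", 25), new Person("Al", 40)));

            Collections.sort(people);                                       // natural ordering (Comparable)
            Comparator<Person> byAge = Comparator.comparingInt((Person p) -> p.age);
            people.sort(byAge);                                             // a "custom Comparator"
            System.out.println(people);
        }
    }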

6 days ago

Paul Clapham wrote:

Nyeng Gyang wrote:(2.) When an instance method of a class gets overridden, a (reference) identifier, which is declared to be of the class type and that is used to access/reference the method, will invoke only the overridden method in the class (not the overriding method in the subclass); this rule applies whether the reference assigned to the identifier is for an object of the class or an object of the subclass.



That still seems wrong to me. Let me try to apply the original example to that.

When an instance method of a class gets overridden... The instance method read() of the class Reader gets overridden.

a (reference) identifier, which is declared to be of the class type and that is used to access/reference the method... An identifier is declared and is used to reference the method: Reader in = new BufferedReader(); followed by int value = in.read();.

will invoke only the overridden method in the class (not the overriding method in the subclass)... The call to the read() is invoked by a BufferedReader object and so it invokes the overriding method in the subclass BufferedReader.

So under this interpretation, the rule is not correct.



What Nyeng is saying above can happen in C++ if a method is not marked virtual, and optionally in C# using a rare, arcane feature, but it is never what happens in Java.

I had suggested the link:
https://coderanch.com/forums/shingle/redirect/955
(How my dog learned Polymorphism)

But this is also gone over extensively in the Sybex book(s) many chapters back, as well as all the Enthuware materials I had read.

This is one of those times I think he just needs to go back and review that material.
If I recall correctly it would be impossible to get thru the end-of-chapter tests in the appropriate areas successfully while remaining confused about this issue.
(Muses): Am I super-patient or super-stubborn?  Is there an objective experiment that could be performed to tell the difference??

the context, in which the sentence is provided, suggests that the performance improvements discussed in that section of the book are those very same ones that we have agreed are not performance improvements offered by the buffered classes (i.e., the buffered classes do not provide the performance improvements, over the non-buffered classes, of enabling a programmer to copy larger data chunks).



If you use unbuffered I/O (which is sometimes required for certain purposes; there is a reason everything isn't buffered by default whether you ask for it or not), then:

If you read or write 8192 bytes at a time in one shot, Java will be transferring 8192 bytes in each (expensive) interaction it has with the Operating System and hardware.

If you read or write 2048 bytes at a time in one shot, Java will be transferring 2048 bytes in each (expensive) interaction it has with the Operating System and hardware.

If you read 128 bytes at a time in one shot, Java will be transferring 128 bytes in each (expensive) interaction it has with the Operating System and hardware.

If you read and write single characters, Java will be transferring one, two or four bytes in each (expensive) interaction it has with the Operating System and hardware.

So 8192 bytes of data written can mean somewhere between one and 8192 (expensive) interactions between Java and the Operating System and hardware.

If you Bite the Wax Tadpole and use BufferedReader/BufferedWriter, on the other hand:
If you read or write 8192 bytes at a time in one shot, Java will be transferring 8192 bytes in each (expensive) interaction it has with the Operating System and hardware.

If you read or write 2048 bytes at a time in one shot, Java will be transferring 8192 bytes in each (expensive) interaction it has with the Operating System and hardware.

If you read 128 bytes at a time in one shot, Java will be transferring 8192 bytes in each (expensive) interaction it has with the Operating System and hardware.

If you read and write single characters, Java will be transferring 8192 bytes in each (expensive) interaction it has with the Operating System and hardware.

So 8192 bytes of data written always means exactly one (expensive) interaction between Java and the Operating System and hardware.

I mentioned flushes for when you are in a rush (decreasing latency by sacrificing thru-put).  Why are flushes even a thing if you aren't using BufferedReader/BufferedWriter etc.?

From the javadocs:

Flushes this output stream and forces any buffered output bytes to be written out. The general contract of flush is that calling it is an indication that, if any bytes previously written have been buffered by the implementation of the output stream, such bytes should immediately be written to their intended destination.
If the intended destination of this stream is an abstraction provided by the underlying operating system, for example a file, then flushing the stream guarantees only that bytes previously written to the stream are passed to the operating system for writing; it does not guarantee that they are actually written to a physical device such as a disk drive.



So remember how the BufferedReader/BufferedWriter or BufferedOutputStream, etc. is ultimately wrapping some code we couldn't look at because it wasn't Java, but native code provided with the JDK?

A flush can be telling that native code we can't look at to make sure it isn't holding anything back or leaving anything waiting.
This is good if someone just can't wait for time-critical data until some big fat buffer fills up.
But for thru-put over the long haul, it is way more efficient to do things with an appropriate buffer size.
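For example (the file name and buffer size here are just placeholders): the second constructor argument overrides the default buffer size, and flush() pushes whatever is sitting in the buffer out early when latency matters more than thru-put.

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    public class FlushDemo {
        public static void main(String[] args) throws IOException {
            // 64 KB buffer instead of the default; purely a tuning choice, not portable wisdom
            try (BufferedWriter out = new BufferedWriter(new FileWriter("log.txt"), 64 * 1024)) {
                out.write("urgent status line");
                out.newLine();
                out.flush();   // don't wait for the buffer to fill; push it toward the OS now
                out.write("bulk data that can wait for the buffer to fill up normally");
                out.newLine();
            }   // close() flushes whatever remains
        }
    }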

For true optimization, you would need to base the buffer size on things you can't know in advance.  Hardware, OS, various settings, which file system you are reading or writing from...

The book makes clear, though, that ANY buffered stuff will be way, way, way faster, for the reasons I stated above.

Just ask to read the whole file in one shot and be done with it?  If you know it is small enough for that to work well, fine.
But Java is commonly enough used for enormous data sizes where that approach is just nuts.

Reading 896 GB of 80 character lines for an application that only actually needs to see one line at a time, for instance, is just nuts.

Paul Clapham wrote:I could also mention that you're building those rules based on the idea that the person reading the code can know the type of the variable and also the type of the object it refers to. But that isn't the case. Consider this code:



This code calls a method which returns an object of type Reader, presumably based on the parameters passed to it. But the reader can't tell whether that's actually a Reader or whether it's one of the many subtypes of Reader, and neither can the compiler. This means that your proposed concepts of "syntactic" and "semantic" type are not all that useful to a person trying to understand the code.



Well, if Reader were a final class, then both the reader and the compiler COULD know this.
But yeah, that is the whole POINT of polymorphism.
At its heart, polymorphism means to the compiler: "I don't know what the object will turn out to be at runtime, but whatever it may be at that point, it is for sure a Reader and can definitely do Reader things, and that is good enough for me as the compiler.  As long as you only ask to do Reader things that all Reader sub-classes can do, it isn't for me to judge or interfere.  The code's gonna do what the code's gonna do if and when it gets to that line at runtime, so whether it's Reader code or that of some sub-class of Reader that hasn't even been written at the time I am compiling, not my problem, man!  It's all good to me!"
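A sketch of the kind of call Paul was describing (the factory method here is hypothetical):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.Reader;
    import java.io.StringReader;

    public class FactoryDemo {
        // Neither the caller nor the compiler knows which Reader subtype comes back here
        static Reader openReader(boolean buffered) {
            Reader base = new StringReader("hello");
            return buffered ? new BufferedReader(base) : base;
        }

        public static void main(String[] args) throws IOException {
            Reader in = openReader(true);
            // The compiler only checks that Reader declares read(); which implementation
            // runs is decided at runtime by the actual object behind the reference.
            int first = in.read();
            System.out.println((char) first);   // 'h'
            in.close();
        }
    }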