
Matthew Bendford

Member since Dec 01, 2020

Recent posts by Matthew Bendford

Actually, as someone who has still been using Notepad and cmd for the majority of my work after 15 years, I always encourage learning your first language without an IDE.
Why? Two reasons:
1) to learn what black magic actually happens under the hood
2) to learn your way around without autocompletion and code generation

Yes, if you already have experience with another language and know the stages (pre-processing, compiling, maybe even linking) it takes to get a final binary from some sources and make money with it - then an IDE is a required all-day tool to get your job done.
But if you're new to the scene you first have to learn how to crawl before you can stand up and walk around.

To me, being able to copy some source into an editor and compile it from the command line is a required skill. How often do we see beginners struggle with simple requests like "cut it down to the most basic compilable example - what's the output?", or: "when I click on a *.jar 'nothing happens'" - "run it from the CLI" - "how do I do that?". It's a basic skill you just HAVE to know when you start programming - or, as I like to put it: "if you want to tell that magic machine what to do, please first make yourself familiar with how to use it in the first place".
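For a beginner following along, the whole edit-compile-run loop fits in a few lines; a minimal sketch (the file name Hello.java is my choice):

```java
// Hello.java - from the command line:
//   javac Hello.java   (produces Hello.class)
//   java Hello         (runs the main method)
public class Hello {
    static String message() {
        return "Hello from the command line";
    }

    public static void main(String[] args) {
        System.out.println(message());
    }
}
```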

Or to put it another way: we have all seen so many threads like "I have a deadline tomorrow and our teacher taught us nothing" ... and I'm just sick of asking for a simple example and watching many fail even that, because "yeah, mostly the IDE does what the prof told us", without understanding what's actually happening behind the scenes.
Part of that also comes from personal job experience over the past 10 years: so-called "professional devs" who don't understand a thing about my job get about twice the salary and fail in so many ways (as a "recent" example: some change back in April this year(!) caused our customers to have issues with their banking account information, and the company lost quite some money covering direct debit returns - OK, I guess it's just cheaper to pay a couple of minimum-wage customer service employees than to roll back whatever was changed in April - we have the money for it, because we're one of the four nuclear power plant operators in Germany with a two-digit million figure of customers - who gives a damn?).
Seriously - I've been a hobbyist dev for the past 15 years and I'm able to fix a simple Android app within minutes to hours - this issue has been going on for OVER HALF A YEAR!
3 weeks ago
Aside from a few errors and missing exception handling: a catch block should always contain at least a Throwable.printStackTrace() or some sort of logging.
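What I mean by "at least printStackTrace or logging" - a minimal sketch (class and method names are my own, not from the thread's code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class CatchExample {
    private static final Logger LOG = Logger.getLogger(CatchExample.class.getName());

    static int parseOrDefault(String input, int fallback) {
        try {
            return Integer.parseInt(input);
        } catch (NumberFormatException e) {
            // never swallow silently - log it (or at least e.printStackTrace())
            LOG.log(Level.WARNING, "could not parse: " + input, e);
            return fallback;
        }
    }
}
```

An empty catch block hides the failure completely; with the logger the stack trace ends up somewhere a human can find it.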

I also don't really see how the thread could fail: you open the connection and immediately send the data. Checking for an error on the write() would only make sense for really big data - otherwise it can be considered a somewhat atomic call.
There may already be solutions out there that fit your needs, depending on the protocol used. Implementing interruption handling on your own first requires understanding the possible failures and their causes. If the remote endpoint just doesn't expect the size of your message and hence errors out due to badly written exception handling, simply resending it would result in the same error - and with that in an endless loop.

Maybe there's something else I'm missing, given the example.

Campbell Ritchie wrote:Making a cylinder extend a circle is incorrect inheritance; you cannot say that a cylinder IS-A circle.

If you project a cylinder from 3D into 2D it very well is both a circle and a rectangle.
Although usually the circular face is only drawn if it contains additional information that can't be shown along the other axis - a crankshaft is a very common example: to correctly specify the angle between phases one has to draw it along the y axis, so its left side face becomes the new front face. Otherwise one would always have to assume a fixed angle - which is not allowed in technical drawings.
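Whichever way the projection argument goes, in Java code the usual alternative to extending Circle is composition - a Cylinder HAS-A circular base. A minimal sketch (names are mine, not from the thread's actual code):

```java
class Circle {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

// Cylinder HAS-A Circle instead of IS-A Circle:
// it can reuse area() without pretending to be substitutable for one.
class Cylinder {
    final Circle base;
    final double height;
    Cylinder(Circle base, double height) {
        this.base = base;
        this.height = height;
    }
    double volume() { return base.area() * height; }
}
```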
4 weeks ago
More interesting is this part of the question: which old version did you upgrade from?
Also: adding TLS 1.2 explicitly shouldn't be required on a modern JVM - which Java version do you use? I suspect an old Java 8, or maybe even some way older Java 7 or such ...
1 month ago
That'll break the moment the Content-Length is either not provided or wrong.
Yes, an HTTP server should provide a correct length header - but it can be wrong or missing entirely.
Whoever gave you this task should still slap you for that bad implementation.

Also: when dealing with HTTP it's way easier to just use URL and let the already provided implementation deal with the rest. As for writing your own HTTP server - DON'T - use existing implementations like Apache Commons.
1 month ago
Two things:

1) DON'T use POP3 with the SocketFactory set to SSLSocketFactory - use POP3S instead!
2) Adding this security provider shouldn't be required - this looks like some very old JVM - you should consider upgrading to a somewhat more modern Java.

As an addition: 3) You don't have to tinker with breaking the PKI cert path checking - just add the root CA certificate to your truststore.

Also: use IMAP in favor of POP3 - it has the advantage that the e-mails are not deleted from the server when they're downloaded to the client. If your mail system doesn't offer or support IMAP, get in touch with your IT admins.
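For point 1, the switch is mostly a matter of session properties; a sketch of what I mean, assuming JavaMail/Jakarta Mail style property names (the host value here is a placeholder):

```java
import java.util.Properties;

public class MailConfig {
    // Builds session properties for an implicit-TLS mail store ("pop3s" or
    // "imaps") - no manual SSLSocketFactory tinkering needed.
    static Properties secureStoreProps(String protocol, String host) {
        Properties props = new Properties();
        props.setProperty("mail.store.protocol", protocol);
        props.setProperty("mail." + protocol + ".host", host);
        return props;
    }
}
```

These properties would then be passed to Session.getInstance(props) before calling session.getStore().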
1 month ago

Tim Moores wrote:

Matthew Bendford wrote:If you need to transfer data use proper protocols like http or ftp.

There's a place in this world for socket communications where trying to fit it into HTTP doesn't necessarily make sense. And FTP should be taken out and shot :-)

Well, implementing a whole HTTP stack may be too much for some situations, I agree, and FTP has its flaws (although it can still be set up in a rather secure way) - but they are decades-old protocols and both server and client implementations are available in many languages. Reinventing the wheel instead of using existing ones also requires some justification.

As I mentioned: don't use that code this way - it has quite some flaws and many resource leaks. Also: doing the same thing three times? Use a method.
I suspect the issue is in the server code. SSL/TLS is meant to be fully transparent to the connection and sits between the TCP connection and the HTTP layer. So first check whether your code works without TLS. If so, adding TLS shouldn't change anything.

Btw: DataInputStream and InputStream.available()? That already sounds wrong. Read again about how HTTP works and what InputStream.available() actually reports back. Replace it with reading the Content-Length header and a counter loop.
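What I mean by "a counter loop": read exactly Content-Length bytes, looping until you have them all, instead of trusting available(). A sketch (the ByteArrayInputStream in the test is just an in-memory stand-in for the socket stream):

```java
import java.io.IOException;
import java.io.InputStream;

public class BodyReader {
    // Reads exactly contentLength bytes from the stream.
    // A single read() may return fewer bytes than requested,
    // so we keep a running total until the body is complete.
    static byte[] readBody(InputStream in, int contentLength) throws IOException {
        byte[] body = new byte[contentLength];
        int total = 0;
        while (total < contentLength) {
            int n = in.read(body, total, contentLength - total);
            if (n == -1) {
                throw new IOException("stream ended after " + total
                        + " of " + contentLength + " bytes");
            }
            total += n;
        }
        return body;
    }
}
```

DataInputStream.readFully() does essentially the same thing, but spelling it out shows why available() is the wrong tool here.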

Also: show us the server code please. Everything else is guessing into the blue.
1 month ago

Prajanya Shrestha wrote:Can I get the working code please.

Aside from the fact that this forum doesn't work that way, I highly recommend against what was used back then. If you need to transfer data, use proper protocols like HTTP or FTP.
this - can't be null
this.getClass() - can't be null
this.getClass().getClassLoader() - can't be null
this.getClass().getClassLoader().getResource(String) - can be null
So I would start there: what does getResource(String) actually get you before you call .getPath() on it? Do a simple sysout to see if it's null. Progress from there onwards.
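The sysout check I mean, sketched (the resource name is a placeholder):

```java
import java.net.URL;

public class ResourceCheck {
    // Prints and returns what the class loader finds for the given name.
    // null means: the resource is not on the classpath under that name.
    static URL locate(String name) {
        URL url = ResourceCheck.class.getClassLoader().getResource(name);
        System.out.println(name + " -> " + url);
        return url;
    }
}
```

Only once this prints a non-null URL does it make sense to go on and call methods on the result.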
1 month ago

Jesse Silverman wrote:I hope Matt doesn't mind the semi-hijacking of the thread ...

Not at all. Please - keep them coming.
As some of you mods know me: I'm often the one unintentionally hijacking threads - how many threads had to be split off because of me? About a dozen? I'm sorry.
[cut out about half a page and about 2 hours of re-writing while watching Star Trek 1]
While re-writing and re-reading some topics I cut quite a few lines - I guess my initial question and all of your quite interesting replies (thank you all - I learned A LOT from them) come down to my misunderstanding not of how C (the whole family) differs from Java, but of how much they have in common - and hence how much their tool chains do.

So, I guess my initial question isn't about the compilers themselves but rather about the build tools around them.

So let me re-phrase it: from what I learned from your replies, javac is better compared to cc, while Maven is about what Make is - a build tool.
So it's about the build tool "orchestrating" the actual tool chain to do as much independent stuff as possible.

As a real world example: I've been using Apache James since its 3.0.0-beta5, a couple of years ago - today we're up to 3.7-RC - and for some reason I don't know myself I always cloned the Git repo and built it myself; at first because up to about 3.2 or so no binaries were released by the team, only the "finalized" tags as Git snapshots. Over the years the number of sub-packages increased from about 120 or so to over 250 today, mostly due to splitting several of the earlier packages into smaller ones. But Maven still builds them one by one, one after another, although several packages are independent of one another and hence could be built in parallel.
To speed up this process and make use of modern multi-core (or, as on my root server, multi-CPU) machines, that would have to be figured out upfront. I'm not sure about the differences between Make and Maven - but by splitting huge projects into several small parts it's possible to parallelize them - and Make seems to do quite a good job of it, while Maven just reads in a couple of lines imperatively and works through them in a queue-like fashion.
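For what it's worth, Maven 3 does have a parallel build mode for exactly this case - the reactor works out which modules are independent and builds those concurrently. A command/config sketch (whether it helps depends on the project's module graph):

```shell
# "1C" = one build thread per CPU core
mvn -T 1C clean install

# or persist it per project in .mvn/maven.config (Maven 3.3+),
# containing just the line:
#   -T 1C
```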

I'm sorry I had to cut out so much, so this reply may sound even more confusing than my previous one - but I don't want to spam this forum with nonsense based on my misunderstanding.
1 month ago
So, it seems there's quite a misunderstanding on my side that you gave me some input on. I guess this comes down to the fact that I'm not used to build tools in general.
May I give it another try:
I'm aware that Make is just another build tool, like Maven, Ant and whatever they're all called - a "higher level controller" that drives maybe multiple compiler and/or linker instances like cc and ld. The same goes for Maven and Java, I guess.
From your input it seems I still have a lot to learn - not just about Java or C/C++ but about build tools in general. Although I've been using Java for about 15 years or so, I never really got myself into using an IDE or build tools - I still compiled one of my biggest projects, about a dozen classes plus a couple of interfaces, from a terminal using a single compiler instance. Well, at least it worked and I got a working Battleships with network support ...
Another point I don't seem to know enough about yet is how interfaces in Java compare to header files in C/C++ - so I may be on the wrong track here. I'm a big fan of using interfaces, so I think I at least somewhat understand the difference between writing code against an interface and writing code against a specific implementation (the well-known List<T> list = new ArrayList<>(); comes to mind) - but it seems C/C++ is far more focused on that, as that's how different source files get information about each other.
In an example project I based mine on, I looked into the sources and headers and noticed that although the implementing source file contains a lot of extra stuff, the header that gets included by a different source only exposes very few functions - and thus that's all the second source knows about the first one. It also seems common practice that each source has a companion header - but I guess that comes from the function prototyping that has to be done before an actual implementation. I still haven't figured that part out yet.

So, if I understand your replies correctly, it would also be possible to parallelize Java compilation by using interfaces as a kind of header, with each implementation compiled only against those.

One thing I still have to ask: as Java is object oriented, one has to get an instance of a specific implementation - even if the type in the source is just the interface. In C++ this seems to work differently.
As an example:
Given an interface "Plugin", a specific implementation "PluginImpl" and an overall main driver class "Main": even when all code within the main driver class only uses the interface "Plugin" as the type, at some point in the code there has to be a specific Plugin plugin = new PluginImpl(); - which "hard links" the main driver to the specific interface implementation.

According to some sample C++ code, the "main.cpp" includes a generic header, called "ext.hpp" in this example, which is implemented by a specific "ext.cpp". But in contrast to Java, in C++ the call is still just Ext *extension = new Ext(someString);. So the main driver is compiled only against the header, and it's the linking step that connects it to an actual instance of the implementation.

Back in Java, I'm aware of ServiceLoader (which, as one example I know, is used by JDBC drivers) - it gives back a loosely coupled instance of a specific implementation of an interface. But digging through the source of ServiceLoader and its reflection magic, all it does is call Class.forName(someString).getDeclaredConstructor().newInstance() - which itself under the hood works a bit like Class.getDeclaredMethod("<init>").invoke() (a bit more complicated, but it comes down to that).
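That reflective core can be sketched in a few lines (the method name is my own; the test instantiates a JDK class rather than a hypothetical PluginImpl):

```java
public class PluginLoader {
    // Loads a class by name and returns it cast to the requested interface -
    // essentially the reflective heart of ServiceLoader, without the
    // META-INF/services lookup around it.
    static <T> T load(String className, Class<T> iface) throws Exception {
        Object instance = Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();
        return iface.cast(instance);
    }
}
```

The caller only ever sees the interface type; the implementation class name could come from a config file, which is exactly the decoupling ServiceLoader formalizes.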

I still don't seem to get the connection (or rather the de-coupling ... sorry for that bad pun) between how headers work in C/C++ and how that compares to or differs from interfaces in Java - and hence may still not fully understand why parallelizing works better in C/C++ than in Java - although with specific design patterns it looks like it can be done about equally well. It's probably just not common, hence not used much, and so information is hard to find.

Sorry for this somewhat weird reply - I still struggle to comprehend what you wrote and to wrap my head around it. Maybe I just need a bit more time on this maybe-not-so-important topic.
1 month ago
So, as I'm tinkering around with Linux a lot lately, I'm also compiling a lot from source. I noticed one can speed up this process by tuning the -j option to specify how much can be done in parallel. I then tried to wrap my head around this and to search for more in-depth info about how it works and why it's possible in the first place. As far as I understand - and I may already be wrong here - quite a lot can be done simultaneously because it can be done independently: like preprocessing and compiling individual source files into object files, thanks to those additional header files.
In Java, on the other hand, if I want to make use of an interface I have to compile it first before any class implementing it can be compiled. If I want to compile a class that uses other classes, I have to compile the used classes before I can compile my class. This can get quite complicated and lead to a long chain dictating the order in which interfaces and classes have to be compiled.
Is it because of that structure that compiling Java source can't be parallelized as well as C/C++? Or am I misunderstanding something here?
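One detail worth noting on the ordering question: javac doesn't force you to compile things by hand in dependency order - given all sources at once, it resolves the interface/class dependencies internally. A command sketch (the src/out paths are placeholders):

```shell
# pass every source file in one invocation; javac works out
# the compilation order between interfaces and classes itself
javac -d out $(find src -name '*.java')
```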
1 month ago
Although this starts to get off-topic (sorry, I didn't mean to hijack), as you're all tagged with the staff badge I guess one of you may decide to split this off eventually.

Back to my point: I agree with you, given my current knowledge and concerns, but even running a DNSSEC- and DNS-over-HTTPS-enabled local full recursive resolver is still no guarantee of safety against the mentioned attack vectors. DNS spoofing is possible at all levels, even up at Verisign as the root zone maintainer. Sure, we can all witness the key ceremonies live on YouTube - but what about the US administration telling Verisign to make some changes to the .com TLD, which they also maintain? They're also a major player in the TLS PKI industry. So with just one order the US government could silently and fully take over any .com domain while still maintaining full DNSSEC and providing a valid certificate.
Seen from this angle there's effectively no difference between a well-secured site and another one without any security at all. That's why, at least for me, as long as I just read stuff from such a site without actively interacting with it, like sending over user credentials, it doesn't matter whether it's secured or not.
Don't get me wrong: I have my own domain secured pretty well (at least I hope so), and all I've used it for over the past years is my own e-mail and hosting a bit of stuff. I don't really need it and it sometimes bothers me to maintain it - but aside from a maybe worse anti-spam score it wouldn't affect my domain or my visitors at all.
1 month ago
I don't see the lack of TLS as something to really worry about. Unless you exchange sensitive data with the site, it doesn't really matter anyway. And let's be honest: about 20 years ago we all used to freely enter credentials, reused on other sites, without worrying whether the data were transmitted over a secured channel - even though SSL/TLS has been around since the 90s, when the browser war happened. And most users today still have that mindset: if a browser warns about an unsecured transmission, they ignore it anyway.
I once played around with simple RSA/AES encryption based on some simple JavaScript - and done "properly" one can replicate the security of modern PKI TLS without using it. But that goes towards "don't roll your own crypto".
2 months ago