Jesse Silverman

Ranch Foreman
since Oct 25, 2020
Been programming "since forever" but Java was always a second (or third) language.  It's moving to the front seat.  I'm mostly harmless.
New York

Recent posts by Jesse Silverman

Because an interface can extend another interface, adding one or more additional abstract methods to be implemented by an implementing class.

In fact, they could also add one or more default or static methods at the same time, if they chose.

Historically, an interface needed to be implemented to actually be used, but nowadays you can fill an interface with default and static methods and call them.

The following is all legal, I just compiled and ran it:


Bottom line, classes can only be extended (unless final).

Interfaces can be either extended by other interfaces (normally called sub-interfaces) or implemented by classes.
If the classes are not final, they can be further extended by a sub-class that may decide to implement the interface differently, or simply add other functionality.
Or neither, but there would be no point to doing that.
They can test (in Java 11) to make sure you know that default and static methods in the interface don't count towards the total!

That is important, useful and fair.

This question has a serious "Simon Says!!" feel to it due to the inherited Object methods ambiguities.

Maybe showing them quite explicitly and having you recognize that they came from Object, so they don't count, might be fair play; going beyond that just dives into murk and mire.

This is an example of why I get so OCD about minor details when I am studying Java with an intent towards certification.

Some people are like "Dude, that is a tiny minor detail, relax!"

But many of those tiny minor details turn out to be more major than other points, and can easily cause you to blow a question or five.
Walking back just a tiny bit, I do see in the talk I linked to that he does say several times how nice it is when you can just look at a class definition and see that it has no leaks without even looking at any function bodies.  Of course, that isn't possible unless the resource in question is an inline instance data member, or a unique_ptr<Rsrc> as a data member.  In the latter case, it could be lazily initialized, but would be held on to until the object itself went away.

RAII can still apply to resources that are locally scoped stack objects, tho.

Maybe like Encapsulation, there are two definitions floating around?
Encapsulation Definition 1:
Functions or Methods are packaged together with data members and said to be "Part of a class".
Not the definition I use; it is necessary but not sufficient the way I use the term, because when I say it, I also mean that...
Encapsulation Definition 2:
data members of the class are made private, using Properties or getters/setters as part of the interface to let class users inquire about or update the member values.
Some internal methods part of only the implementation and not the interface will also likely be categorized as private, the public methods define the API.

RAII Definition 1:
All resources needed anywhere in any method of a class are reserved up front in the constructor for that class, no matter how much or little of the object's lifetime they are actually needed for.  They are held on to in every case until the end of the Object's lifetime when they are freed up in the destructor.
RAII Definition 2:
Resources wrapped in objects, obtained via constructor and disposed of via destructor.  No explicit new/delete.  They may live as automatic variables on the stack of some local function if short-lived, as inline data member variables of a containing class if sharing the lifetime of that object is appropriate, or indirected through unique_ptr<> or shared_ptr<> types from the STL, if other lifetimes/lifecycles are necessary.
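
Here is a minimal sketch of what I mean by Definition 2; the File and Parser types are just illustrative stand-ins, not anything from a real codebase:

#include <cstdio>
#include <memory>
#include <stdexcept>

// The constructor acquires the resource, the destructor releases it,
// and no user code ever calls fclose() directly.
class File {
    std::FILE* f;
public:
    explicit File(const char* path) : f(std::fopen(path, "r")) {
        if (!f) throw std::runtime_error("could not open file");
    }
    ~File() { std::fclose(f); }
    File(const File&) = delete;             // no accidental double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f; }
};

void useLocally() {
    File log("data.txt");    // automatic variable on the stack, short-lived
    // ... use log.get() ...
}                            // destructor runs here, even if an exception is thrown

class Parser {
    File input;                   // inline data member: shares the lifetime of Parser
    std::unique_ptr<File> aux;    // indirected; still freed automatically with Parser
public:
    Parser() : input("data.txt") {}
};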
11 hours ago
I decided that maybe I was re-defining RAII to include cases that the definition didn't really cover.
I don't feel that I did.

Looking it up, I see two examples from Stroustrup:
https://www.stroustrup.com/bs_faq2.html#finally

In his only example, the FileHandle is tightly scoped to the function f() which requires it, which makes me think the lifetime is appropriate (never declared too early nor destroyed too late).

The fact that the next section talks about the long-deprecated, now-removed auto_ptr<> suggests this is not new but extremely classic.

More modernly:
https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#e6-use-raii-to-prevent-leaks

That also seems to keep the lifetime of the resource utilizing RAII to the scope of one function/method that needs it.

Yes, making the resource in question an inline instance member of the containing object type is only appropriate if the lifetime and scope of where it is needed lines up with the lifetime of the object.  But RAII applies to local stack variables too.

I am not sure if it properly applies to shared_ptr<T> as well, I was using it that way.

I may have misunderstood the source of Stephan's disenchantment with RAII, but if it was just that it unnecessarily extends the lifetime of expensive resources, I don't get it, unless he was thinking that it is only RAII if you make the resource an inline data member of the containing object class, and not if you use the idiom as needed in local methods or functions.
13 hours ago
Lastly, a note about Herb Sutter.

For some reason, when I had first learned C++, despite getting A's in courses that half the class failed and most of the rest of the class got C's and D's, I found his materials soaring over my head.  I had a feeling I was pretty good at C++, but could still at best barely follow along with him.

One or more of the following three things must have happened:
1. Herb Sutter realized he was doing talks that the dumbest 95% of C++ programmers couldn't appreciate, and wanted to expand his audience, or Microsoft complained that he was scaring too many people away from C++; either way he is purposely being accessible.
2. The whole C++ community realized that they were working with a language that too many people felt was "too hard to use" and needed to make things easier, so most of the efforts at evolution went precisely to make things easier, leaving the nightmarishly hard parts there for those who enjoy them but able to be ignored by most programmers most of the time.
3. I got smarter?

Whatever happened, my reaction to Herb Sutter C++ talks went from filled with anxiety, fear and trepidation that "this stuff is really hard to get right!" to feeling that it is all there in a way that, once one learns it all, eventually becomes beautiful, simple and efficient.  Once one learns all the new better ways of doing things, that is.

I'm not there yet.
13 hours ago
I still to this moment don't understand why Java allows you to redeclare abstract methods inherited from Object in your interfaces -- I don't see what this really ever buys you except confusion.

I guess that someone decided that was allowed and legal long before there was any notion of Java 8's functional interfaces, they decided to let you do this for some reason, and we don't break compatibility without the utmost of necessity.  We kill ourselves before introducing new keywords, etc. too.

[Actually, straining my brain, I guess it could be to suggest that the implementing class should not be content with inheriting Object's implementation, but that they should be overriding it in an appropriate manner?]

From what I can tell here:
https://docs.oracle.com/javase/specs/jls/se11/html/jls-4.html#jls-4.3.2

There are eight, or nine, or ten (I used to have it memorized; I didn't have the energy to even count them right now) abstract methods belonging to Object that may optionally be re-declared in any interface, none of which count towards the total of exactly one abstract method that is required for a legal @FunctionalInterface.

I think that the question is just trying to ensure we are aware of these rules, but I find it even more confusing the way it is worded.

I would protest this question, but most of the fault lies with Java's existing ability to re-declare Object methods.

Many free preparatory materials don't even go into the odd fact that interfaces can re-declare Object methods that don't count towards the "total of one".

While it is fair to write a question that would cause someone who used these materials to prepare for the exam to get the wrong answer, this feels more like the literacy tests that used to be used to disenfranchise voters.  I know more than I'd ever want to about the requirements for @FunctionalInterface and still have reservations about what answers they are looking for.

Working backwards from the answers,
B, not A because if the single abstract method happens to be a re-declaration of one from Object, then A itself isn't a functional interface.  Also, even if A isn't for that reason, maybe B declares one real, new single abstract method...

C certainly isn't true in two ways, either Cub declares no NEW non-inherited abstract methods, and Panther had one, or B declares one and Panther had none (tho then A wasn't a @FunctionalInterface)...

At least D is obviously wrong, if both methods were just re-declarations of Object methods, then Cub would need one to be a valid @FunctionalInterface, if NEITHER were, then it is absolutely hopeless, but they don't say that.

E seems more right to me than F, if the question isn't stating that A inherits one from Object and declares one new one.

It is not the fault of the mock question authors nor the exam authors (presumably) that Interfaces can (pointlessly?) re-declare abstract methods inherited from Object.  Since that can happen, it is proper to require knowing that this can happen and that this doesn't count towards the total of One that is necessary.

I'd go further and agree that one needs to know that neither default methods, private methods, nor static methods (whether public or private) count towards that total of one.  They can certainly be there or not, and do not conflict with the requirements.

Any questions about validity of candidates for @FunctionalInterface should be easily answered by someone that knows these rules.  From the part of the question that made it into this thread, this would be a questionable question.

I failed to find this in my Sybex 816 book -- so I haven't looked directly at the question, but from what I saw in this thread, I'd have been confused and stressed by the question and answers, and I think I could teach a mini-lesson on this correctly.
Hi Stephan:

I will first say that you are very likely a more competent programmer than I am in more than one language -- so I take everything you say very seriously.

I do feel that regarding recent talks that I have been watching, you may be mis-characterizing RAII in that it has a much larger (and umm, narrower) scope than you make it out to be.

I am watching a Herb Sutter talk that explains:
https://www.youtube.com/watch?v=JfmTagWcqoE&t=139s

About 5 minutes in, he explains that 80% of RAII can be LOCALS or inline members.

10% using explicit unique_ptr<>

10% actually needing shared_ptr<> (which admittedly was overused by those who were like "Great, reference counting!!  I'll just use it everywhere!").

Initializing an expensive resource as a stack variable local in scope to some deeply nested if inside of a nested loop is still RAII, with a very short scope and not even a unique_ptr (or any pointer) in sight.  If the necessary lifetime of the resource availability coincides with that of the containing object, great, then it can be an inline member ('automatic' but only on the stack if the containing object also is).
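
A tiny sketch of the kind of thing I mean (the error-log file here just stands in for some expensive resource; the names are all made up):

#include <fstream>
#include <string>
#include <vector>

void process(const std::vector<std::string>& records) {
    for (const auto& r : records) {
        if (r.rfind("ERROR", 0) == 0) {
            // RAII with the tightest possible scope: the file is opened here
            // and closed when this inner block ends -- no pointers in sight.
            std::ofstream errLog("errors.txt", std::ios::app);
            errLog << r << '\n';
        }   // errLog's destructor flushes and closes the file right here
    }
}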

Admittedly, as some point out, there is a lot of machinery in C++ dealing with the fact that you can have automatic/stack variables of endless custom data types, and things you need to avoid doing, like object slicing when passing a derived sub-class object by value where a base super-class value is expected (by assignment or as a parameter), or returning a pointer or reference to something about to evaporate, but the flip side of that is that all kinds of operations never need to involve the heap at all.

The hard work (which may or may not have paid off, the question of the thread) was to extend this 'automatic' resource management to things that have a natural lifetime equal in scope to another object (an inline member), some subset of that (a member unique_ptr), and in fact to the sorts of things that seemed to demand reference counting (shared_ptr) and even those that seemed to scream for garbage collection because reference counting would result in islands of isolation (shared_ptr plus weak_ptr).

As someone who has done not only tons of C, but most of my C++ in pre-C++2011 "legacy" C++ styles, I can see that they feel they have significantly improved the functionality available in terms of Memory Safety, Bounds Safety (ranges, array_view, string_view, etc.) and others outside the scope of this question, which is memory safety.

I can also see they have added a lot to C++ that needs to be learned, without actually removing the "bad old ways" of doing things that I had already learned and used for years, so that programs and programmers stuck in the past can continue working as they were.  There is a huge push to standardize and teach the newer, allegedly safer and better subsets, and get people to mostly ignore the older, more dangerous, worse subset by default...

It is a lot to make sense of, but I feel that RAII is a term that can be properly applied to stack-declared resources of the tiniest innermost scope, way up to shared pointers that may be accessed across different threads.  If you don't have explicit new and delete, that is all RAII, despite the obvious fact that you should still be scoping the lifetime and accessibility of expensive resources to the smallest scope possible in your design.
14 hours ago
Can you provide some more context for this question so that those who don't have the exact book can help to answer it?

I own the two-book version and was having some trouble finding the part of my text that corresponds to the CSG.

The authors will eventually see this, but, like resource management via finalize(), it might be better not to have to wait for them or for someone else who has the exact version to come along.

The contents of the two book version are essentially the same as the CSG, but it is sometimes difficult to find the right pages.

Also, people who don't own the books may wish to help answer.
For that purpose you can provide more context from something you think may be wrong.

Don't post content from the Sybex Books in random places on the web.
Don't post extensive content from random copyrighted material here.

However, in this forum, it is permitted by the publisher to post enough context about questions about possible errata in the books to allow the forum as a whole to productively answer you.
I will take a stab.

The confusion comes because the DoubleSupplier interface, while beginning with a capital 'D' because it is the name of a class or an interface (well, an interface here), does not supply a Double, as the name might suggest, but a primitive double, small 'd'.

As long as the implementation supplies something that can be promoted to a double, we are good.
That would include a byte, a short, an int, a long, a float, or a double itself.

The Integer we return can be auto-unboxed (more easily done than pronounced) to an int, which is merrily converted to a double.

There is an annoying rule to remember that upcasting before boxing can't be combined in one statement, but boxing before upcasting is fine.

This means we can't say this and have it work:
jshell> byte b = 7;
b ==> 7

jshell> Long L = b;
|  Error:
|  incompatible types: byte cannot be converted to java.lang.Long
|  Long L = b;
|           ^

we also can't do this:
jshell> Byte B = 7;
B ==> 7

jshell> Long L = B;
|  Error:
|  incompatible types: java.lang.Byte cannot be converted to java.lang.Long
|  Long L = B;
|           ^

We have no problem getting a primitive double out of our Byte, however:
jshell> double d = B;
d ==> 7.0

Because a DoubleSupplier, despite the initial capital 'D' in the name, only requires something that can be upcast to a double primitive, perhaps after auto-unboxing, we are all good.

I agree this is a bit confusing.
Fair enough.  I care rather a lot about them now, but I spent much of 11+ years largely fixing bugs in that category in the code of a bunch of smart, highly-paid developers who sometimes got messy in these regards.  It is also the single biggest change (there are a lot of others) in what "knowing C++ pretty well" meant between 2009/2010 and today.  Many of the younger people I work with never touched C++ before 2011 version, even if it is their main language.
2 days ago

Tim Holloway wrote:

Jesse Silverman wrote: One of the worst things about C, carried over to C++, was how easy it was to lose track of whose responsibility it was to free something.



That's probably why I have been so successful. I never defined ANYTHING unless I had a precise idea when I was going to create it, where I would keep it while in use and when I was going to release it.

And related unto this is the following: DO NOT create the same resource in more than one place, NOR free it from more than one place. That rule got reinforced when I worked with IBM's OS/2, which was rather infamous for having more than one way (and place) to manage resource files. It's a lot easier to put in logging, breakpoints, and other aids when you don't have to go running all over the system for stuff.


That's easy in small projects you have full control of, okay in large projects you have full control of if you have good discipline, and nigh impossible in large projects with numerous development teams building a large, classically architected system with hundreds of classes, thousands of source files, and millions of lines of code.  I always had anxiety about those issues when I was working on such systems, even when fixing bugs (it's so easy to 'fix a memory leak' in a way that causes some weird, timing-dependent, data-dependent crash somewhere in a callback or something, where someone had a copy of a raw pointer somewhere far away and used it.  It might or might not get found during testing, if the tests were extensive enough...).  I understand why people either want garbage collection, or want something in the Modern C++ or Rust style that still fixes this issue by FORCING ownership and lifetimes to be clear.

Tim Holloway wrote:
I was re-reading the executive summary for Rust this morning and that's basically what the key to Rust is. They differ in that they can pass stuff along like a hot potato, but one and only one component owns a resource at any given time.


I am not sure how that properly compares and contrasts to the current Unique/Shared/Weak Modern C++ approach, where only Unique and Shared pointers express ownership...

Tim Holloway wrote:
Incidentally, I'm technically doing C++ these days. Most of my programming has been for Arduino and while I've never seen a definitive yea or nay, the "Arduino Programming Language" is virtually indistinguishable from C++ to me. Though this is the kind of stuff that doesn't use advanced features like namespaces and VERY rarely do I allocate/free RAM, since there's so little of it. About the worst offenders are the String objects and the way that the Arduino String class is designed it's not that easy to leak memory. And since C++ allows overloading the subscript operators, you'd basically have to resort to deliberate pointer-rape to get a buffer overflow.


I definitely think null-terminated C strings are the most hazardous part of C, and the classical C arrays that get passed around essentially as pointers that you'd better not forget or confuse the size of.  Maybe tied with ownership and malloc()/free() confusions, maybe not.
I still can't believe that people think of memory leaks as the worst problem with accidents in memory management; by far the worst are use-after-free, double-free, and worst of all "reincarnation", where a really stale pointer points at a currently valid Student instance once again after the free/delete, but it isn't the one you thought it was.  On the other hand, some people misuse the term "memory leak" to cover all of those, which is wrong but fairly common.

My first few years of programming, mostly BASIC/6502 assembler/FORTRAN had no dynamic allocation.
I remember thinking "What if you don't know how many you will need?" and thinking the answer was "Well, you HAVE to know up front, that's programming!"  I was 12 or 13, but my very long-term memory is so good I remember how exciting it was when I learned PL/I (really PL/C first) and got to do true dynamic stuff (I don't remember the details there), and then how glad I was to see this in both Pascal and then, the next year, C.  "I'll never go back to compile-time limits again!" I thought.  I vaguely remember that there were two levels of Pascal, one of which only allowed arrays whose size was known at compile time, and the other let you size them at run time.  Who cares?  I can make a (singly) linked list!

So there is an important corner of programming where dynamic stuff and heap management isn't very relevant, but for most programming it still is.

In 2010 it felt like Garbage Collection had won and C++ was the holdout.  I used to say "Everything before C++ you had to manage everything yourself, C++ gives you the tools (rule of three) but you still hold the responsibility, and everything newer than C++ just resorts to garbage collection and only experts even think about that".

Now, I feel Rust and possibly C++ (Modern Style) have largely changed that, but I am still learning/adapting to this world.
The difference between:


and:


has been making more of a difference than I thought it would to me, tho some people I should trust were implicitly stating that when they said the new normal should be: "No Explicit New, No Explicit Deletes" which inspired the name of this thread.
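
Roughly, the kind of contrast I mean, with Widget as a made-up stand-in type:

#include <memory>

struct Widget { /* ... */ };

// The old way: explicit new and delete, with ownership tracked by hand.
void oldStyle() {
    Widget* w = new Widget();
    // ... use w ...
    delete w;    // easy to forget, to double-free, or to skip on an early return or exception
}

// The new normal: no explicit new, no explicit delete.
void newStyle() {
    auto w = std::make_unique<Widget>();
    // ... use *w ...
}                // freed automatically when w goes out of scope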

I feel most (all?) of them considered RAII to be a good thing; we have one vote so far that says "I never bought into RAII for reasons I will explain here."
I wasn't expecting that angle.
2 days ago
Does this scale for shared resources in a last-one-out-turn-off-the-lights way?

When I first heard about reference counting, I thought "Great!" (many decades ago).

Then when I realized about cycles of references breaking that (A maintains a reference to B and B maintains a reference to A, 'islands of isolation') I figured "Well, that kills reference counting!!  The mirage was nice while it lasted."  I was a bit premature, or others are late.

Java makes no use of reference counting; everything is mark-and-sweep garbage collected except primitives.

Python started with reference counting, in the way I thought of it as a kid before I realized it was broken.

CPython (but not Jython or IronPython, or maybe PyPy and a couple of others) now has a Curious Combination: Classical Reference Counting for every object, plus a supplemental Java-style collector for the object types that can (and do) form circular references.  It is the Classical Reference Counting part that prevents removal of the Global Interpreter Lock (many people have looked into getting rid of it; it gets fiendishly complex to guarantee that everything "still works").

Why not get rid of the classical reference counting?  It provides relatively quick and deterministic clean-up for so many things that some code is relying on that behavior.  This relates to why resource management via finalize() in Java is deprecated, I believe: destruction "some time later, probably" isn't good enough, due to the mismatch between memory pressure and expensive resource allocation/freeing.  So removing it would make many existing programs and their developers/users sad or angry.

In C++, despite having hook points for everything (constructors, destructors, copy constructors, assignment operators...), there was still the problem inherited from C of being unsure about who was responsible for freeing some random pointer you see lying around, if anyone (automatic and static memory should never be freed by any user code; heap-based memory should be freed exactly once, after nobody is trying to use it anymore -- but by whom, and how do they know that?).  Some people, like Tim, rarely if ever made mistakes around this issue, but the vast majority of large production code found it a nightmare for whatever combination of business, technical and personal/personnel reasons.

As of C++ 2011, refined a bit in newer versions, there is a rallying cry of "no owning raw pointers, no explicit new or delete!"

The modern way of programming C++ involves three kinds of wrapped pointers: unique_ptr<T>, shared_ptr<T>, and weak_ptr<T>.

"Raw pointers" (the classical kind inherited from C that have no concept of ownership associated with them) are still fine, but they are never used to free anything, nor are they ever used to imply ownership or lifetime of objects...

A unique_ptr<T> guarantees recovery of the memory when it goes out of scope.  Additionally, custom deleters can be passed in on creation to do other clean-up work logically belonging there.
That unique_ptr could live on the stack, and will be cleaned up automatically just like an int or other value type when it goes out of scope, whether normally or via an exception.  If it lives in an object or a container, it gets destroyed at the appropriate time when that destructor gets called, freeing its resource.  This is all deterministic and precludes the necessity for any garbage collection.
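
A small sketch of both points, wrapping a C FILE* purely as an illustration:

#include <cstdio>
#include <memory>

void writeGreeting() {
    // unique_ptr with a custom deleter: fclose() runs automatically when p
    // goes out of scope, whether we leave normally or via an exception.
    std::unique_ptr<std::FILE, decltype(&std::fclose)> p(std::fopen("out.txt", "w"), &std::fclose);
    if (p) {
        std::fprintf(p.get(), "hello\n");
    }
}   // file closed here, deterministically -- no garbage collector involved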

A shared_ptr<T> uses classical reference counting to manage the resource.  Copies of it cause the reference count to go up; when the copies go out of scope the reference count goes down.  When it hits zero, the resource is freed and an optional custom deleter is also called if one was supplied upon creation.
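
For example (Connection here is a made-up stand-in for an expensive shared resource):

#include <memory>
#include <vector>

struct Connection { /* some expensive shared resource */ };

int main() {
    std::vector<std::shared_ptr<Connection>> users;
    {
        auto conn = std::make_shared<Connection>();   // reference count is 1
        users.push_back(conn);                        // 2
        users.push_back(conn);                        // 3
    }                                                 // conn leaves scope: back to 2
    users.clear();                                    // count hits 0: Connection destroyed here
}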

A weak_ptr<T> is a secondary or auxiliary type always associated with a shared_ptr<T>; it is used solely to break the possible cyclic references (which I didn't think about as a kid, which Python didn't think about at first either, and which I later thought destroyed reference counting as a viable solution for anything).  The weak references can be turned into shared references during the object's lifetime, but do not count towards the reference counting totals (they happen to be counted separately, I am not sure why).  If the object still exists, great, you can use it; otherwise you will get back a null, and in any case a weak_ptr won't delay or block destruction/freeing in and of itself.
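
A sketch of the usual parent/child example (all names are illustrative):

#include <iostream>
#include <memory>

struct Child;

struct Parent {
    std::shared_ptr<Child> child;    // owning reference
};

struct Child {
    std::weak_ptr<Parent> parent;    // non-owning back-reference: breaks the cycle
};

int main() {
    auto p = std::make_shared<Parent>();
    p->child = std::make_shared<Child>();
    p->child->parent = p;            // no cycle of owning references

    // A weak_ptr has to be promoted with lock() before use; it may have expired.
    if (auto strong = p->child->parent.lock()) {
        std::cout << "parent is still alive\n";
    }
}   // both objects really are destroyed here; with two shared_ptrs they would leak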

Many will look at all this and say "I think I will stick to garbage collection, thanks!" but Java seems to agree that finalize() for expensive objects doesn't cut it, due to the aforementioned mismatch.  Try-with-Resources is the modern Java solution, but I have seen people say "This is great, provided I know I am done with the resource at the close of my method, but what about when I do not know when we are done with it, until we are done with it, far away from the close of the method?"

I was calling the Modern C++ strategy "Deterministic non-garbage-collected resource management" but that wasn't quite true.  Unique pointers are greatly preferred, and then it is deterministic.  When there is a resource that actually legitimately needs to be shared among entities with different lifecycles, shared pointers (and some weak pointers if necessary) do guarantee cleanup, but if they are in different threads, and possibly if not, there is a last-one-out-turn-out-the-lights behavior which I don't know if I want to call "deterministic" rather than "guaranteed to happen as soon as nobody is using it".

RAII often comes up when a Java programmer asks why C++ doesn't have a finally clause in try/catch; the answer is "Because we use RAII, and stack unwinding takes care of that stuff automatically."
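
A typical example, using std::lock_guard where Java would reach for finally (the mutex-protected counter is just for illustration):

#include <mutex>
#include <stdexcept>

std::mutex m;
int counter = 0;

void bump(bool fail) {
    std::lock_guard<std::mutex> lock(m);   // mutex acquired here
    ++counter;
    if (fail) throw std::runtime_error("boom");
}   // no finally needed: stack unwinding runs lock's destructor and releases the mutex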

Since C++ 2011, the modern C++ idiom is to extend that to all owned resources.  It is often described now as a "Solved Problem!" and I was asking here if people perceive that as hyperbole or basically true.  We have one vote for "not impressed with RAII in general" so far if I count correctly.
2 days ago
That was a weirder than average question.

Perl has a single keyword for that, spelled oddly:

elsif
The "else if" keyword is spelled elsif in Perl. There's no elif or else if either. It does parse elseif, but only to warn you about not using it.



Python doesn't provide switch/case structures.
It seems like an easy thing to add, but it is by no means easy.  I once read Guido's rationale for why they didn't have it, and I wanted to stop but just kept on going.

So the else if kinda tree becomes even more important.
It looks like this:


Why does Perl spell it 'elsif' but Python spells it 'elif'?

Elif I know.
Tim:

I will go further to make a stronger statement.
You are reasonably unique, I will point out, in that you rarely caused memory leaks AND also rarely caused double-frees or use-after-free, both of which are quite a bit WORSE than that.

Many people who almost never cause memory leaks have the unenviable side-effect of freeing things just in case, to be sure, and cause far more problems in terms of Undefined Behavior, sporadic crashes and bizarre behaviors, and larger numbers of licenses for Purify, etc. etc. when they have timing, data or path-dependent double-free or use-after-free bombs in their code.

Better than the people who would return freeable memory objects in 99% of the cases, but static or otherwise-owned in 1%, or vice versa.  Seen that more than a dozen times...kinda funny when someone asks you to "fix the caller"...

So yes, it is almost like RAII was just an empty slogan until 2011, when C++ finally started making use of it everywhere for almost everything because we suddenly realized we could?

I will endeavor to remember that this is a design pattern that is by no means restricted to malloc()/free()/new/delete but any resource that gets obtained, held, and released.

The Question Is Tho:
Does assiduous use of make_unique<>, make_shared<>, weak_ptr<> etc., with no visible delete nor visible new nor visible constructors except for stack objects in the code, no raw owning pointers anywhere -- effectively turn C++ into Rust in this particular regard?  I know there are many other dimensions along which to prefer one or another, I am only talking about whether we can actually say, as some seem to "Okay, we have solved the resource leak and use-after-free issues, let's go back to the other problems that are more interesting!!" or if that is prematurely declaring victory.  Of course, you already didn't have a problem before this, but plenty of others most decidedly did.

I once had been doing pretty well in a job interview like, 15 or 16 years ago.  When they heard I liked auto_ptr<> [long deprecated and now removed] there was no recovery from that, the whole rest of the interview became very unpleasant.  With a look of disgust, the interviewer asked with evident sarcasm, "So, you think that Smart Pointers can save us from memory safety problems???"  I nodded awkwardly....Apparently auto_ptr<> had not brought them joy and contentment.  I guess somebody copied them...

Kind of like some of the functional features that Oracle supplied in Java 9, 10 or 11 that made Streams much more fun to work with...make_unique<> feels WAY easier an API than before, because it combines things [the actual new, the initialization, the wrapping into a unique pointer] that you normally want together.  The earlier C++11 idiom reminded me of a joke I used to love to do in Junior High School and High School.  When someone forgot their notebook and came to borrow paper, I would slowly and deliberately pop open my loose leaf binder, then rip a page out straight sideways, then carefully close the binder rings again and hand them their page or pages with holes torn right thru to the edge.  Somehow I thought that was funny, in that it caused them to notice an Anti-Pattern they often engaged in, and to actually see it in a very obvious case.  unique_ptr<> without make_unique<> feels like that to me at the moment.
3 days ago
I am trying to update my C++ style to current modern standards.

I already had memorized, as a slogan, that good modern code not only has no explicit calls to delete, but no explicit calls to new either.

One of the worst things about C, carried over to C++, was how easy it was to lose track of whose responsibility it was to free something.

I think I understand why the things first called 'Universal References' were changed to being called forwarding references, or at least one context where it makes more sense.

make_unique<T>() combines the allocation, the constructor call, and the wrapping into a unique_ptr all in one: no new, but also no explicit constructor call; the args forward to the appropriate constructor.
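
For instance (Widget and its constructor arguments are just placeholders):

#include <memory>
#include <string>

struct Widget {
    Widget(std::string /*name*/, int /*size*/) { /* ... */ }
};

int main() {
    // One step: no 'new' and no constructor call visible at the call site;
    // the arguments are forwarded to Widget's (std::string, int) constructor.
    auto w = std::make_unique<Widget>("gizmo", 42);

    // The equivalent pre-make_unique spelling of the same thing:
    std::unique_ptr<Widget> w2(new Widget("gizmo", 42));
}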

It all makes more sense now.
3 days ago