Does this scale for shared resources in a last-one-out-turn-off-the-lights way?
When I first heard about reference counting, I thought "Great!" (many decades ago).
Then, when I learned about cycles of references breaking that (A holds a reference to B and B holds a reference to A: 'islands of isolation'), I figured "Well, that kills reference counting! The mirage was nice while it lasted." I was a bit premature, or others are late.
Java makes no use of reference counting; everything except primitives is managed by a tracing (mark-and-sweep-style) garbage collector.
Python started with reference counting, in the way I thought of it as a kid before I realized it was broken.
CPython (but not Jython or IronPython, or maybe PyPy and a couple of others) now has a curious combination: classical reference counting for every object, plus a supplementary cycle-detecting collector for the container types that can form circular references. It is the classical reference-counting part that has prevented removal of the Global Interpreter Lock (many people have looked into getting rid of it; it gets fiendishly complex to guarantee that everything "still works").
Why not get rid of the classical reference counting? It provides relatively quick and deterministic clean-up for so many things that some code relies on that behavior. This relates, I believe, to why resource management via finalize() in Java is deprecated: destruction "some time later, probably" isn't good enough, due to the mismatch between memory pressure and expensive resource allocation/freeing. So removing it would make many existing programs and their developers/users sad or angry.
In C++, despite having hook points for everything -- constructors, destructors, copy constructors, assignment operators -- there was still the problem inherited from C of being unsure who was responsible for freeing some random pointer you see lying around, if anyone. (Automatic and static memory should never be freed by user code; heap-based memory should be freed exactly once, after nobody is using it anymore -- but freed by whom, and how do they know?) Some people, like Tim, rarely if ever made mistakes around this issue, but for the vast majority of large production code it was a nightmare, for whatever combination of business, technical, and personal/personnel reasons.
As of C++ 2011, refined a bit in newer versions, there is a rallying cry of "no owning raw pointers, no explicit new or delete!"
The modern way of programming C++ involves three kinds of wrapped ("smart") pointers: unique_ptr<T>, shared_ptr<T>, and weak_ptr<T>.
"Raw pointers" (the classical kind inherited from C that have no concept of ownership associated with them) are still fine, but they are never used to free anything, nor are they ever used to imply ownership or lifetime of objects...
A unique_ptr<T> guarantees recovery of the memory when it goes out of scope. Additionally, custom deleters can be passed in on creation to do other clean-up work logically belonging there.
That unique_ptr could live on the stack, and it will be cleaned up automatically, just like an int or other value type, when it goes out of scope normally or via an exception. If it lives in an object or a container, it gets destroyed at the appropriate time when that destructor runs, freeing its resource. This is all deterministic and removes any need for garbage collection.
A shared_ptr<T> uses classical reference counting to manage the resource. Copying it makes the reference count go up; when the copies go out of scope the reference count goes down. When it hits zero, the resource is freed, and an optional custom deleter is also called if one was supplied upon creation.
A weak_ptr<T> is a secondary or auxiliary type always associated with a shared_ptr<T>; it exists solely to break the possible cyclic references (the ones neither I nor early Python thought about when I was a kid, and that I later assumed destroyed reference counting as a viable solution for anything). A weak reference can be turned into a shared reference during the object's lifetime, but does not count toward the ownership total (weak references happen to be counted separately; as I understand it, that count keeps the shared bookkeeping block alive until the last weak_ptr is gone). If the object still exists, great, you can use it; otherwise you get back a null. In any case it won't delay or block destruction/freeing in and of itself.
Many will look at all this and say "I think I will stick to garbage collection, thanks!" But Java seems to agree that finalize() for expensive objects doesn't cut it, due to the aforementioned mismatch. Try-with-resources is the modern Java solution, but I have seen people say "This is great, provided I know I am done with the resource at the close of my method -- but what about when I do not know we are done with it until we are done with it, far away from the close of the method?"
I was calling the modern C++ strategy "deterministic non-garbage-collected resource management," but that wasn't quite true. Unique pointers are greatly preferred, and with them it is deterministic. When a resource legitimately needs to be shared among entities with different lifecycles, shared pointers (and some weak pointers if necessary) do guarantee cleanup, but if the owners are in different threads -- and possibly even if not -- there is a last-one-out-turn-off-the-lights behavior that I'm not sure I want to call "deterministic" so much as "guaranteed to happen as soon as nobody is using it."
RAII often comes up when a Java programmer asks why C++ has no finally clause on try/catch; the answer is "Because we use RAII, and stack unwinding takes care of that stuff automatically."
Since C++ 2011, the modern C++ idiom is to extend that to all owned resources. It is often described now as a "Solved Problem!" and I was asking here if people perceive that as hyperbole or basically true. We have one vote for "not impressed with RAII in general" so far if I count correctly.