Many best-practice guides recommend using final classes to prevent fragile class hierarchies, but is that actually followed in the applications you work on?
I know there are other ways to prevent subclassing; I'm just curious about this particular usage.
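For context, here is a minimal sketch of what I mean, both the final keyword and one of the alternatives (a private constructor with a static factory). The class names are made up for illustration only.

```java
// A small value type marked final so nobody can subclass it.
public final class ImmutablePoint {
    private final int x;
    private final int y;

    public ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int x() { return x; }
    public int y() { return y; }
}

// One alternative: a private constructor plus a static factory.
// Outside code cannot extend this class because a subclass constructor
// would have no accessible super() to call, so the hierarchy stays closed
// even without the final keyword.
class Registry {
    private Registry() { }

    public static Registry create() {
        return new Registry();
    }
}
```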
I have to say that I haven't followed that recommendation either.
However, I have never encountered the problem of somebody working on the same code base as me extending classes unnecessarily, so in practice it hasn't mattered.
Let me also mention that "best practices" change over time. Back when I started writing Java (and code in other object-oriented languages), there was a "best practice" (although that phrase wasn't used back then) that an object should be responsible for everything about itself: an object should be able to write itself to a database, render itself as HTML, and so on. Nowadays "best practice" says that rendering as HTML is the view's responsibility, not the object's, and that the database layer should be separated from the business layer.
It only took about a week for me to notice that having an object know how to render itself as HTML was a dumb idea; it became obvious as soon as I had two pages that needed to display different amounts of detail about the same object. But I still have code in which objects know how to update themselves in the database. One day I'll have to redo that.
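To illustrate the separation I'm describing, here is a rough sketch (the Order, OrderSummaryView, and OrderDetailView names are invented for this example): the domain object holds data only, and each page has its own view, so different pages can show different levels of detail without the domain class knowing anything about HTML.

```java
import java.math.BigDecimal;
import java.util.List;

// The domain object carries data and business state only.
class Order {
    final String id;
    final List<String> items;
    final BigDecimal total;

    Order(String id, List<String> items, BigDecimal total) {
        this.id = id;
        this.items = items;
        this.total = total;
    }
}

// One page only needs a one-line summary...
class OrderSummaryView {
    String render(Order order) {
        return "<li>Order " + order.id + ": " + order.total + "</li>";
    }
}

// ...while another page needs the full item list. Neither view
// requires any change to the Order class itself.
class OrderDetailView {
    String render(Order order) {
        StringBuilder sb = new StringBuilder("<h1>Order " + order.id + "</h1><ul>");
        for (String item : order.items) {
            sb.append("<li>").append(item).append("</li>");
        }
        return sb.append("</ul><p>Total: ").append(order.total).append("</p>").toString();
    }
}
```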
subject: using final classes in real-life applications