Frank Carver

Sheriff
since Jan 07, 1999

Recent posts by Frank Carver

This is an unusual design. It invents new HTML tags, which is fine for CSS, but you are at the mercy of the browser designer as to how they are processed.

This page looks very different in different browsers!

Most browser-tolerant web design these days uses DIV and SPAN tags with classes and then applies the styles to the classes rather than to the tags themselves. This is a much more predictable approach.

Campbell Ritchie wrote:

Tim Cooke wrote:. . .

  • Effective Googling
  • Nobody else knows how to do that.



    I always like to begin my google searches with "Dear Mr Google, I would very much appreciate it if you could have a look for the following information. Thank you." It's only polite, really.

    Congrats to everyone. I really enjoyed the discussions for this book!

    Tim Cooke wrote:There is a slightly different flavour of "mistake" that you might categorise as a deliberate mistake



    The more I think about this the more I keep seeing different types of mistakes. This makes me think of what you might call a "temporary" or "semi-deliberate" mistake.

    It's easy to slip into thinking of a relatively static view of a codebase, and categorising some parts as correct and others as mistakes. But there is another view, which looks at the evolution of the code over time, where we can observe that some mistakes exist only for a particular period. The problem comes when such mistakes linger long enough to cause actual problems.

    I'm sure we have all been there. In order to meet a deadline or complete a different feature, you hack or hard-code something which is not key to the task in hand. Test-Driven Development (TDD) actively encourages this approach. The danger is that this hack or hard-coded result may end up going live, particularly on projects with a frequent delivery schedule. Such mistakes are always intended to be fixed, but we can never know the future, and in some cases the deliberate error remains in the system for much longer than originally planned.

    These kinds of mistakes count as deliberate, in that some developer implemented them with eyes open, but their inclusion in the deployed software is an accidental or unintended consequence.
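    A sketch of what such a "semi-deliberate" mistake often looks like in practice (the class and numbers here are invented for illustration, not taken from any real project): the first TDD-style implementation hard-codes a result just to make today's test pass.

```java
// Hypothetical example: a deliberately "wrong" first implementation,
// written with eyes open and intended to be generalised later.
public class ShippingCalculator {

    // Deliberate shortcut: a flat rate passes the current tests and lets
    // the feature ship, but it becomes a lingering mistake if the
    // follow-up work never happens and heavy parcels go live at 4.99.
    public double shippingCost(double weightKg) {
        return 4.99; // TODO: replace with weight-based pricing
    }

    public static void main(String[] args) {
        ShippingCalculator calc = new ShippingCalculator();
        System.out.println(calc.shippingCost(0.5));  // light parcel
        System.out.println(calc.shippingCost(25.0)); // heavy parcel: same price!
    }
}
```

    The TODO is the whole point: the mistake is visible to the developer who wrote it, but invisible to the running system.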

    Junilu Lacar wrote:The opportunity to teach less experienced or newer developers



    This is a key point, I think. There's an analogy with Test-Driven Development in here somewhere; let's call it Training-Driven Development, TDD  ;)

    A long-term project lives or dies by the quality of the maintenance. Initially it seems great to have a single developer who has a deep understanding of everything and can solve every problem quickly and efficiently, but eventually that developer is going to move on. What happens after that depends on how well the successors share that deep understanding.

    I have spent most of my career as a contract developer, typically brought in to work on codebases which have grown horrible through years of suboptimal fixes and "improvements". The story is almost always the same: a single hero developer (occasionally a small team) who was always too busy or too secretive to explain why and how, and who then moved on to a new project, beginning the decline into a ball of mud.

    Techniques such as pairing (and, as Junilu points out, mobbing) and code reviews can seem like a needless slowdown, just as writing tests can seem like extra work which gets in the way when you know you are writing good code.

    Test-Driven Development has gained a lot more traction, though, probably because the benefits pay off sooner. But the analogy holds, and investing in the future through immersive training has a place too.

    I'm not going to claim that these training techniques should always be used, though.

    I'm a huge Test-Driven Development fan (you only have to look back at my Ranch history to see that) but even I admit that it's not always appropriate. Some code will only have a very short life, such as the data analysis and graph drawing code I am currently writing for my PhD research, and there the overhead of writing tests is pointless. Likewise, some projects will be so personal or so short-lived that they will never be handed over to other developers. In those cases the overhead of training successors is also pointless.

    But in the context of a complex, long-lived project, succession planning and training is as important as any other "non functional" requirement. In such cases, collective ownership, pairing/mobbing, and code reviewing are some of the easier ways to integrate training in with the day-to-day development work.
    I have got a lot out of pair programming over the years, and also a lot out of code reviews. Just like everything else, though, there are different ways of doing these things, and some work better or worse for different people.

    For me, pair programming is a very intense experience. It feels like being 100% "on", and this is mainly due to the extreme accountability of someone else looking at the code as you type it (or of concentrating on the code your partner is typing). Because of this intensity, I can't do it for long periods (although, like gym time, you can build that stamina with repeated practice). After a pairing session, I always need some alone time to digest what we have done, and to catch up with all the other things which might have distracted me if I had been working on my own (cough CodeRanch).

    Pair programming also needs trust. Both partners need to trust that the other is paying attention and pulling their weight in the process. It also needs a good balance of skills.  If only one person is contributing, it becomes just another weight to drag along.

    When you get it right, pair programming can be very productive, and produce great code, but it's not for everyone. Like so many practices which originated from developers, it can easily chafe when imposed by management.

    Code reviews have some of the same benefits and drawbacks, and can be considered a kind of asynchronous pairing. Code reviews need trust and a good balance of skills, but they also need good communication skills: to explain what has been spotted, or to ask why something is as it is, without causing confusion or offence. Unlike pair programming, code reviews can work pretty well with non-developers such as business stakeholders, by walking through a solution and explaining it in the language of the problem domain rather than the language of code. I have seen this catch misunderstandings many times.

    A good code reviewer, like a good editor for a writer, has to be someone who you genuinely want feedback from, and who you know will pay attention to the details and catch things you have missed. Just as with pair programming, code reviews can be a huge waste of time if they are mandated as part of a process, particularly if that process requires developers to send code out to multiple people and wait for their responses. On the other hand, code reviews can be hugely useful if the team has a culture of shared responsibility, and developers are encouraged to seek out an appropriate person to take a look before pulling the trigger to deploy.

    Tomasz Lelek wrote:Each chapter tries to focus on a different set of trade-offs; for example, in chapter 1, you will find trade-offs between flexibility and the cost of the potential of bugs and the evolution of your code.
    Chapter 10 discussed trade-offs between consistency and atomicity in your systems. Chapter 6 focuses on Simplicity vs. the cost of maintenance for your API.



    I find it interesting that these are all in the domain of the software development process itself.

    There is, I think, another important trade-off between the ease/cost/simplicity of development and the cost/environmental impact of the deployed system.

    As some examples...
  • Building a system with a pleasant and effective user interface which generates static files for a website is usually more software development effort than building a dynamic application, but can be much faster and use less energy and need fewer servers when deployed.
  • You might want to write an embedded application in Java, say, but writing it in C might mean you can get away with a $1 microcontroller rather than a "proper" computer costing a hundred times as much and using at least ten times the power.
    I agree with Tomasz on this one.

    The most effective shared code libraries I have used have been run like open source projects, even if they have been inside a corporate firewall.

    The key things to remember when maintaining an open source project are that:
  • you don't know who all of your users are
  • you can't assume everyone is using it the way you do or for the purpose you do.

    When you work through the implications of those statements, you realise that you need proper versioning and API documentation, and you can't casually change something just because it works better for you.
    Picking the right components for a software system is always a problem, whatever level you look at it. To me the main issue is the lack of real information. Sure, there's plenty of "how tos", and tutorials, and code examples, and overexcited marketing material, but a distinct lack of concrete specifications and comparisons.

  • If you buy a car you want to know how many miles it does per gallon.
  • If you buy a kitchen appliance it usually comes with an energy rating
  • If you do any kind of electronics you always get a data sheet which goes into great detail

    With software? Not so much.

    I completely agree that there is no such thing as "best", but in order to pick an appropriate component you need information. Without that, the choice is either to spend a lot of time (and money) evaluating lots of components, or select by "gut feel" or "word of mouth".

    Following on from the "template engine" example: even if we restrict the GitHub search to only include template engines in Java, there are still several hundred to choose from. Admittedly, a lot of this is down to CodeRanch alum Lasse Koskela, who wrote a book which used creating a template engine as a case study, but there are still a lot to choose from. I have spoken to many developers, and so far none have been able to come up with a rational way to choose components.

    If you have ever worked with templated text in Java you have probably heard of Freemarker, Velocity, and Mustache. They get most of the coverage in articles and blog posts. They all do pretty much the same job.
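    That "same job" — substituting named values into text — is small enough to sketch. The toy renderer below is not the API of Freemarker, Velocity, or Mustache (each has its own syntax and feature set); it is an invented illustration of the common core they all share.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy template engine: replaces {{name}} placeholders from a model map.
// Invented for illustration; real engines add escaping, loops, partials, etc.
public class TinyTemplate {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\{\\{(\\w+)\\}\\}");

    static String render(String template, Map<String, String> model) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // Missing keys render as empty strings rather than failing.
            String value = model.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(render("Hello {{name}}!", Map.of("name", "World")));
    }
}
```

    Which is exactly why the choice is hard: when every candidate can do this core job, the differentiators are speed, flexibility, and maintenance cost — the things nobody publishes numbers for.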

    Any idea which is faster? Any idea which is most flexible?

    The same is true at an application level.

    Do you have any idea how much more electricity it takes to run a website using Wordpress compared to using a site generator and serving static files?
    Do you know which is faster or which uses less energy when selecting between webservers such as Apache, Nginx, and Lighttpd?

    Without this kind of information, how are engineers and software developers supposed to make sensible choices?
    I think that's fair.

    The programming book market is swamped with introductory books, and so any book which takes on some more advanced or nuanced topics is very welcome!
    And following on from Campbell's question, what about the continual pressure to "upgrade" to a different library, framework, or even programming language?
    Code duplication is a really interesting topic. I've had lots of conversations about the merits (or not) of code duplication with the microservices crowd.

    It's long been the generally accepted wisdom that code duplication is bad, and part of the job of a responsible developer is to minimise it. That only really makes sense in the context of a single application, though.

    When you have separate applications with their own development teams or life-cycles, a reliance on shared code can cause all kinds of complications and slowdowns. It can often be much more productive to develop each separate application independently, even if that leads to duplication. In particular, the aim with the microservices approach is that you should be able to throw away and rewrite any of the microservices without affecting any of the others.

    This is a really nice idea, but just as with any other software development, you need to be aware of the technical debt inherent in the duplication. If a commonly-used external service changes its API or semantics, for example, then every service which uses it needs to be updated to conform. With shared code that might be an easy fix in one place, but with a system which consists of a soup of microservices, the same change may need to be made in slightly different ways in several different places, which is obviously a lot more risky and harder to test.
    This is where code reviews and pair programming come in.

    I've been in academia for a while where, unfortunately, such things are frowned upon as somehow "cheating", but multiple pairs of eyes on the code has saved my bacon many times in "the real world" ;)
    I'm not suggesting that this is an approach which is universally appropriate, but I have found that software can be a lot easier to reason about if dates and datetimes are stored and processed as seconds-from-the-epoch, and only converted to timezones and human-readable formats at the edge of the system, such as for display on a screen.

    This does, of course, raise the issue of when the "epoch" was. In most systems it is 1970-01-01 (the Unix epoch), but I have seen others.
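    A minimal java.time sketch of that "convert at the edge" idea (the zone choices are just examples): internally everything is a long, and a timezone only appears at display time.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class EpochAtTheEdge {

    // Internally, time is just a long: seconds since 1970-01-01T00:00:00Z.
    // Storage, arithmetic, and comparison all happen on this value.
    static long plusOneHour(long epochSeconds) {
        return epochSeconds + 3600;
    }

    // Only at the edge (display) do we pick a timezone and a format.
    static String display(long epochSeconds, ZoneId zone) {
        return Instant.ofEpochSecond(epochSeconds)
                      .atZone(zone)
                      .format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm"));
    }

    public static void main(String[] args) {
        long ts = 0L; // the Unix epoch itself
        System.out.println(display(ts, ZoneId.of("UTC")));        // 1970-01-01 00:00
        System.out.println(display(ts, ZoneId.of("Asia/Tokyo"))); // 1970-01-01 09:00
    }
}
```

    The same stored value renders differently per zone, but there is only ever one canonical timestamp to reason about.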
    I understand that the title of a book should be short and to-the-point, but I would like to dig into the idea of "mistakes" a bit more.

    It seems to me that there are a lot of potential categories of "mistakes" (a.k.a. "bugs"), and the approach to dealing with them should probably be very different.

    On the one hand we have what you might call shortcuts or deliberate mistakes, where something has been done "wrong" for the sake of expediency. The important thing here is that (at some point at least) a developer was aware that a mistake had been made, and so there is at least some chance of a plan to deal with it. This group overlaps somewhat with the trade-offs from the book title, but choosing to create "wrong" code feels qualitatively different from options such as choosing one library over another, or deferring a feature until later.

    On the other hand we have the accidental and unknown mistakes. Both of these have the characteristic that they are to some degree invisible. The difference is that in the accidental case it is an explicit action (or inaction) by a developer which has resulted in the mistake, and in the unknown case it is a corollary or side-effect of an otherwise reasonable choice.  For example, using the wrong variable in a calculation would count as an accidental mistake, but choosing a third-party library which passes the tests but has a security hole might count as an unknown mistake.
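    The wrong-variable case is worth seeing concretely, because it shows why accidental mistakes are invisible: the code below (an invented example) compiles, runs, and even looks plausible at a glance.

```java
// Invented example of an "accidental" mistake: the developer explicitly
// typed the wrong variable, the compiler is happy, and nothing crashes.
public class InvoiceTotals {

    // Accidental mistake: unitPrice is multiplied by itself instead of
    // by quantity. Only checking the numbers reveals the error.
    static double buggyTotal(double unitPrice, int quantity, double taxRate) {
        return unitPrice * unitPrice * (1 + taxRate);
    }

    static double correctTotal(double unitPrice, int quantity, double taxRate) {
        return unitPrice * quantity * (1 + taxRate);
    }

    public static void main(String[] args) {
        // 3 items at 10.00 with 25% tax: should be 37.50
        System.out.println(buggyTotal(10.0, 3, 0.25));   // 125.0 -- silently wrong
        System.out.println(correctTotal(10.0, 3, 0.25)); // 37.5
    }
}
```

    A test with the right expected value catches this immediately; without one, the mistake is only visible to whoever happens to check the arithmetic.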

    I am really intrigued by the idea that there might be a development approach which can cope with such wildly different kinds of problems.