Do you always follow the 80-character line width? I find 80 characters somewhat too narrow, not to mention that most screens are getting wider nowadays. Furthermore, if you let the IDE format the source code for you, the wrapped lines can be very ugly and hard to read. Hence, I always try to keep lines around 90-100 characters so there's no need for horizontal scrolling.
What is the maximum line width that you guys always use?
That's right. The reason is that screen width isn't the only thing that counts. Occasionally I find it more useful to kill a tree: print out the code, whip out the infamous pink highlighter pen, and start drawing. While 80 characters is fine for punched cards (actually they usually only used 72), 65 characters is 6.5 inches at 10 characters per inch, allowing the usual margins on a printed page.
There's another reason as well. Although my monitor has a 21-inch diagonal, 1600 pixel width, modern IDEs tend to have me working with multiple panes at one time, slicing and dicing the screen into much smaller parts. So keeping the width down is a good idea, even onscreen.
Hard to believe I spent a lot of my career working with a 24-line "green-screen" text-only display. But that's how I know what programs on punched cards are like.
I gave up on 80-character lines about 3-4 years ago, at least in Java. It's a rare occasion my lines *are* that long (almost never over 80), so when I print out code it's a pretty rare occasion that my lines will wrap--and when they do, I don't have a problem reading them.
IMO if things are being properly refactored the need for really long lines is greatly diminished.
Ick. I type abominably. And I've been harbouring a suspicion that there's a predictive word-completion component active on my system. Which often predicts the wrong word.
One thing that tends to lead to really wide listings is many layers of conditionals. I get around this using 2 techniques:
1. Keep methods simple. If code starts to march across the page, refactor it into private sub-methods. I trust the optimizer to ensure that I won't see a performance penalty.
2. Filtering. Rather than IF/IF/IF/IF-style stuff, I put the conditional stuff up top and simply bail if one of the IF-filters fails. A bunch of IF-RETURN/IF-RETURN/IF-RETURN sequences doesn't indent. And while this may not be ideologically pure Structured Programming, if it's done in a predictable, standardized way, I think it may actually be more maintainable than keeping a stack of conditions in one's head. It's certainly a whole lot more readable, and you're less likely to get killed by a misplaced brace. It is, however, critical that all the filters be done before any processing logic. Mix-n-match is a recipe for disaster.
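The filtering style above can be sketched in Java. This is a made-up illustration, not code from the thread; the method name, the rejection messages, and the conditions are all invented for the sake of the example:

```java
// Hypothetical example -- "process", its parameters, and the messages
// are invented to illustrate the IF-RETURN filtering style.
public class FilterStyle {
    // Each disqualifying condition bails out up top, before any
    // processing logic, so the body below stays at one indent level
    // instead of marching across the page in nested IFs.
    static String process(String item, int quantity) {
        if (item == null) return "rejected: no item";
        if (item.isEmpty()) return "rejected: blank item";
        if (quantity <= 0) return "rejected: bad quantity";
        // All filters have passed; real work starts here, un-indented.
        return "shipped " + quantity + " x " + item;
    }

    public static void main(String[] args) {
        System.out.println(process(null, 5));     // rejected: no item
        System.out.println(process("widget", 3)); // shipped 3 x widget
    }
}
```

Note that every filter sits before the first line of processing logic, per the warning above: mixing filters into the middle of the work is where this style goes wrong.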
I think the "single return point" convention is finally dying--at one point it made a lot more sense; in garbage-collected languages it's not (necessarily) as compelling.
It's always far clearer to bail out early on null than to wrap the rest of the method in a not-null check. There's less cognitive overhead: with the guard clause, if you're trying to figure out the execution path for a null foo, you're done on the second line. In the wrapped version you have to move past the !null check in order to find out how the story ends. Boo hiss.
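The inline code samples appear to have been lost from this post; a minimal sketch of the two shapes being compared might look like the following (foo, the return strings, and both method names are placeholders I've invented):

```java
// Hypothetical reconstruction of the two styles being compared.
public class GuardClauseDemo {
    // Guard-clause style: the null case is settled on the first line.
    static String withGuard(String foo) {
        if (foo == null) return "nothing to do";
        return "processing " + foo;
    }

    // Wrapped style: the null outcome only becomes clear after
    // reading past the whole !null block.
    static String wrapped(String foo) {
        String result = "nothing to do";
        if (foo != null) {
            result = "processing " + foo;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(withGuard(null)); // nothing to do
        System.out.println(wrapped("bar"));  // processing bar
    }
}
```

Both versions compute the same thing; the difference is purely how soon the reader can stop tracing the null path.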
If methods are appropriately short it can be pretty straightforward either way, but I tend to write really, really short methods.
It goes back further than that. Some dialects of FORTRAN supported both multiple entry points and multiple exit points. It was widely considered to be a nightmare to have both in the same module.
Structured Programming, after all, was developed to, er, "structure" code so that more time could be spent figuring out what something did as opposed to how it did it. It took a wild collection of spaghetti, with gotos, entries, returns et al., and reduced it to a set of simple, consistent constructs. People tend to forget that this was quite a radical concept back then - after all, spaghetti code was more "efficient". An assertion that was open to argument even then. But structured code is easier for a machine to optimize, so no one argues anymore.
It's not so much that I adopted a "rules are made to be broken" philosophy, as that I added a rule of my own.
Digressing a second level, however, I'd like to observe that an academic of some note, whose name, alas, I forget, claimed that the concept of "not" is in and of itself troublesome to a lot of people, to the degree that he'd actually arranged special instruction on the subject. Myself, I studied the Propositional Calculus and worked with IBM mainframe JCL (If "condition" is NOT true, then DO NOT execute this program). Probably explains why when I first tried to make my own pretzels they came out so well.