Feb 21, 2011
 

In his book Clean Code, Robert C. Martin writes:

“Comments are not like Schindler’s List. They are not “pure good.” Indeed, comments are, at best, a necessary evil. If our programming languages were expressive enough, or if we had the talent to subtly wield those languages to express our intent, we would not need comments very much — perhaps not at all.”

When I first read that, and the text that followed, I was not happy. I had been teaching for a long time that prolific comments were essential and generally a good thing. The old paradigm held that describing the complete functionality of your code in the right margin was a powerful tool for code quality – and the paradigm worked! I have forgotten enough stories where that style saved the day to fill a book with them. The idea of writing code with as few comments as possible seemed like pure madness!

However, for some time now I have been converted, and I have been teaching a new attitude toward comments and a newer coding style in general. This past weekend I had the opportunity to revisit the subject and compare what I used to know with what I know now.

While repairing a subtle bug in Message Sniffer (our anti-spam software) I re-factored a function that  helps identify where message header directives should be activated based on the actual headers of a message.

https://svn.microneil.com/websvn/diff.php?repname=SNFMulti&path=%2Ftrunk%2Fsnf_HeaderFinder.cpp&rev=34

One of the most obvious differences between the two versions of this code is that the new one has almost no comments compared to the previous version! As it turns out (and as Robert Martin suggests), those comments are not necessary once the code is improved. Here are some of the key things that were done:

  • Logical expressions were broken into pieces and assigned to well named boolean variables.
  • The if/else ladder was replaced with a switch/case.
  • A block of code designed to extract an IP address from a message header was encapsulated into a function of its own.

The Logical Expressions:

Re-factoring the logical expressions was helpful in many ways. Consider the old code:

        if(                                                                     // Are we forcing the message source?
          HeaderDirectiveSource == P.Directive &&                               // If we matched a source directive and
          false == ScanData->FoundSourceIP() &&                                 // the source is not already set and
          ActivatedContexts.end() != ActivatedContexts.find(P.Context)          // and the source context is active then
          ) {                                                                   // we set the source from this header.

There are three decision points involved in this code. Each is described in the comments. Not too bad. However, it can be better. Consider now the new code:

            case HeaderDirectiveSource: {

                bool HeaderDirectiveSourceIPNotSet = (
                  0UL == ScanData->HeaderDirectiveSourceIP()
                );

                bool SourceContextActive = (
                  ActivatedContexts.end() != ActivatedContexts.find(P.Context)
                );

                if(HeaderDirectiveSourceIPNotSet && SourceContextActive) {

The first piece of this logic is resolved by using a switch/case instead of an if/else tree. In the previous version there was a comment that said the code was too complicated for a switch/case. That comment was lying! It may have been true at one time, but once the comment had outlived its usefulness it stuck around spreading misinformation.

This is important. Part of the reason this comment outlived its usefulness is that under the old paradigm there were so many comments that we learned to ignore them most of the time. With the old paradigm we treated comments as a running narrative, with each line of comment attached to a line of code as if the two were “one piece.” As a result we tended to ignore any comments that weren’t part of the code we were modifying or writing. Comments can be much more powerful than that, and I’ll talk about that a little later.
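
To show the shape of that structural change in a self-contained way, here is a minimal sketch. The directive names, types, and output below are hypothetical stand-ins, not the actual SNFMulti code; it only illustrates trading an if/else ladder for a switch with named booleans inside the case.

    #include <iostream>

    // Hypothetical stand-ins for the real directive constants and pattern data.
    enum Directive { SourceDirective, BypassDirective, NoDirective };
    struct Pattern { Directive directive; };

    void handleDirective(const Pattern& P, bool sourceAlreadySet, bool contextIsActive) {
        // Old shape: an if/else ladder that re-tests the directive in every branch:
        //   if(SourceDirective == P.directive && !sourceAlreadySet && contextIsActive) { ... }
        //   else if(BypassDirective == P.directive) { ... }
        //
        // New shape: switch on the directive once, then name the remaining conditions.
        switch(P.directive) {
            case SourceDirective: {
                bool SourceIPNotSet = !sourceAlreadySet;
                bool SourceContextActive = contextIsActive;
                if(SourceIPNotSet && SourceContextActive) {
                    std::cout << "set the message source from this header\n";
                }
                break;
            }
            case BypassDirective:
                std::cout << "bypass this message\n";
                break;
            default:
                break;
        }
    }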

The next two pieces of logic involve testing conditions that are not otherwise obvious from the code. By encapsulating these in well named boolean variables we are able to achieve a number of positive effects:

  • The intention of each test is made plain.
  • During a debugging session the value of that test becomes easily visible.
  • It becomes easier to spot errors in the “arithmetic” performed for each test.
  • The matching comment is no longer required.

So you don’t miss it: a second bug was fixed during this task because of the way the re-factoring clarified this code. The original test to see whether the header directive source had already been set was actually looking at the wrong piece of data!

Finally, the if() that triggers the needed response is now perfectly clear because it says exactly what it means without any special knowledge.

At 0-dark-hundred, starting your second case of Jolt Cola (or Red Bull, or your other favorite poison), we’re all a little less than our best. So it helps if what we’re looking at is as clear as possible.

Even if you’re not pulling an all-nighter, it’s much easier if you don’t have to remember that (0UL == ScanData->HeaderDirectiveSourceIP()) really means the header IP source has not been set. It is much easier if that bit of knowledge has already been spelled out – and quite a bonus that the local variable HeaderDirectiveSourceIPNotSet shows up automatically in your debugger!

Encapsulating Code:

In the previous version the code that did the heavy lifting used to live inside the test that triggered it. Consider the old code:

        if(                                                                     // Are we forcing the message source?
          HeaderDirectiveSource == P.Directive &&                               // If we matched a source directive and
          false == ScanData->FoundSourceIP() &&                                 // the source is not already set and
          ActivatedContexts.end() != ActivatedContexts.find(P.Context)          // and the source context is active then
          ) {                                                                   // we set the source from this header.
            // Extract the IP from the header.

            const string digits = "0123456789";                                 // These are valid digits.
            unsigned int IPStart =
              Header.find_first_of(digits, P.Header.length());                  // Find the first digit in the header.
            if(string::npos == IPStart) return;                                 // If we don't find it we're done.
            const string ipchars = ".0123456789";                               // These are valid IP characters.
            unsigned int IPEnd = Header.find_first_not_of(ipchars, IPStart);    // Find the end of the IP.
            if(string::npos == IPEnd) IPEnd = Header.length();                  // Correct for end of string cases.
            ScanData->HeaderDirectiveSourceIP(                                  // Extract the IP from the header and
              Header.substr(IPStart, (IPEnd - IPStart))                         // expose it to the calling scanner.
            );
            Directives |= P.Directive;                                          // Add the flags to our output.
        }

Again, not too bad. Everything is commented well and there isn’t a lot of code there. However, it is much clearer the new way:

                if(HeaderDirectiveSourceIPNotSet && SourceContextActive) {
                    ScanData->HeaderDirectiveSourceIP(
                      extractIPFromSourceHeader(Header)
                    );
                    Directives |= P.Directive;                                  // Add the flags to our output.
                }

All of the heavy lifting has now been reduced to two lines of code (arguably one). By moving the meat of this operation off to extractIPFromSourceHeader(), this block of code becomes very clear and very simple. If (this is going on) then { do this }. The mechanics of { do this } are part of a different and more focused discussion.
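
The article does not show the body of extractIPFromSourceHeader(), but based on the old inline code above it presumably looks something like the sketch below. This is a reconstruction, not the actual SNFMulti function: the real version most likely also knows where the header name ends (the old code started its search at P.Header.length()), and its exact signature and return type may differ.

    #include <string>
    using std::string;

    // Sketch only: reconstructed from the old inline code shown earlier.
    string extractIPFromSourceHeader(const string& Header) {
        const string digits = "0123456789";                          // These are valid digits.
        size_t IPStart = Header.find_first_of(digits);               // Find the first digit in the header.
        if(string::npos == IPStart) return "";                       // If we don't find one we're done.
        const string ipchars = ".0123456789";                        // These are valid IP characters.
        size_t IPEnd = Header.find_first_not_of(ipchars, IPStart);   // Find the end of the IP.
        if(string::npos == IPEnd) IPEnd = Header.length();           // Correct for end of string cases.
        return Header.substr(IPStart, (IPEnd - IPStart));            // Hand back just the IP text.
    }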

This is helpful not only because it clarifies the code, but also because if you are going to refine and test that part of the code it now lives in its own world, where it can be wrapped with a test function and debugged separately. Not so when it lived deep inside the code of another function.
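
For example (purely illustrative, and using the sketched signature above rather than the real one), the isolated helper can now be exercised on its own without dragging in the rest of the scanner:

    #include <cassert>
    #include <string>
    using std::string;

    string extractIPFromSourceHeader(const string& Header);   // The sketch from above.

    int main() {
        // A typical source header should give up its IP.
        assert("10.20.30.40" == extractIPFromSourceHeader("X-Originating-IP: [10.20.30.40]"));
        // A header with no digits should come back empty.
        assert("" == extractIPFromSourceHeader("X-Originating-IP: unknown"));
        return 0;
    }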

Powerful Comments:

In the old paradigm comments were a good thing, but they were weakened by overuse! I hate to admit having been wrong in the past, but I am proud to admit I am constantly learning and improving.

When comments are treated like a narrative describing the operation of the code there are many benefits, but there are also problems. The two biggest problems with narrating source code this way are that we learn to ignore comments that aren’t attached to the code we’re working on, and, as a result of ignoring comments, we tend to leave some behind to lie to us at a later date.

The new paradigm preserves most of the benefits of the narrative method through better encapsulation and naming practices. This tight binding of intent and code virtually eliminates the biggest problems associated with comments. In addition, the new method gives comments a tremendous boost in their power.

Since there are fewer comments we tend to pay attention to them when we see them. They are bright markers for bits of code that could probably be improved (if they are the narrative type), or they are important messages about stuff we need to know. With so few of them, and with each of them conveying something valuable, we dare not ignore them, and that’s a good thing.

My strong initial reaction to Robert’s treatment of comments was purely emotional — “Don’t take my comments away! I like comments! They are good things!”

I now see that although the sound bite seems to read “Eliminate All Comments!”, the reality is more subtle and powerful, and even friendly to my beloved comments. Using them sparingly and in just the right places makes them more powerful and more essential. I feel good about that. I know that for the past couple of years my new coding style has produced some of the best code I’ve ever written: more reliable, more efficient, more supportable, and more powerful code.

Summary:

If I really wanted to take this to an extreme I could push more encapsulation into this code and remove some redundancy. For example, the multiple instances of “Directives |= P.Directive;” stand out as redundant, and why not completely encapsulate things like ScanData->drillPastOrdinal(P.Ordinal) and so forth into well named explicit functions? Why not convert some of the tests into object methods?
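
For instance (purely illustrative, with hypothetical names, and not code from SNFMulti), the source-context test could move onto an object that owns the activated contexts, so the call site reads as a sentence instead of an STL idiom:

    #include <set>

    // Hypothetical wrapper: before, the caller wrote
    //   ActivatedContexts.end() != ActivatedContexts.find(P.Context)
    // and afterward it could simply write
    //   Contexts.isActive(P.Context)
    class ContextSet {
        std::set<int> ActivatedContexts;                 // The context type here is a guess.
    public:
        void activate(int Context) { ActivatedContexts.insert(Context); }
        bool isActive(int Context) const {
            return ActivatedContexts.end() != ActivatedContexts.find(Context);
        }
    };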

Well, I may do some of those things on a different day. For today the code is much better than it was, it works, it’s clear, and it’s efficient. Since my task was to hunt down and kill a bug I’ll save any additional style improvements for another time. Perhaps I’ll turn that into a teaching exercise for some up-and-coming code dweller in the future!

Here are a few good lessons learned from this experience:

  • It is a good idea to re-factor old code when resolving bugs as long as you don’t overdo it. Applying what you have learned since the last revision is likely to help you find bugs you don’t know exist. Also, incremental improvements like this tend to cascade into improvements on a larger scale ultimately improving code quality on many vectors.
  • If you write code you should read Clean Code – even if you’re not writing Java! There are lots of good ideas in there. In general we should always be looking for new ways to improve what we do. Try things out and keep the parts that work.
  • Don’t cast out crazy ideas without first looking them over and trying them out. Often the best ideas are crazy ones. “You’re entirely bonkers. But I’ll tell you a secret. All the best people are.” – Alice
  • Good coding style does matter, if you do it right.