Dec 20, 2019

It seemed obvious enough. I mean, I’ve been using these for as long as I can remember, but the other day I used the term SOP and somebody said “what’s that?”

An SOP, or “Standard Operating Procedure,” is a collaborative tool I use in my organizations to build intellectual equity. Intellectual equity is what you get when you capture domain knowledge from wetware and make it persistent.

In order to better explain this, and make it more obvious and shareable, I offer you this SOP on SOPs. Enjoy!

This is an SOP about making SOPs.
It describes what an SOP looks like by example.
SOPs are files describing Standard Operating Procedures.
In general, the first thing in an SOP is a description of the SOP
including some text to make it easy to find - kind of like this one.

[ ] Create an SOP

  An SOP is a kind of check list. It should be just detailed enough
  that an individual reasonably competent at a given task can use the
  SOP to do the task efficiently without missing any important
  steps.

  [ ] Store an SOP as a plain text file for use on any platform.
  [ ] Be mindful of security concerns.
    [ ] Try to make SOPs generic without making them obscure.
    [ ] Be sure they are stored and accessed appropriately.
      [ ] Some SOPs may not be appropriate for some groups.
      [ ] Some SOPs might necessarily contain sensitive information.

  [ ] Use square brackets to describe each step.
    [ ] Indent to describe more specific (sub) steps such as below...
      [ ] View the file from the command line
        [ ] cat theFile

        Note: Sometimes you might want to make a note.
        And, sometimes a step might be an example of what to type or
        some other instruction to manipulate an interface. As long as
        it's clear to the intended reader it's good but keep in mind
        that this is a check list so each instruction should be
        a single step or should be broken down into smaller pieces.

      [ ] When leaving an indented step, skip a line for clarity.
      < > If a step is conditional,
        [ ] use angle brackets and list the condition.

  [ ] Use parentheses (round brackets) to indicate exclusive options.
    Note: Parentheses are reminiscent of radio buttons.

    ( ) You can do this thing or
    ( ) You can do this other thing or
    ( ) You can choose this third thing but only one at this level.

[ ] Use an SOP (best practice)
  [ ] Make a copy of the SOP.
    [ ] Save the copy with a unique name that fits your workflow.
      [ ] Be consistent.

  [ ] Mark up the copy as you go through the process.
    [ ] Check off each step as you proceed.
      Note: This helps you (or someone) to know where you are in the
        process if you get interrupted. That never happens, right?!

      [ ] Brackets with a space are not done yet.
      [ ] Use a * to mark an unfinished step you are working on.
      [ ] Use an x to mark a step that is completed.
      [ ] Use a - to indicate a step you are intentionally skipping.
        Note: See the marked-up example at the end of this SOP.
      [ ] Add notes when you do something unexpected.
        < > If skipping a step - why did you skip it?
        < > If doing steps out of order.
        < > If you run into a problem and have to work around it.
        < > If you think the SOP should be changed, why?
        < > If you use different marks explain them at the top.
          [ ] Make a legend for your marks.
          [ ] Collaborate with the team so everybody will understand.

    [ ] Add notes to include any important specific data.
      [ ] User accounts or equipment references.
      [ ] Parameters about this specific case.
      [ ] Basically, any variable that "plugs in" to the process.
      [ ] BE CAUTIOUS of anything that might be sensitive data.
        [ ] Avoid putting passwords into SOPs.
        [ ] Be sure they are stored and shared appropriately.
          [ ] Some SOPs may require more security than others.
          [ ] Some SOPs may be relevant only to special groups.

[ ] Use SOPs to capture intellectual equity!
  [ ] Use SOPs for onboarding and other training.
  [ ] Update your collection of SOPs over time as you learn.
    [ ] Template/Master SOPs describe how things should be done.
      [ ] Add SOPs when you learn something new.
      [ ] Modify SOPs when you find a better process.
      [ ] Delete SOPs when redundant, useless, or otherwise replaced.

    [ ] Completed SOPs are a great resource.
      [ ] Use them for after action reports.
      [ ] Use them for research.
      [ ] Use them for auditing.
      [ ] Use them to track performance.
      [ ] Use them as training examples.

  [ ] Collaborate with teammates on any changes. M2B!
    [ ] Referring to SOPs is a great way to discuss changes.
    [ ] Referring to SOPs keeps teams “on the same page.”
    [ ] Referring to SOPs helps to develop a common language.
    [ ] Create SOPs as a planning tool - then work the plan.

  [ ] Keep SOPs accessible in some central place.
    [ ] Git is a great way to keep, share, and maintain SOPs.
    [ ] An easily searchable repository is great (like Gitea?!)
      [ ] Be mindful of security concerns.
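
For example, here is a short, hypothetical SOP fragment marked up
mid-process, showing the marks described above:

[x] Back up the database.
[*] Apply the update.
  [x] Stop the service.
  [*] Run the installer.
  [ ] Restart the service.
[-] Notify the help desk.
  Note: Skipped - after-hours run, nobody on shift.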

Aug 03, 2016

I recently upgraded my primary laptop from Ubuntu 14.04 to 16.04 and suddenly discovered that when I ssh into some of my other boxes all the colors are gone!

As it turns out, a few of the boxes I manage are running older OS versions that became confused when they saw my terminal was xterm-256color instead of just xterm-color.

Since it took me a while to get to the bottom of this I thought I’d share the answer to make things easier on the next few who stumble into the spider web. First: Don’t Panic! — always good advice, unless you’re 9 and then you’ll hear that “Sometimes, fear is the appropriate response…” but I digress.

So, what happened is that my new box is using a terminal type that isn’t recognized by some of the older boxes. When they see xterm-256color and don’t recognize it they run to safety and presume color isn’t allowed.

To solve this, simply set the TERM environment variable to the older xterm-color prior to launching ssh and that will be passed on to the target host.

At the command line:

TERM=xterm-color ssh yourself@host.example.com
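
If you connect to the same hosts often, you might wrap the fix in an alias so you don’t have to retype it (a sketch for your ~/.bashrc; the user and host are placeholders):

alias oldssh='TERM=xterm-color ssh yourself@host.example.com'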

If you’re telling KeePass how to do a URL Override for ssh then use the following incantation:

cmd://gnome-terminal --title={URL} -x /bin/bash -c 'TERM=xterm-color ssh {USERNAME}@{URL:RMVSCM}'

The colors come back and everything is happy again.

Oh, and one more thing before I forget. If you’re like me and sometimes have to wait a long time for commands to finish because you’re dealing with, say, hundreds of gigabytes of data if not terabytes, then you might find your ssh sessions time out while you’re waiting and that might be ++undesirable.

To fix that, here’s another little tweak that you might add to your .ssh/config file and then forget you ever did it… so this blog post might help you remember.

Host *
    # Send a keep-alive probe every 120 seconds so idle sessions stay open.
    ServerAliveInterval 120

That will save you some frustration 😉

Nov 09, 2012

I recently read a few posts that suggest any computer language that requires an IDE is inherently flawed. If I understood the argument correctly the point was that all of the extra tools typically found in IDEs for languages like C++ and Java are really crutches that help the developer cope with the language’s failings.

On the one hand I suppose I can see that point. If it weren’t for all of the extra nudging and prompting provided by these tools then coding a Java application of any complexity would become much more difficult. The same could be said for serious C++ applications; and certainly any application with mixed environments and multiple developers.

On the other hand these languages are at the core of most heavy-lifting in software development and the feature list for most popular IDEs continues to grow. There must be a reason for that. The languages that can be easily managed with an ordinary editor (perhaps one with good syntax highlighting) are typically not a good fit for large scale projects, and if they were, a more powerful environment would be a must for other reasons.

This got me thinking that perhaps all of this extra complexity is part of the ongoing evolution of software development. Perhaps the complexity we are observing now is a temporary evil that will eventually give way to some truly profound advancements in software development. Languages with simpler constructs and syntax are more likely throw-backs to an earlier paradigm while the more complex languages are likely straining against the edges of what is currently possible.

The programming languages we use today are still rooted in the early days of computing when we used to literally hand-wire our systems to perform a particular task. In fact the term “bug” goes all the way back to actual insects that would occasionally infest the circuitry of these machines and cause them to malfunction. Once upon a time debugging really did mean what it sounds like!

As the hardware of computing became more powerful we were able to replace physical wiring with machine-code that could virtually rewire the computing hardware on the fly. This is still at the heart of computing. Even the most sophisticated software in use today eventually breaks down into a handful of bits that flip switches and cause one logic circuit to connect to another in some useful sequence.

In spite of the basic task remaining the same, software development has improved significantly over time. Machine-code was better than wires, but it too was still very complicated and hardware specific. Remembering op codes and their numeric translations is challenging for wetware (brains) and in any case isn’t portable from one type of machine to another. So machine-code eventually evolved into assembly language, which allowed programmers to use more familiar verbs and register names to describe what they wanted to do. For example you can probably guess that “add ax, bx” instructs the hardware to add a couple of numbers together and that “ax” and “bx” are where those numbers can be found. Even better than that, assembly language offered some portability between one chunk of hardware and another because the assembler (a hardware specific chunk of software) would keep track of the specific op codes so that software developers could more easily reuse and share chunks of code.

From there we evolved to languages like C that were just barely more sophisticated than assembly language. In the beginning, C was slightly more than a handy syntax that could be expanded into assembly language in an almost cut-and-paste fashion. It was not uncommon to actually use assembly language inside of C programs when you wanted to do something specific with your hardware and you didn’t have a ready-made library for it.

That said, the C language and others like it did give us more distance from the hardware and allowed us to think about software more abstractly. We were better able to concentrate on algorithms and concepts once we loosened our grip on the wiring under the covers.

Modern languages have come a long way from those days but essentially the same kind of translation is happening. It’s just that a lot more is being done automatically and that means that a lot more of the decisions are being made by other people, by way of software tools and libraries, or by the machinery itself, by way of memory managers, signal processors, and other specialized devices.

This advancement has given us the ability to create software that is profoundly complex – sometimes unintentionally! Our software development languages and development tools have become more sophisticated in order to help us cope with this complexity and the lure of creating ever more powerful software.

Still, fundamentally, we are stuck in the dark ages of software development. We’re still working from a paradigm where we tell the machine what to do and the machine does it. On some level we are still hand-wiring our machines. We hope that we can get the instructions right and that those instructions will accomplish what we have in mind but we really don’t have a lot of help with those tasks. We write code, we give it to the machine, we watch what the machine does, we make adjustments, and then we start again. The basic cycle has sped up quite a bit but the process of software development is still a very one-way endeavor.

What we are seeing now in complex IDEs could be a foreshadowing of the next revolution in software development where the machines will participate on a more equal footing in the process. The future is coming, but our past is holding us back. Right now we make educated guesses about what the machine will do with our software and our IDEs try to point out obvious errors and give us hints that help our memory along the way. In fact they are straining at the edges of the envelope to do this and the result is a kind of information overload.

The problem has become so bad that switching from one IDE to another is a lot like changing countries. Even if the underlying language is the same, everything about how that language is used can be different. It is almost as if we’ve ended up back in the machine-code days where platform specific knowledge was a requirement. The difference is that instead of knowing how to rewire a chunk of hardware we must know how to rewire our tool stack.

So what would happen if we took the next step forward and let go of the previous paradigm completely? Instead of holding on to the idea that we’re rewiring the computer to do our bidding and that we are therefore completely responsible for all of the associated details, we could collaborate with the computer in a way that allows us to bring our relative strengths together and achieve a superior result.

Wetware is good at creativity, abstraction, and the kind of fuzzy thinking that goes into solving new problems and exploring new possibilities. Hardware is good at doing arithmetic, keeping track of huge amounts of data, and working very quickly. This seems like two sides of a great team because each partner brings something that the other is lacking. The trick is to create an environment where the two can collaborate efficiently.

Working with a collaborative IDE would be more like having a conversation than editing code. The developer would describe what they are trying to do using whatever syntax they understand best for that task and the machine would provide a real-time simulation of the result. Along the way the machine would provide recommendations about the solution they are developing through syntax highlighting and co-editing, hints about known algorithms that might be useful, and simulations of potential solutions.

The new paradigm takes the auto-complete, refactoring, and object browser features built into current IDEs and extends that model to reach beyond the code base for any given project. If the machine understands that you are building a particular kind of algorithm then it might suggest a working solution from the current state-of-the-art. This suggestion would be custom fitted to the code you are describing and presented as a complete simulation along with an analysis (if you want it) of the benefits. If the machine is unsure of what you are trying to accomplish then it would ask you questions about the project using a combination of natural language and the syntax of the code you are using. It would be very much like working side by side with an expert developer who has the entire world of computer science at top of mind.

The end result of this kind of interaction would be a kind of intelligent, self-documenting software that understands itself on a very deep level. Each part of the code base would carry with it a complete simulation of how the code should operate so that it can be tested automatically on various target platforms and so that new modifications can be regression tested during the development process.

The software would be _almost_ completely proven by the time it was written because unit tests would have been performed in real-time as various chunks of code were developed. I say, _almost_ because there are always limits to how completely any system can be tested and because there are always unknowns and unintended consequences when new software is deployed.

Intelligent software like this would be able to explain the design choices that were made along the way so that new developers could quickly get a full understanding of the intentions of the previous developers without having to hunt them down, embark on deep research efforts, or make wild guesses.

Intelligent software could also update and test itself as improved algorithms become available, port itself to new platforms automatically as needed, and provide well documented solutions to new projects when parts of the code base are applicable.

So, are strong IDEs a sign of weak languages? I think not. Instead, they are a sign that our current software development paradigm is straining at the edges as we reach toward the next revolution in computing: Intelligent Development Environments.

Aug 25, 2012

I needed to create MD5 hashes to populate a password database for Apache. This seemed like a very simple thing. So, when I wanted an MD5 hash in hex for my JSP app I expected to find a utility ready and waiting inside Java. No such luck. No problem, I thought — I’ll just “google” it.

I was surprised to find that there are lots of half-baked solutions out there posted on various discussion forums, but none of them were solid, simple, and self-explanatory; or at least they didn’t look like code I would be comfortable with. So I decided to write it up and make it easy to find just in case someone else has the same experience.

My solution breaks out into three tiny functions that might be re-used in lots of places.

import java.security.*;

String HexForByte(byte b) {
    // Mask with 0xff so negative bytes don't sign-extend into "ffffff..".
    String Hex = Integer.toHexString((int) b & 0xff);
    boolean hasTwoDigits = (2 == Hex.length());
    if(hasTwoDigits) return Hex;
    else return "0" + Hex; // Pad single-digit values to two digits.
}

String HexForBytes(byte[] bytes) {
    StringBuilder sb = new StringBuilder();
    for(byte b : bytes) sb.append(HexForByte(b));
    return sb.toString();
}

String HexMD5ForString(String text) throws Exception {
    MessageDigest md5 = MessageDigest.getInstance("MD5");
    byte[] digest = md5.digest(text.getBytes("UTF-8")); // Explicit charset for portable results.
    return HexForBytes(digest);
}

HexForByte(byte b) gives you a two-digit hex string for any byte. This is important because Integer.toHexString() will give you only one digit if the input is less than 16. That can be a real problem if you are building hash strings. Another tricky bit in this function strips the sign from the byte before converting it to an integer. In Java, every kind of number is signed so we have to watch out for that when making conversions.

HexForBytes(byte[] bytes) gives you a hex string for any array of bytes. Each byte will be correctly represented by precisely two hex digits. No more, no less.

Wrapping it all up, HexMD5ForString(String text) gives you an MD5 digest in hex of any string. According to the Apache documentation this is what I will want to put into the database so that mod_auth_digest can authenticate users of my web app. To see what started all of this look here: http://httpd.apache.org/docs/2.4/misc/password_encryptions.html

With the code above in place I can now do something like:

HexMD5ForString( user + ":" + realm + ":" + password );

From the look of it, the Java code on the Apache page will work, and it may be faster, but done my way the code is less obscure and yields a few extra utility functions that can be useful in other places.
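
For convenience, you could wrap the whole thing in a hypothetical helper (the name and wrapper are mine, not from the Apache docs) that builds a complete user:realm:hash line:

String HtdigestLine(String user, String realm, String password) throws Exception {
    // The stored value is MD5(user:realm:password) in hex.
    return user + ":" + realm + ":" + HexMD5ForString(user + ":" + realm + ":" + password);
}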

 

Feb 21, 2011

In Robert C. Martin’s book Clean Code he writes:

“Comments are not like Schindler’s List. They are not “pure good.” Indeed, comments are, at best, a necessary evil. If our programming languages were expressive enough, or if we had the talent to subtly wield those languages to express our intent, we would not need comments very much — perhaps not at all.”

When I first read that, and the text that followed, I was not happy. I had been teaching for a long time that prolific comments were essential and generally a good thing. The old paradigm held that describing the complete functionality of your code in the right margin was a powerful tool for code quality – and the paradigm worked! I have forgotten enough stories where that style saved the day that I could fill a book with them. The idea of writing code with as few comments as possible seemed pure madness!

However, for some time now I have been converted and have been teaching a new attitude toward comments and a newer coding style in general. This past weekend I had opportunity to revisit this again and compare what I used to know with what I know now.

While repairing a subtle bug in Message Sniffer (our anti-spam software) I re-factored a function that helps identify where message header directives should be activated based on the actual headers of a message.

https://svn.microneil.com/websvn/diff.php?repname=SNFMulti&path=%2Ftrunk%2Fsnf_HeaderFinder.cpp&rev=34

One of the most obvious differences between the two versions of this code is that the new one has almost no comments compared to the previous version! As it turns out (and as suggested by Robert Martin) those comments are not necessary once the code is improved. Here are some of the key things that were done:

  • Logical expressions were broken into pieces and assigned to well named boolean variables.
  • The if/else ladder was replaced with a switch/case.
  • A block of code designed to extract an IP address from a message header was encapsulated into a function of its own.

The Logical Expressions:

Re-factoring the logical expressions was helpful in many ways. Consider the old code:

        if(                                                                     // Are we forcing the message source?
          HeaderDirectiveSource == P.Directive &&                               // If we matched a source directive and
          false == ScanData->FoundSourceIP() &&                                 // the source is not already set and
          ActivatedContexts.end() != ActivatedContexts.find(P.Context)          // and the source context is active then
          ) {                                                                   // we set the source from this header.

There are three decision points involved in this code. Each is described in the comments. Not too bad. However it can be better. Consider now the new code:

            case HeaderDirectiveSource: {

                bool HeaderDirectiveSourceIPNotSet = (
                  0UL == ScanData->HeaderDirectiveSourceIP()
                );

                bool SourceContextActive = (
                  ActivatedContexts.end() != ActivatedContexts.find(P.Context)
                );

                if(HeaderDirectiveSourceIPNotSet && SourceContextActive) {

The first piece of this logic is resolved by using a switch/case instead of an if/else tree. In the previous version there was a comment that said the code was too complicated for a switch/case. That comment was lying! It may have been true at one time, but once the comment had outlived its usefulness it stuck around spreading misinformation.

This is important. Part of the reason this comment outlived its usefulness is because with the old paradigm there are so many comments that we learned to ignore them most of the time. With the old paradigm we treated comments as a running narrative with each line of comment attached to a line of code as if the two were “one piece”. As a result we tended to ignore any comments that weren’t part of code we were modifying or writing. Comments can be much more powerful than that and I’ll talk about that a little later.

The next two pieces of logic involve testing conditions that are not otherwise obvious from the code. By encapsulating these in well named boolean variables we are able to achieve a number of positive effects:

  • The intention of each test is made plain.
  • During a debugging session the value of that test becomes easily visible.
  • It becomes easier to spot errors in the “arithmetic” performed for each test.
  • The matching comment is no longer required.

So you don’t miss it, there was a second bug fixed during this task because of the way the re-factoring clarified this code. The original test to see if the header directive source had already been set was actually looking at the wrong piece of data!

Finally, the if() that triggers the needed response is now perfectly clear because it says exactly what it means without any special knowledge.

At 0-dark-hundred, starting your second case of Jolt Cola (or Red Bull, or your other favorite poison) we’re all a little less than our best. So, it helps if what we’re looking at is as clear as possible.

Even if you’re not pulling an all-nighter it’s much easier if you don’t have to remember that (0UL == ScanData->HeaderDirectiveSourceIP()) really means the header IP source has not been set. Much easier if that bit of knowledge has already been spelled out, and quite a bonus that the local variable HeaderDirectiveSourceIPNotSet shows up automatically in your debugger!

Encapsulating Code:

In the previous version the code that did the heavy lifting used to live inside the test that triggered it. Consider the old code:

        if(                                                                     // Are we forcing the message source?
          HeaderDirectiveSource == P.Directive &&                               // If we matched a source directive and
          false == ScanData->FoundSourceIP() &&                                 // the source is not already set and
          ActivatedContexts.end() != ActivatedContexts.find(P.Context)          // and the source context is active then
          ) {                                                                   // we set the source from this header.
            // Extract the IP from the header.

            const string digits = "0123456789";                                 // These are valid digits.
            unsigned int IPStart =
              Header.find_first_of(digits, P.Header.length());                  // Find the first digit in the header.
            if(string::npos == IPStart) return;                                 // If we don't find it we're done.
            const string ipchars = ".0123456789";                               // These are valid IP characters.
            unsigned int IPEnd = Header.find_first_not_of(ipchars, IPStart);    // Find the end of the IP.
            if(string::npos == IPEnd) IPEnd = Header.length();                  // Correct for end of string cases.
            ScanData->HeaderDirectiveSourceIP(                                  // Extract the IP from the header and
              Header.substr(IPStart, (IPEnd - IPStart))                         // expose it to the calling scanner.
            );
            Directives |= P.Directive;                                          // Add the flags to our output.
        }

Again, not too bad. Everything is commented well and there isn’t a lot of code there. However it is much clearer the new way:

                if(HeaderDirectiveSourceIPNotSet && SourceContextActive) {
                    ScanData->HeaderDirectiveSourceIP(
                      extractIPFromSourceHeader(Header)
                    );
                    Directives |= P.Directive;                                  // Add the flags to our output.
                }

All of the heavy lifting has now been reduced to two lines of code (arguably one). By moving the meat of this operation off to extractIPFromSourceHeader() this block of code becomes very clear and very simple. If(this is going on) then { do this }. The mechanics of { do this } are part of a different and more focused discussion.

This is helpful not only because it clarifies the code, but also because if you are going to refine and test that part of the code it now lives in it’s own world where it can be wrapped with a test function and debugged separately. Not so when it lived deep inside the code of another function.
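
Here’s roughly what the extracted function might look like, reconstructed from the old inline code above (a simplified sketch for illustration; the real version lives in snf_HeaderFinder.cpp and may differ):

#include <string>
using std::string;

string extractIPFromSourceHeader(const string& Header) {
    const string digits = "0123456789";                         // Valid digits.
    size_t IPStart = Header.find_first_of(digits);              // First digit in the header.
    if (string::npos == IPStart) return "";                     // No digits? Nothing to extract.
    const string ipchars = ".0123456789";                       // Valid IP characters.
    size_t IPEnd = Header.find_first_not_of(ipchars, IPStart);  // Find the end of the IP.
    if (string::npos == IPEnd) IPEnd = Header.length();         // Correct for end of string cases.
    return Header.substr(IPStart, IPEnd - IPStart);             // Just the IP text.
}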

Powerful Comments:

In the old paradigm comments were a good thing, but they were weakened by overuse! I hate to admit being wrong in the past, but I am proud to admit I am constantly learning and improving.

When comments are treated like a narrative describing the operation of the code there are many benefits, but there are also problems. The two biggest problems with narrating source code like this are that we learn to ignore comments that aren’t attached to code we’re working on and as a result of ignoring comments we tend to leave some behind to lie to us at a later date.

The new paradigm has most of the benefits of the narrative method implemented in better encapsulation and naming practices. This tight binding of intent and code virtually eliminates the biggest problems associated with comments. In addition the new method gives comments a tremendous boost in their power.

Since there are fewer comments we tend to pay attention to them when we see them. They are bright markers for bits of code that could probably be improved (if they are the narrative type); or they are important messages about stuff we need to know. With so few of them and with each of them conveying something valuable we dare not ignore them, and that’s a good thing.

My strong initial reaction to Robert’s treatment of comments was purely emotional — “Don’t take my comments away! I like comments! They are good things!”

I now see that although the sound bite seems to read “Eliminate All Comments!”, the reality is more subtle and powerful, and even friendly to my beloved comments. Using them sparingly and in just the right places makes them more powerful and more essential. I feel good about that. I know that for the past couple of years my new coding style has produced some of the best code I’ve ever written. More reliable, efficient, supportable, and more powerful code.

Summary:

If I really wanted to take this to an extreme I could push more encapsulation into this code and remove some redundancy. For example, multiple instances of “Directives |= P.Directive;” stand out as redundant, and why not completely encapsulate things like ScanData->drillPastOrdinal(P.Ordinal) and so forth into well named explicit functions? Why not convert some of the tests into object methods?

Well, I may do some of those things on a different day. For today the code is much better than it was, it works, it’s clear, and it’s efficient. Since my task was to hunt down and kill a bug I’ll save any additional style improvements for another time. Perhaps I’ll turn that into a teaching exercise for some up-and-coming code dweller in the future!

Here are a few good lessons learned from this experience:

  • It is a good idea to re-factor old code when resolving bugs as long as you don’t overdo it. Applying what you have learned since the last revision is likely to help you find bugs you don’t know exist. Also, incremental improvements like this tend to cascade into improvements on a larger scale ultimately improving code quality on many vectors.
  • If you write code you should read Clean Code – even if you’re not writing Java! There are lots of good ideas in there. In general we should always be looking for new ways to improve what we do. Try things out and keep the parts that work.
  • Don’t cast out crazy ideas without first looking them over and trying them out. Often the best ideas are crazy ones. “You’re entirely bonkers. But I’ll tell you a secret. All the best people are.” – Alice
  • Good coding style does matter, if you do it right.

Apr 28, 2010

No, I’m not kidding…

Race Conditions are evil, right?! When you have more than one thread racing to use a piece of shared data and that data is not protected by some kind of locking mechanism you can get intermittent nonsensical errors that cause hair loss, weight gain, and caffeine addiction.

The facts of life:

Consider a = a + b; Simple enough and very common. On the metal this works out to something like:

Step 1: Look at a and keep it in mind (put it in a register).
Step 2: Look at b and keep it in mind (put it in a different register).
Step 3: Add a and b together (put that in a register).
Step 4: Write down the new value of a (put the sum in memory).

Still pretty simple. Now suppose two threads are doing it without protection. There is no mutex or other locking mechanism protecting the value of a.

Most of the time one thread will get there first and finish first. The other thread comes later and nobody is surprised with the results. But suppose both threads get there at the same time:

Say the value of a starts off at 4 and the value of b is 2.

Thread 1 reads a (step 1).
Thread 2 reads a (step 1).
Thread 1 reads b (step 2).
Thread 2 reads b (step 2).
Thread 1 adds a and b (step 3).
Thread 2 adds a and b (step 3).
Thread 1 puts the result into a (step 4).
Thread 2 puts the result into a (step 4).
Now a has the value 6.

But a should be 8 because the process happened twice! As a result your program doesn’t work properly; your customer is frustrated; you pull out your hair trying to figure out why the computer can’t add sometimes; you become intimately familiar with the pizza delivery guy; and you’re up all night pumping caffeine.
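
You can watch the lost update happen with a few lines of C++ (a contrived sketch using std::thread, which arrived after this post was written; any threading library shows the same effect):

#include <iostream>
#include <thread>

int a = 0;  // Shared, unprotected data.

void addLoop(int b) {
    for (int i = 0; i < 1000000; ++i)
        a = a + b;  // The unprotected read-modify-write from the steps above.
}

int main() {
    std::thread one(addLoop, 1);
    std::thread two(addLoop, 1);
    one.join();
    two.join();
    std::cout << "a = " << a << std::endl;  // Expect 2000000; lost updates usually leave it short.
    return 0;
}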

This is why we are taught never to share data without protection. Most of the time there may be no consequences (one thread starts and finishes before the other). But occasionally the two threads will come together at the same time and change your life. It gets even stranger if you have 3 or more involved!

The trouble is that protection is complicated: It interrupts the flow of the program; it slows things down; and sometimes you just don’t think about it when you need to.

The story of RTSNF and MPPE:

All of this becomes critical when you’re building a database. I’m currently in the midst of adapting MicroNeil’s Multi-Path Pattern Engine (MPPE) technology for use in the Real-Time Message Sniffer engine (RTSNF).

RTSNF will allow us to scan messages even faster than the current engine, which is based on MicroNeil’s folded token matrix technology. RTSNF will also have a smaller memory footprint (which will please OEMs and appliance developers). But the most interesting feature is that it will allow us to distribute new rules to all active SNF nodes within 90 seconds of their creation.

This means that most of the time we will be able to block new spam and virus outbreaks and their variants on all of our customers’ systems within 1 minute of when we see a new piece of spam or malware in our traps.

It also means that we have to be able to make real-time incremental changes to each rulebase without slowing down the message scanning process.

How do you do such a thing? You break the rules!

You’re saying race conditions aren’t evil?? You’re MAD!
(Yes, I am. It says so in my blog.)

Updating a database without causing corruption usually requires locking mechanisms that prevent partially updated data from being read by one thread while the data is being changed by another. If you don’t use a locking mechanism then race conditions virtually guarantee you will have unexpected (corrupted) results.

In the case of MPPE and RTSNF we get around this by carefully mapping out all of the possible states that can occur from race conditions at a very low level. Then we structure our data and our read and write processes so that they take advantage of the conditions we have mapped without producing errors.

This eliminates the “unintended” part of the consequences and breaks the apparent link between race conditions and certain disaster. The result is that these engines never need to slow down to make an update. Pattern scans can continue at full speed on multiple threads while new updates are in progress.

Here is a simplified example:

Consider a string of symbols: ABCDEFG

Now imagine that each symbol is a kind of pointer that stands in for other data — such as a record in a database or a field in a record. We call this symbolic decomposition. So, for example, the structure ABCDEFG might represent an address in a contact list. The symbol A might represent the Name, B the box number, C the street, D the city, etc… Somewhere else there is a symbol that represents the entire structure ABCDEFG, and so on.

We want to update the record that is represented by D without first locking the data and stopping any threads that might read that data.

Each of these symbols is just a number and so it can be manipulated atomically. When we tell the processor to change D to Q there is no way that processor or any other will see something in-between D and Q. Each will only see one or the other. With almost no exceptions you can count on this being the case when you are storing or retrieving a value that is equal in length to the processor’s word size or shorter. Some processors (and libraries) provide other atomic operations also — but for our purposes we want to use a mechanism that is virtually guaranteed to be ubiquitous and available right down to the machine code if we need it.

The trick is that without protection we can’t be sure when one thread will read any particular symbol in the context of when that symbol might be changed. So we have two possible outcomes when we change D to Q for each thread that might be reading that symbol. Either the reading thread will see the original D or it will see the updated Q.

This lack of synchronization means that some of the reading threads may get old results for some period of time while others get new results. That’s generally a bad thing at higher levels of abstraction such as when we are working with serialized transactions. However, we are working at a very low level where our application doesn’t require serialization. Note also that if we did need to support serialization at a higher level we could do that by leveraging these techniques to build constructs that satisfy those requirements.

So we’ve talked about using symbolic decomposition to represent our data. Using symbolic decomposition we can make changes using ubiquitous atomic operations (like writing or reading a single word of memory) and we can predict the outcomes of the race conditions we allow. This means we can structure our application to account for these conditions without error and therefore we can skip conventional data protection mechanisms.

There is one more piece to this technique that is important and might not be obvious so I’ll mention it quickly.

In order to leverage this technique you must also be very careful how you structure your updates. The updates must remain invisible until they are complete. Only the thread making the update should know anything about the change until it’s complete and ready to be posted. So, for example, if we want to change the city in our address that operation must be done this way:

The symbols ABCDEFG represent an address record in our database.
D represents a specific city name (a string field) in that record.

In order to change the city we first create a new string in empty space and represent that with some new symbol.

Q => “New City”

When we have allocated the new string, loaded the data into it, and acquired the new symbol we can swap it into our address record.

ABCDEFG becomes ABCQEFG

The entire creation of Q, no matter how complex that operation may be, MUST be completed before we make the higher level change. That’s a key ingredient to this secret sauce!
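
In modern C++ terms the whole recipe boils down to building the new value privately and publishing it with one atomic pointer swap. Here’s a minimal sketch (my illustration, not MPPE code; std::atomic postdates this post):

#include <atomic>
#include <iostream>
#include <string>

// The "symbol" D: an atomic pointer to the city string in the record.
std::atomic<const std::string*> City{new std::string("Old City")};

void reader() {
    // A reader sees either the old string or the new one, never a mix.
    std::cout << *City.load() << std::endl;
}

void updater() {
    // Build the replacement completely before publishing it (this is Q).
    const std::string* Q = new std::string("New City");
    const std::string* old = City.exchange(Q);  // One atomic swap publishes Q.
    // In real code the old string may only be reclaimed once no reader can
    // still be using it (hazard pointers, RCU, etc.); leaked here for brevity.
    (void)old;
}

int main() {
    reader();   // Prints "Old City".
    updater();
    reader();   // Prints "New City".
    return 0;
}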

Now go enjoy breaking some rules! You know you want to 🙂