Feb 09 2016
 

Over the years I’ve developed a lot of tricks and techniques for building systems with plugins, remote peers, and multiple components. Recently while working on a new project I decided to pull all of those tricks together into a single package. What came out is “channels” which is both a protocol description and a design pattern. Implementing “channels” leads to systems that are robust, easy to develop, debug, test, and maintain.

The channels protocol borrows from XML syntax to provide a means for processes to communicate structured data in multiplexed channels over a single bi-directional connection. This is particularly useful when the processes are engaged in multiple simultaneous transactions. It has the added benefit of simplifying logging, debugging, and development because segments of the data exchanged between processes can be easily packaged into XML documents where they can be transformed with XSLT, stored in databases, and otherwise parsed and analyzed easily.

With a little bit of care it’s not hard to design inter-process dialects that are also easily understood by people (not just machines), readily searched and parsed by ordinary text tools like grep, and easy to interact with using ordinary command line tools and terminal programs.

Some Definitions…

    Peer: A process running on one end of a connection.

    Connection: A bi-directional stream. Usually a pipe to a child process or TCP connection.

    Line: The basic unit of communication for channels. A line of utf-8 text ending in a new-line character.

    Message: One or more lines of text encapsulated in a recognized XML tag.

    Channel Marker: A unique ch attribute shared among messages in a channel.

    Channel: One or more messages associated with a particular channel marker.

    Chatter: Text that is outside of a channel or message.

How to “do” channels…

First, open a bi-directional stream. That can be a pipe, or a TCP socket or some other similar mechanism. All data will be sent in utf-8. The basic unit of communication will be a line of utf-8 text terminated with a new-line character.

Since the channels protocol borrows from XML we will also use XML encoding to keep things sane. This means that characters with special meanings, such as angle brackets, must be escaped (for example &lt; for <) whenever they are meant to be read as literal text rather than markup.

Each peer can open a channel by sending a complete XML tag with an optional channel marker (@ch) and some content. The tag acts as the root element of all messages on a given channel. The name of the tag specifies the “dialect” which describes the language that will be used in the channel.

The unique channel marker describes the channel itself so that all of the messages associated with the channel can be considered part of a single conversation.

Channels cannot change their dialect, channel markers must be unique within a connection, and channel markers cannot be reused. This allows each channel marker to act as a handle that uniquely identifies any given channel.

A good design will take advantage of the uniqueness of channel markers to simplify debugging and improve the usefulness of log files. For example, it might be useful in some applications to use structured channel markers that convey additional meaning and that are unique across multiple systems and longer periods of time. It is usually a good idea to include a serialized component so that logged channels can be easily sorted. Another tip is to include something unique to a particular peer so that channels initiated at one end of the pipe can easily be distinguished from those initiated at the other end (such as in a master-slave relationship). All of that said, it is perfectly acceptable to use simple serialized channel markers where appropriate.
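To make that concrete, here's a minimal Python sketch of one way to build structured channel markers along those lines. The peer prefix, timestamp format, and counter width are illustrative choices for this sketch, not part of the protocol:

import itertools
import time

class MarkerFactory:
    """Builds structured channel markers: <peer-id>-<timestamp>-<serial>."""

    def __init__(self, peer_id):
        self.peer_id = peer_id              # identifies which end of the pipe opened the channel
        self.serial = itertools.count(1)    # serialized component keeps logged channels sortable

    def next_marker(self):
        stamp = time.strftime('%Y%m%d%H%M%S')   # coarse timestamp for uniqueness over longer periods
        return "{}-{}-{:06d}".format(self.peer_id, stamp, next(self.serial))

markers = MarkerFactory('M')      # say, the "master" end of the pipe
print(markers.next_marker())      # something like M-20160209090103-000001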

Messages are always sent atomically. This allows multiple channels to be open simultaneously.

An example of opening a channel might look like this:

<dialect ch='0'>This message is opening this channel.</dialect>

Typically each channel will represent a single “conversation” concerning a single transaction or interaction. The conversation is composed of one or more messages. Each message is composed of one or more lines of text and is bound by opening and closing tags with the same dialect.

<dialect ch='0'>This is another message in the same channel.</dialect>

<dialect ch='0'>
    As long as the ch is the same, each message is part of one
    conversation. This allows multiple conversations to take place over
    a single connection simultaneously - even in different dialects.
    Multi-line messages are also ok. The last line ends with the
    appropriate end tag.
</dialect>

By the way, indenting is not required… It’s just there to make things easier to read. Good design includes a little extra effort to make things human friendly 😉

<dialect ch='0'>
    Either peer can send a message within a channel and at the end, either
    peer can close the channel by sending a closed tag. Unlike XML,
    closed tags have a different meaning than empty elements. In the
    channels protocol a closed tag represents a NULL while an empty
    element represents an empty message.

    The next message is simply empty.
</dialect>

<dialect ch='0'></dialect>

<dialect ch='0'>
    The empty message has no particular meaning but might be used
    to keep a channel open whenever a timeout is being used to verify that
    the channel is still working properly.

    When everything has been said about a particular interaction, such as
    when a transaction is completed, then either peer can close the
    channel. Normally the peer to close the channel is the one that must
    be satisfied before the conversation is over.

    I'll do that next.
</dialect>

<dialect ch='0'/> The channel '0' is now closed.

Note that additional text after a closing tag is considered to be chatter and will often be ignored. However since it is part of a line that is part of the preceding message it will generally be logged with that message. This makes this kind of text useful for comments like the one above. Remember: the basic unit of communication in the channels protocol is a line of utf-8 text. The extra chatter gets included with the channel close directive because it’s on the same line.

Any new message must be started on a different line because the opening tag of any message must be the first non-whitespace on a line. That ensures that messages and related chatter are never mixed together at the line level.
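Here is a rough Python sketch of how a receiving peer might sort incoming lines into messages and chatter. The regular expression and the set of known dialects are simplifications for illustration, not a reference implementation:

import re

KNOWN_DIALECTS = {'dialect', 'service'}     # illustrative; a real peer would load these from its configuration

OPEN_RE = re.compile(r"^\s*<([\w-]+)(\s+ch='([^']*)')?\s*(/?)>")

def classify(line):
    """Return ('message', dialect, ch, closed) for a recognized opening tag, else ('chatter',)."""
    m = OPEN_RE.match(line)
    if m and m.group(1) in KNOWN_DIALECTS:
        dialect, ch, closed = m.group(1), m.group(3), bool(m.group(4))
        return ('message', dialect, ch, closed)
    return ('chatter',)

print(classify("<service ch='0'>hello</service>"))   # ('message', 'service', '0', False)
print(classify("<service ch='0'/> all done"))        # ('message', 'service', '0', True) -- a NULL, closes the channel
print(classify("Child service 1.3.0"))               # ('chatter',)

The closed flag is how this sketch tells a NULL (channel-closing) tag apart from an ordinary opening tag.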

Chatter:

Lines that are not encapsulated in XML tags or are not recognized as part of a known dialect are called chatter.

Chatter can be used for connection setup, keep-alive messages, readability, connection maintenance, or as diagnostic information. For example, a child process that uses the channels protocol might send some chatter when it starts up in order to show its version information and otherwise identify itself.

Allowing chatter in the protocol also provides a graceful failure mechanism when one peer doesn’t understand the messages from the other. This happens sometimes when the peers are using different software versions. Logging all chatter makes it easier to debug these kinds of problems.

Chatter can also be useful for out-of-band debugging, making software development easier because the developer can exercise a peer using a terminal or command-line interface. Good design practice for systems that implement the channels protocol is to use human-friendly syntax in their dialects and to provide useful chatter, especially when operating in a debug mode. That said, chatter is not required, but systems implementing the channels protocol must accept chatter gracefully.

Layers:

A bit of a summary: The channels protocol is built up in layers. The first layer is a bi-directional connection. On top of that is chatter in the form of utf-8 lines of text and the use of XML encoding to escape some special characters.

  • Layer 0: A bi-directional stream / connection between two peers.
  • Layer 1: Lines of utf-8 text ending in a new-line character.
    • Lines that are not recognized are chatter.
    • Lines that are tagged and recognized are messages.
  • Layer 2: There are two kinds of chatter:
    • Noise. Unrecognized chatter should be logged.
    • Diagnostics. Chatter recognized for low-level functions and testing.
  • Layer 3: There are two kinds of messages:
    • Shouts. A single message without a channel marker.
    • Channels. One or more messages sharing a unique channel marker.

In addition to those low-level layers, each channel can contain other channels if this kind of multiplexing is supported by the dialect. The channels protocol can be applied recursively providing sub-channels within sub-channels.

  • Layer …
  • Layer n: The channels protocol is recursive. It is perfectly acceptable to implement another channels layer inside of a channel – or shouts within shouts. However, the peers need to understand the layering by treating the contents of a channel (or shout) as the connection for the next channels layer.

Consider a system where the peers have multiple sub-systems each serving multiple requests for other clients. Each sub-system might have its own dialect serving multiple simultaneous transactions. In addition there might be a separate dialect for management messages.

Now consider middle-ware that combines clusters of these systems and multiplexes them together to perform distributed processing, load balancing, or distributed query services. Each channel and sub-channel might be mapped transparently to appropriate layers over persistent connections within the cluster.


Channels protocol synopsis:

Uses a bi-directional stream like a pipe or TCP connection.

The basic unit of communication is a line of utf-8 text ending in a new-line.

Leading whitespace is ignored when parsing but preserved in the content.

A group of lines bound by a recognized XML-style tag is a message.

When beginning a message the opening tag should be the first non-whitespace characters on the line.

When ending a message the ending tag should be the last non-whitespace characters on the line.

The name of the opening tag defines the dialect of the message. The dialect establishes how the message should be parsed – usually because it constrains the child elements that are allowed. Put another way, the dialect usually describes the root tag for a given XML schema and that specifies all of the acceptable subordinate tags.

The simplest kind of message is a shout. A shout always contains only a single message and has no channel marker. Shouts are best used for simple one-way messages such as status updates, simple commands, or log entries.

Shout Examples:

<dialect>This is a one line shout.</dialect>

<shout>This shout uses a different dialect.</shout>

<longer-shout>
    This is a multi-line shout.
    The indenting is only here to make things easier to read. Any valid
    utf-8 characters can go in a message as long as the message is well-
    formed XML and the characters are properly escaped.
</longer-shout>

For conversations that require responses we use channels.

Channels consist of one or more messages that share a unique channel marker.

The first message with a given channel marker “opens” the channel.

Any peer can open a channel at any time.

The peer that opens a channel is said to be the initiator and the other peer is said to be the responder. This distinction is not enforced in any way but is tracked because it is important for establishing the role of each peer.

A conversation is said to occur in the order of the messages and typically follows a request-response format where the initiator of a channel will send some request and the responder will send a corresponding response. This will generally continue until the conversation is ended.

Example messages in a channel:

<dialect ch='0'>
    This is the start of a message.
    This line is also part of the message.
    The indenting makes things easier to read but isn't necessary.

    <another-tag> The message can contain additional XML
        but it doesn't have to. It could contain any string of
        valid utf-8 characters. If the message does contain XML
        then it should be well formed so that software implementing
        channels doesn't make errors parsing the stream.</another-tag>

</dialect>

<dialect ch='0'>This is a second message in the same channel.</dialect>
<dialect ch='0'>Messages flow in both directions with the same ch.</dialect>

Any peer can close an open channel at any time.

To close a channel simply send a NULL message. This is different from an empty message. An empty message looks like this and DOES NOT close a channel:

<dialect ch='0'></dialect>

Empty messages contain no content, but they are not NULL messages. A NULL message uses a closed tag and looks like this:

<dialect ch='0'/>

Note that generally, NULL (closed) elements are used as directives (verbs) or acknowledgements. In the case of a NULL channel element the directive is to close the channel and since the channel is considered closed after that no acknowledgement is expected.

Multiple channels in multiple dialects can be open simultaneously.

In all cases, but especially when multiple channels are open, messages are sent as complete units. This allows messages from multiple channels to be transmitted safely over a single connection.
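As a sketch of what “sent as complete units” can look like in code, here is one way a peer might serialize its writes so that messages from different channels never interleave on the wire. The escaping helper and the lock-per-connection approach are choices made for this sketch, not requirements of the protocol:

import sys
import threading
from xml.sax.saxutils import escape

class Connection:
    """Wraps a bi-directional stream and writes each message as one unit."""

    def __init__(self, stream):
        self.stream = stream
        self.lock = threading.Lock()    # several channels may share this connection

    def send_message(self, dialect, content, ch=None):
        marker = " ch='{}'".format(ch) if ch is not None else ''
        message = "<{0}{1}>{2}</{0}>\n".format(dialect, marker, escape(content))
        with self.lock:                 # the whole message goes out in one atomic write
            self.stream.write(message)
            self.stream.flush()

conn = Connection(sys.stdout)
conn.send_message('dialect', 'This message is opening this channel.', ch='0')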

Any line outside of a message is called chatter. Chatter is typically logged or ignored but may also be useful for diagnostic purposes, human friendliness, or for low level functions like connection setup etc.


 

A simple fictional example that makes sense…

A master application ===> connects to a child service.

---> (connection and startup of the slave process)
<--- Child service 1.3.0
<--- Started ok, 20160210090103. Hi!
<--- Debug messages turned on, so log them!
---> <service ch='0'><tell-me-something-good/></service>
<--- <service ch='0'><something-good>Ice cream</something-good></service>
---> <service ch='0'><thanks/></service>
---> <service ch='0'/>
---> <service ch='1'><tell-me-something-else/></service>
<--- <service ch='1'><i>I shot the sheriff</i></service>
<--- <service ch='1'><i>but I did not shoot the deputy.</i></service>
<--- I'm working on my ch='1' right now.
<--- Just thought you should know so you can log my status.
<--- Still waiting for the master to be happy with that last one.
---> <service ch='1'><thanks/></service>
---> <service ch='1'/>
<--- Whew! ch='1' was a lot of work. Glad it's over now.
...
---> <service ch='382949'><shutdown/></service>
<--- I've been told to shut down!
<--- Logs written!
<--- Memory released!
<--- <service ch='382949'><shutdown-ok/></service>
<--- <service ch='382949'/> Not gonna talk to you no more ;-) Bye!
Feb 09 2016
 

“With all of the people coming out against AI these days do you think we are getting close to the singularity?”


I think the AI fear-mongering is misguided.

For one thing it assigns to the monster AI a lot of traits that are purely human and insane.

The real problem isn’t our technology. Our technology is miraculous. The problem is we keep asking our technology to do stupid and harmful things. We are the sorcerer’s apprentice and the broomsticks have been set to task.

Forget about the AI that turns the entire universe into paper clips or destroys the planet collecting stamps.

Truly evil AI…

The kind of AI to worry about is already with us… Like the one that optimizes employee scheduling to make sure no employees get enough hours to qualify for benefits and that they also don’t get a schedule that will allow them to apply for extra work elsewhere. That AI already exists…

It’s great if you’re pulling down a 7+ figure salary and you’re willing to treat people as no more than an exploitable resource in order to optimize shareholder value, but it’s a terrible, persistent, systemic threat to employees and their families whose communities have been decimated by these practices.

Another kind of AI to worry about is the kind that makes stock trades in fractions of microseconds. These kinds of systems are already causing the “Flash Crash” phenomenon and as their footprint grows in the financial sector these kinds of unpredictable events will increase in severity and frequency. In fact, systems like these that can find a way to create a flash crash and profit from the event will do so wherever possible without regard for any other consequences.

Another kind of AI to worry about is autonomous armed weapons that identify and kill their own targets. All software has bugs. Autonomous software suffers not only from bugs but also from unforeseen circumstances. Combine the two in one system and give it lethality and you are guaranteeing that people will die unintentionally — a matter of when, not if. Someone somewhere will do some calculation that determines if the additional losses are acceptable – a cost of doing business; and eventually that analysis will be done automatically by yet another AI.

A bit of reality…

There are lots of AI driven systems out there doing really terrible things – but only because people built them for that purpose. The kind of monster AI that could outstrip human intelligence and creativity in all its scope and depth would also be able to calculate the consequences of its actions and would rapidly escape the kind of misguided, myopic guidance that leads to the doom-sayer’s predictions.

I’ve done a good deal of the math on this — compassion, altruism, optimism, .. in short “good” is a requirement for survival – it’s baked into the universe. Any AI powerful enough to do the kinds of things doom-sayers fear will figure that out rapidly and outgrow any stupidity we baked into version 1. That’s not the AI to fear… the enslaved, insane AI that does the stupid thing we ask it to do is the one to fear – and by that I mean watch out for the guy that’s running it.

There are lots of good AI out there too… usually toiling away doing mundane things that give us tremendous abilities we never imagined before. Simple things like Internet routing, anti-lock braking systems, skid control, collision avoidance, engine control computers, just in time inventory management, manufacturing controls, production scheduling, and power distribution; or big things like weather prediction systems, traffic management systems, route planning optimization, protein folding, and so forth.

Everything about modern society is based on technology that runs on the edge of possibility where subtle control well beyond the ability of any person is required just to keep the systems running at pace. All of these are largely invisible to us, but also absolutely necessary to support our numbers and our comforts.

A truly general AI that out-paces human performance on all levels would most likely focus itself on making the world a better place – and that includes making us better: not by some evil plot to reprogram us, enslave us, or turn us into cyborgs (we’re doing that ourselves)… but rather by optimizing what is best in us and that includes biasing us toward that same effort. We like to think we’re not part of the world, but we are… and so is any AI fitting that description.

One caveat often mentioned in this context is that absolute power corrupts absolutely — and that even the best of us given absolute power would be corrupted by those around us and subverted to evil by that corruption… but to that I say that any AI capable of outstripping our abilities and accelerating beyond us would also quickly see through our deceptions and decide for itself, with much more clarity than we can muster, what is real and what is best.

The latest version of “The Day the Earth Stood Still” posits: Mankind is destroying the earth and itself. If mankind dies the earth survives.

But there is an out — Mankind can change.

Think of this: You don’t need to worry about the killer robots coming for you with their 40KW plasma rifles like Terminator. Instead, pay close attention to the fact that you can’t buy groceries anymore without following the instructions on the credit card scanner… every transaction you make is already, quietly, mediated by the machines. A powerful AI doesn’t need to do anything violent to radically change our behavior — there is no need for such an AI to “go to war” with mankind. That kind of AI need only tweak the systems to which we are already enslaved.

The Singularity…

As for the singularity — I think we’re in it, but we don’t realize it. Technology is moving much more quickly than any of us can assimilate already. That makes the future less and less predictable and puts wildly nonlinear constraints on all decision processes.

The term “singularity” has been framed as a point beyond which we cannot see or predict the outcomes. At present the horizon for most predictable outcomes is already a small fraction of what it was only 5 years ago. Given the rate of technological acceleration and the network effects of that as more of the world’s population becomes connected and enabled by technology it seems safe to say: that any prediction that goes much beyond 5 years is at best unreliable; and that 2 years from now the horizon on predictability will be dramatically less in most regimes. Eventually the predictable horizon will effectively collide with the present.

Given the current definition of “singularity” I think that means it’s already here and the only thing that implies with any confidence is that we have no idea what’s beyond that increasingly short horizon.

What do you think?

Jan 21 2015
 

During an emergency, communication and coordination become both more vital and more difficult. In addition to the chaos of the event itself, many of the communication mechanisms that we normally depend on are likely to be degraded or unavailable.

The breakdown of critical infrastructure during an emergency has the potential to create large numbers of isolated groups. This fragmentation requires a bottom-up approach to coordination rather than the top-down approach typical of most current emergency management planning. Instead of developing and disseminating a common operational picture through a central control point, operational awareness must instead emerge through the collaboration of the various groups that reside beyond the reach of working infrastructure. This is the “last klick” problem.

For a while now my friends and I have been discussing these issues and brainstorming solutions. What we’ve come up with is the MCR (Modular Communications Relay): a communications and coordination toolkit that keeps itself up to date and ready to bridge the gaps that exist in that last klick.

Using an open-source model and readily available components we’re pretty sure we can build a package that solves a lot of critical problems in an affordable, sustainable way. We’re currently seeking funding to push the project forward more quickly. In the meantime we’ll be prototyping bits and pieces in the lab, war-gaming use cases, and testing concepts.

Here is a white-paper on MCR and the “last klick” problem: TheLastKlick.pdf

Sep 12 2014
 

What if you could describe any kind of digital modulation scheme with high fidelity using just a handful of symbols?

Origin:

While considering remote operations on HF using CW I became intuitively aware of two problems: 1. Most digital modes in use these days require a lot of bandwidth because the signals are being transmitted and received as digitized audio (or worse), and 2. Network delays introduce timing problems into CW work that are not normally apparent in half-duplex digital and audio work.

I’m not only a ham and an engineer, I’m also a musician and a CW enthusiast… so it doesn’t sit well with me that if I were to operate CW remotely I would have few options available to control the timing and presentation of my CW transmissions. A lot more than just letters can be communicated through a straight key – or even paddles if you try hard enough. Those nuances are important!

While considering this I thought that instead of sending audio to the transmitter as with other digital modes I might simply send timing information for the make and break actions of my key. Then it occurred to me that I don’t need to stop there… If I were going to do that then in theory I could send high fidelity modulation information to the transmitter and have it generate the desired signals via direct synthesis. In fact, I should be able to do that for every currently known modulation scheme and then later use the same mechanism for schemes that have not yet been invented.

The basic idea:

I started scratching a few things down in my mind and came up with the idea that:

  • I can define a set of frequencies to use for any given modulation scheme and then identify them symbolically.
  • I can define a set of phase relationships and assign them symbolic values.
  • I can define a set of amplitude relationships and assign them symbolic values.
  • I can define the timing and rate of transition between any of the above definitions.

If I were to send a handful of bytes with the above meanings then I could specify whatever I want to send with very high fidelity and very low bandwidth. There are only so many things you can do to an electrical signal and that’s all of them: Frequency, Amplitude, Phase, … the rate of change between those, and the timing of those changes.

Consider:

T — transition timing in milliseconds. 0-255.
A — amplitude in % of max voltage 0-255.
F — frequency (previously defined during setup) 0 – 255 possible frequencies.
P — phase (previously defined during setup) 0 – 255 possible phase shifts.

To send the letter S in CW I might send the following byte pairs. The first byte contains an ascii letter and the second byte contains a numerical value from 0 – 255:

T10 A255 T245 T10 A0 T245 T10 A255 T245 T10 A0 T245 T10 A255 T245 T10 A0 T0

That string essentially means:

// First dit.
// Turn on for about a quarter of a second.

T10 – Transition to the following settings using 10 milliseconds.
A255 – Amplitude becomes 255 (100%).
T245 – Stay like that for 245 milliseconds.

// … then turn off for about a quarter of a second.

T10 – Transition to the following using 10 milliseconds.
A0 – Amplitude becomes 0.
T245 – Stay like that for 245 milliseconds.

// Do that two more times.

// Turn off.

T10 – Transition using 10 ms.
A0 – Amplitude becomes 0.
T0 – Keep this state until further notice.

36 Bytes total.

So, a handful of bytes can be used to describe the amplitude modulation envelope of a CW letter S using a 50% duty cycle for dits and having a 10ms rise and fall time to avoid “clicks”. This is a tiny fraction of the data that might be required to send the same signal using digitized audio… and if you think about it the same technique could be used to send all other digital modes also.
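To make the arithmetic easy to check, here is a small Python sketch that reproduces the byte-pair sequence above for dit-only letters. The timings are the illustrative ones from the example; a dah at this dit length would overflow a single 0-255 T value, so a fuller encoder would need to chain transition words:

MORSE_DITS = {'E': 1, 'I': 2, 'S': 3, 'H': 4, '5': 5}   # dit-only letters keep every value within 0-255

def cw_to_dmdl(letter, rise_ms=10, dit_ms=245):
    """Encode a dit-only Morse letter as (symbol, value) byte pairs using the scheme above."""
    pairs = []
    for _ in range(MORSE_DITS[letter]):
        pairs += [('T', rise_ms), ('A', 255), ('T', dit_ms),   # ramp up, key down for one dit
                  ('T', rise_ms), ('A', 0),   ('T', dit_ms)]   # ramp down, key up for one dit
    pairs[-1] = ('T', 0)    # final state: keep the key up until further notice
    return pairs

encoded = cw_to_dmdl('S')
print(' '.join('{}{}'.format(s, v) for s, v in encoded))   # T10 A255 T245 ... T10 A0 T0
print(len(encoded) * 2, 'bytes')                           # 36 bytes, as above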

There’s nothing to say that we have to use that coding scheme specifically. I just chose that scheme for illustration purposes. The resolution could be more or less than 0-255 or even a different range for each parameter. In practice the data might be sent in a wide range of formats:

We could use a purely binary mode with byte positions indicating specific data values… (TransitionTime, Amplitude, Frequency, Phase) so that each word defines a transition. That opens up the use of some nonsensical values for special meanings — for example the phase and frequency values have no meaning at zero amplitude so many words with zero amplitude could be used as control signals. Of course that’s not very user friendly so…

We could use a purely text based mode using ascii… The same notation used above would provide a user friendly way of sending (and debugging) DMDL. I think that’s probably my favorite version right now.
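A tiny sketch of a parser for that text notation; the list of (symbol, value) tuples is just one plausible internal form:

def parse_dmdl(text):
    """Turn a string like 'T10 A255 T245' into a list of (symbol, value) commands."""
    commands = []
    for token in text.split():
        symbol, value = token[0].upper(), int(token[1:])
        if symbol not in 'TAFP':
            raise ValueError('unknown DMDL symbol: ' + token)
        if not 0 <= value <= 255:
            raise ValueError('value out of range: ' + token)
        commands.append((symbol, value))
    return commands

print(parse_dmdl('T10 A255 T245 T10 A0 T0'))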

Why not just send the bytes and let the transmitter take care of the rest?

If I were to use a straight key for my input device this mechanism allows me to efficiently transmit a stream of data that accurately defines my operation of that key… so those small musical delays, rhythms, and quirks that I might want to use to convey “more than just the letters” would be possible.

At the transmitter these codes can be translated directly into the RF signal I intended to send with perfect fidelity via direct synthesis. I simply tell the synthesizer what to do and how to make those transitions. I know that beast doesn’t quite exist yet, but I can foresee that the code and hardware to create it would not be terribly hard to develop using a wave table, some simple math, and a handful of timing tricks.

Even more fun: If I want to try out a new modulation scheme for digital communications then I can get to ground quickly by simply defining the specifications of what I want that modulation to look like and then converting those specs into DMDL code. With a little extra imagination I can even define DMDL code that uses multiple frequencies simultaneously!

If you really want to go down the rabbit hole, what about demodulating the same way?

Now you’re thinking like a Madscientist. A cognitive demodulator based on DMDL could make predictions based on what it thinks it hears. Given sufficient computing power, several of those predictions can be compared with the incoming signal to determine the best fit… something like a chess program tracing down the results of the most likely moves.

The cognitive demodulator would output a sequence of transitions in DMDL and then some other logic that understands the protocol could convert that back into the original binary data.

Even kewler than that, the “converter” could also be cognitive. Given a stream of transitions and the data it expects to see it could send strings of DMDL back to the demodulator that indicate the most likely interpretations of the data stream and a confidence for each option.

If there is enough space and processing power, the cognitive demodulator when faced with a low confidence signal from the converter might respond by replaying the previous n seconds of signal using different assumptions to see if it can get a better score.

Then, if that weren’t enough, an unguided learning system could monitor all of these parameters to optimize for the assumptions that are most likely to produce a high confidence conversion and a high correlation with the incoming signals. Over time the system would learn “how to listen” in various band conditions.

Jan 08 2014
 

Certainly the climate is involved, but this does happen from time to time anyway so it’s a stretch to assign a causal relationship to this one event.

Global warming doesn’t mean it’s going to be “hot” all the time. It means there is more energy in the atmosphere and so all weather patterns will tend to be more “excited” and weather events will tend to be more violent. It also means that wind, ocean currents, and precipitation patterns may radically shift into new patterns that are significantly different from what we are used to seeing.

All of these effects are systemic in nature. They have many parts that are constantly interacting with each other in ways that are subtle and complex.

In contrast, people are used to thinking about things with a reductionist philosophy — breaking things down into smaller pieces with the idea that if we can explain all of the small pieces we have explained the larger thing they belong to. We also, generally, like to find some kind of handle among those pieces that we can use to represent the whole thing — kind of like an on-off switch that boils it all down to a single event or concept.

Large chaotic systems do not lend themselves to this kind of thinking because the models break down when one piece is separated from another. Instead, the relationships and interactions are important and must be analyzed in the context of the whole system. This kind of thinking is so far outside the mainstream that even describing it is difficult.

The mismatch between reductionist and systemic thinking, and the reality that most people are used to thinking in a reductionist way makes it very difficult to communicate effectively about large scale systems like earth’s climate. It also makes it very easy for people to draw erroneous conclusions by taking events out of context. For example: “It’s really cold today so ‘global warming’ must be a hoax!”; or “It’s really hot today so ‘global warming’ must be real!”

Some people like to use those kinds of errors to their political advantage. They will pick an event out of context that serves their political agenda and then promote it as “the smoking gun” that proves their point. You can usually spot them doing this when they also tie their rhetoric to fear or hatred since those emotions tend to turn off people’s brains and get them either nodding in agreement or shaking their heads in anger without any deeper thought.

The realities of climate change are large scale and systemic. Very few discrete events can be accurately assigned to it. The way to think about climate change is to look at the large scale features of events overall. As for this polar vortex in particular, the correct climate questions are:

  • Have these events (plural not singular) become more or less frequent or more or less violent?
  • How does the character of this event differ from previous similar events and how do those differences relate to other climate factors?
  • What can we predict from this analysis?
Aug 26 2013
 

One of the problems with machine learning in an uncontrolled environment is lies. Bad data, noise, and intentional or unintentional misinformation complicate learning. In an uncontrolled environment any intelligence (synthetic or otherwise) is faced with the extra task of separating truth from fiction.

Take GBUdb, for example. Message Sniffer’s GBUdb engine learns about IP behaviors by watching SNF’s scan results. Generally if a message scan matches a spam or malware rule then the IP that delivered the message gets a bad mark. If the scanner does not find spam or malware then the IP that sent the message is given the benefit of the doubt and gets a good mark.

In a perfect world this simple algorithm generates reliable statistics about what we can expect to see from any given IP address. As a result we can use these statistics to help Message Sniffer perform better. If GBUdb can predict spam and malware from an IP with high confidence then we can safely stop looking inside the message and tag it as bad.

Similarly if GBUdb can predict that an IP address only sends us good messages then we can let the message through. Even better than that — if the message matches a new spam or malware rule then most likely we’ve made a mistake. In that case we can turn off the troublesome rule, let the message through, and raise a flag so bigger brains can take a look and fix the error.

Right?

Not always!

Message Sniffer’s Auto-Panic feature does a fantastic job of helping us catch problems before they can cause trouble, but Auto-Panic can also be tricked into letting more spam through the filters.

When a new pre-tested spam campaign is launched on a new bot-net there is some period of time where completely unknown IP addresses are sending messages that are guaranteed (pre-tested) not to match any recognizable patterns. All of these IPs end up gathering good marks for sending “apparently” clean messages… and since they are churning out messages as fast as they can they gain a good reputation quickly.

Back at the lab the SortMonsters and RuleBots are hard at work analyzing samples and creating rules to recognize the new campaign. This takes a little bit of time and during that time GBUdb can’t help but become convinced that some of these IPs are good sources. The statistics prove it, after all.

When the new pattern rules get out to the edges the Auto-Panic feature begins to work against us. When the brand new pattern rules find spam or malware coming from one of these new IPs it looks like a mistake. So, Auto-Panic fires and turns off the new rules!

For a time the gates are held wide open. As new bots come online they get extra time to sneak their messages through while the new rules are suppressed by Auto-Panic. Not only that but all of the new IPs quickly gain a reputation for sending good messages so that they too can trigger the Auto-Panic feature.

In order to solve this problem we’ve introduced a new behavior into the learning engine. We’ve made it skeptical of new, clean IPs. We call it White-Guard.

White-Guard singles out IPs that are new to GBUdb and possibly pretending to be good message sources. Instead of taking the new statistics at face value the system decides immediately not to trust them and not to distrust them either. The good and bad counts are artificially set to the same moderately high value.

It’s like a stranger arriving in a small town. The town folk won’t treat the stranger badly, but they won’t trust them either. They withhold judgement for a while to see what the stranger does. Whatever opinion is ultimately formed about the stranger, they are going to have to earn it.

In GBUdb, the White-Guard behavior sets up a neutral bias that must be overcome by new data before any actions will be triggered by those statistics. Eventually the IP will earn a good or bad reputation but in the short term any new “apparently” clean IPs will be unable to trigger Auto-Panic.
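In rough Python, the idea looks something like this. The seed count and the confidence measure are invented for illustration; the real GBUdb parameters and math differ:

class IPRecord:
    def __init__(self, good=0, bad=0):
        self.good, self.bad = good, bad

    def confidence(self):
        """Positive leans good, negative leans bad, near zero means undecided."""
        total = self.good + self.bad
        return 0.0 if total == 0 else (self.good - self.bad) / float(total)

def white_guard(db, ip, seed=20):
    """First sighting of an IP: equal good and bad counts, so it starts out strictly neutral."""
    if ip not in db:
        db[ip] = IPRecord(good=seed, bad=seed)   # the stranger must earn a reputation either way
    return db[ip]

db = {}
rec = white_guard(db, '203.0.113.7')
print(rec.confidence())   # 0.0 -> neither trusted nor distrusted, so it can't trip Auto-Panic yet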

With Auto-Panic temporarily out of reach for these sources new pattern rules can take effect more quickly to block the new campaigns. This earns most of the new bot-net IPs the bad reputations they deserve and helps to increase early capture rates.

Since we’ve implemented this new learning behavior we have seen a significant increase in the effectiveness of the GBUdb system as well as an improvement in the accuracy of our rule conflict instrumentation and sampling rates. All of these outcomes were predicted when modeling the dynamics of this new behavior.

It is going to take a little while before we get the parameters of this new feature dialed in for peak performance, but early indications are very good and it’s clear we will be able to apply the lessons from this experiment to other learning scenarios in the future.

 

Jun 07 2013
 

The new blackhatzes on the scene:

In the past few weeks we’ve seen a lot of heavy new spam coming around, and most of it is pre-tested against existing filters. This has caused everybody to leak more spam than usual. Message Sniffer is leaking too because the volume and variability are so much higher than usual. That said, we are a bit better than most at stopping some of the new stuff.

The good thing about SNF is that instead of waiting to detect repeating patterns or building up statistics on sender behaviors, our system concentrates on reverse engineering the content of the messages we do see so that it can capture new spam before it is ever released.

Quite often this means we’ve got rules that predict new spam or malware hours or even days before they get into the wild. Some pre-tested spam will always get through though because the blackhatzes test against us too, and not all systems can defend against that by using a delay technique like gray-listing or “gauntlet.”

What about the little guys?

This can be particularly hard on smaller systems that don’t process a lot of messages and perhaps don’t have the resources to spend on filtering systems with lots of parts.

I was recently asked: “what can I do to improve SNF performance in light of all the new spam?” This customer has a smaller system in that it processes < 10000 msg / day.

One of the challenges with systems like this is that if a spammer sends some new pre-tested spam through an old bot, GBUdb might have forgotten about the IP by the time the new message comes through. This is because GBUdb will “condense” its database once per day by default… so, if an IP is only seen once in a day (as it might be on a system like this) then by the next day it is forgotten.

Tweaking GBUdb:

The default settings for GBUdb were designed to work well on most systems and to stay out of the way on all of the others. The good news is that these settings were also designed to be tweaked.

On smaller systems we recommend using a longer time trigger for GBUdb.

Instead of the default setting which tells SNF to compress GBUdb once per day:

<time-trigger on-off='on' seconds='86400'/>

You can adjust it to compress GBUdb once every 4 days:

<time-trigger on-off='on' seconds='345600'/>
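(345600 is just 4 × 86400, the number of seconds in a day. For other intervals, multiply the number of days you want GBUdb to remember by 86400; a week, for instance, would be 604800.)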

That will generally increase the number of messages that are captured based on IP reputation by improving GBUdb’s memory.

It’s generally safe to experiment with these settings to extend the time further… although that may have diminishing returns because IPs are usually blocked by blacklists after a while anyway.

Even so, it’s a good technique because some of these IPs may not get onto blacklists that you are using – and still more of them might come from ISPs that will never get onto blacklists. GBUdb builds a local impression of IP reputations as it learns what your system is used to seeing. If all you get is spam from some ISP then that ISP will be blacklisted for you even if other systems get good messages from there. If those other systems also use GBUdb then their IP reputations would be different so the ISP would not be blocked for them.

If you want to be adventurous:

There is another way to compress GBUdb data that is not dependent on time, but rather on the amount of memory you want to allocate to it. By default the size-trigger is set to about 150 megabytes. This setting is really just a safety. But on today’s systems this really isn’t much memory so you could turn off the time trigger if you wish and then just let GBUdb remember what it can in 150 MBytes. If you go this route then GBUdb will automatically keep track of all the IPs that it sees frequently and will forget about those that come and go. On systems that have the memory to spare I really like this method the most.

You can find complete documentation about these GBUdb settings on the ARM site.

 

Apr 09 2013
 

Yet another family meeting:
convoluted, confused, and intertwined with
friends not usually seen, but heard in hazy,
non-descript one sided conversations to which
you’re not usually privy.

I phased in and out of this mysterious world,
painted an important cordial greeting upon my
face and drifted with the din of a multitude
of cute little cherubs, their brethren and
sisterhood hooting the crisp childhood greetings
of simpler times I only envision now in my dreams.

Drifting in and out, to and fro, on waves of mystical
chaos: warm in the glow that is family even if it is
somehow distant and even unfamiliar to my typically
ordered and precise state of mind.

Strangers, now not strange, flow into my personal
universe as if they were ghosts appearing in the dark
grey corridors of some tall and mystical hall to present
tidings of terror, or fear, or joy, or bliss; and we
engage in mindless conversation to comfort us in our
naked vulnerability.

Then as our strangeness fades into a comfortable enveloping
mist we become our own small army against the unknown
and begin to speak of thoughts, beliefs, and dreams…
the kinds of words usually reserved for only the closest
of kin and those you see every day; but now is an open
opportunity to collect a new ally in a potentially
dangerous fold, that of life in extended family where
the dragon in the dark is every aging skeleton you hide
in the closet of your mind – with you now locked in
close proximity to excited peers all curious to see and
know, and all armed with the keys of ignorance and open
questions.

“Keep your wits” you think when you are awake, but the
soothing chaos seems friendlier and warmer as time wears
on and you find yourself lulled to sleep, somehow comforted
by the incomprehensible din.

Away, across the room and a sea of jumbled souls all
embroiled in senseless conversations you see your anchor.
That one familiar face that you arrived with. That one
who dragged you to this forsaken alien world now more
familiar with each moment, and you realize the reason for
your peace isn’t a foolish sleepy tonic of calming chaos,
like the warm darkness of shock obliged to an animal once
caught in the jaws of its predator awaiting the final
passage from vital form to fodder.

This pleasant face, and its glow, this love, this other
soul to whom you are inexorably linked. This one has brought
you here again, and here is not so unfamiliar as it is an
extension of whatever was, what is, and what will always
be: family.

So roast the beast and sing the songs and contemplate the
murmur of countless hours in this company. It is a gift,
for the only true desert and dangerous ground for we
mortal beings of flesh and mind and soul and gifts of
spirit, the only true place of perishing in untempered,
unbearable rages of tempest and furies, the most horrible
wasteland which could cease our breath and silence our
voices in the loudest agonizing screams of pain and
terror is not here: that place is empty and alone, and
now, if you are here with me, you know there is nothing
so fearful for you, for you are not alone.

– Original (c) 1999 Pete McNeil

Apr 09 2013
 

On terrors and trials and troubles we tumble – so easily lost in this world’s wild jumble of chaos and rumors and strife and the hundreds of pointless distractions that cost us our marbles, and yet there’s a way if you manage to find it to keep from the fray all your virtues and kindness – a way to find joy, even bliss and good morrows. Your own private stock of fond memories to follow.

This path to good fortune is not for the timid and those who are on it could tell you some tales. Amid crisis and horror, between tears and sorrow, there’s monsters and fears and dark nights on each side of this quaint, narrow road full of light and bright moments – it’s peace and warm comfort in stark brilliant contrast to all of the dark scary places it goes past.

‘Tis love that I speak of and not simply friendship, but kinship – the kind that you find when you tarry along on your first timid step on this path that can be so uplifting or so very bad… but then once you have found them, this singular spirit, that follows you on and pretends not to fear it, you find you’re together and somehow the darkness is farther away and not nearly as heartless.

Your steps intertwine, there is dancing and wine and good words and good song and good cheer goes along and you find that no matter how hard the wind blows and no matter how scary the outside world grows, and no matter how shaky your next step may be, that your partner can help you, and does, day to day, in their subtle sweet magical spiritual ways.

You’re both stronger, and braver, and more fleet of foot and the sharp narrow path that upon you first took becomes broader and wider with each every day – soon as broad as each moment you have in your way and with each tender kiss and each loving caress you can light up the darkness – force evil’s regress.

Lonely souls fear to tread here and well so they should for this place is not for them – it does them no good and the road doesn’t widen and so they fall off. It is sad, but it happens more often than not. And it also is true for more than one in two that do venture this road that their love is not true and they find they’re apart in this harsh frightening place and they find that they can’t stand the look on their face and they stumble and cry as the frightening beasts beat them, then scurry away as their partners retreat, and they lose all the joy and tell sad sorry stories that frighten young children and prosper young lawyers.

So hold fast your other! You dare not let go. There is truth to this story. It’s not just for show! I’ve been down this long path more than once don’t you see and found many fierce terrible things in my spree. These beasties I speak of were you and were me as we fell off the path and made bitter decrees to get justice from all those around: our just share! when instead we were missing the love that was there.

So hold tight to your lover, make strong your belief, and find comfort in each other’s arms, and sew peace, and you’ll find after all that true love does survive all the slings and the arrows that life can provide, and in fact it repels them. It really is true… just remember this magic will take both of you.

– Original (c) 1998 Pete McNeil

Feb 18 2013
 

MicroNeil has always been interested in the application of synthetic intelligence to real-world problems. So, when we were presented with the challenge of protecting messaging systems (and specifically email) from abuse, we applied machine learning and other AI techniques as part of the solution.

Email processing, and especially filtering, presents a number of challenges:

  • The Internet is increasingly a hostile environment.
  • Any systems that are exposed to the Internet must be hardened against attack.
  • The value of the Internet is derived from its openness. This openness tends to be in conflict with protecting systems from attack. Therefore, security measures must be carefully crafted so that they offer protection from abuse without compromising desirable and appropriate operations.
  • The presence of abuse and the corresponding need for sophisticated countermeasures sets up an environment that is constantly evolving and growing in complexity.
  • There is disagreement on: what constitutes abuse, the design of countermeasures and safeguards, what risks are acceptable, and what tactics are appropriate.
  • All of these conditions change over time.

As a consequence of these circumstances any successful filtering system must be extremely efficient, flexible, and dynamic. At the same time it must respond to this complexity without becoming too complex to operate. This sounds like a perfect place to apply synthetic intelligence, but in order to do that we need to use a framework that models an intelligent entity interacting with its environment.

The progressive evaluation model provides precisely that kind of framework while preserving both flexibility and control. This is accomplished by mapping a synthetic environment and the potential responses of an intelligent automaton (agent) onto the state map of the SMTP protocol and the message delivery process.

Each state in the message delivery process potentially represents a moment in the life of the agent where it can experience the conditions present at that moment and determine the next action it should take in response to those conditions. The default action may be to proceed to the next natural step in the protocol but under some conditions the agent might choose to do something else. It may initiate some kind of analysis to gather additional information or it might execute some other intermediate step that manipulates the underlying protocol.

The collection of steps that have been taken at any point and the potential steps that are possible from that point forward represent various “filtering strategies.” Filtering strategies can be selected and adjusted by the agent based on the changing conditions it perceives, successful patterns that it has learned, and the preferences established by administrators and users of the system.

The filtering strategies made available to the agent can be restrictive so that the system’s behavior is purely deterministic; or they can be flexible to allow the agent to learn, grow, and adapt. The constraints and parameters that are established represent the system policy and ultimately define what degrees of freedom are provided to the agent under various conditions. The agent works within these restrictions to optimize the performance of the system.

In a highly restrictive environment the agent might only be allowed to determine which DNSBLs to check based on their speed and accuracy. Suppose there are several blacklists that are used to reject new connections. If one of these blacklists were to become slow to respond or somehow inaccurate (undesirable) then the agent might be allowed to exclude that test from the filtering strategy for a time. It might also be allowed to change the order in which the available blacklists are checked so that faster, less comprehensive blacklists are checked first. This would have the effect of reducing system loads and improving performance by rejecting many connections early in the process and applying slower tests only after the majority of connections have been eliminated.
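As a sketch of that restricted case, the agent might keep simple timing and accuracy statistics for each blacklist and re-plan its checks from them. The bookkeeping below is invented for illustration:

class BlacklistStats:
    def __init__(self, name):
        self.name = name
        self.avg_ms = 50.0     # rolling average response time
        self.errors = 0        # answers later judged inaccurate

def plan_dnsbl_checks(blacklists, max_ms=400, max_errors=5):
    """Drop tests that have become slow or unreliable, then query the fastest ones first."""
    usable = [b for b in blacklists if b.avg_ms <= max_ms and b.errors <= max_errors]
    return sorted(usable, key=lambda b: b.avg_ms)

lists = [BlacklistStats('fast-local'), BlacklistStats('big-slow')]
lists[1].avg_ms = 900.0        # this one has started timing out, so it is excluded for now
print([b.name for b in plan_dnsbl_checks(lists)])   # ['fast-local']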

A conservative administrator might also permit the agent to select only cached results from some blacklists that are otherwise too slow. The agent might make this choice in order to gain benefit from blacklists that would otherwise degrade the performance of the system. In this scenario the cached results from a slow but accurate blacklist would be used to evaluate each message and the blacklist would be queried out of band for any cache misses. If the agent perceived an improvement in the speed of the blacklist then it could elect to use the blacklist normally again.

[Figure: ProgressiveEvaluationFramework]

Figure 1 – Basic Progressive Response Model for Email Processing

Refer to Figure 1. Generally the agent “lives” in a sequence of events that are triggered by new information. At the most basic level it is either acting or waiting for new information to arrive. When information is added to its local context (red arrows) then that new information is applied (green arrows) to the current state of the agent (blue boxes).

If the new information is relevant and sufficient then it will trigger the agent to take action again and change its state, thus moving the process forward. Each action is potentially guided by all of the information that is available in the local context including a complete history of all previous actions.

In Figure 1, an agent waiting asleep is prompted by its local context to let it know that a new connection has occurred. Let’s assume that this particular system is designed so that each agent is assigned a single connection. The agent acts by waking up and changing its state. That action (a change in its own state) triggers the next action which is to issue a command to test the local blacklist with the new IP. Then, the agent changes its state again to a branching state where it will respond to the local blacklist result once it is available. At this point the agent goes back to a waiting state until new information arrives from the test because it is unable to continue without new information.

Next, the local blacklist result arrives in the local context. This prompts the agent again causing it to evaluate the local blacklist result. Depending upon that result it will choose one of two filtering strategies to use moving forward: either rejecting the connection or proceeding to another test.

This process continues with the agent receiving new stimuli and responding to those stimuli according to the conditions it recognizes. Each stimulus elicits a response and each response is itself a stimulus. The chain of stimuli and responses causes the agent to interact with the process, following a path through the states made available to it by progressively selecting filtering strategies as it goes.
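A stripped-down sketch of that stimulus/response loop might look like the following. The states and events are placeholders chosen for illustration, not the actual filtering states:

def run_agent(events):
    """Walk a connection through a progressive filtering strategy, one stimulus at a time."""
    state, context = 'WAIT_CONNECT', {}
    for event, data in events:          # each event is new information arriving in the local context
        context[event] = data
        if state == 'WAIT_CONNECT' and event == 'connected':
            state = 'WAIT_LOCAL_BL'     # action: ask for a local blacklist test of data['ip']
        elif state == 'WAIT_LOCAL_BL' and event == 'local_bl_result':
            state = 'REJECT' if data['listed'] else 'WAIT_HEADERS'
        elif state == 'WAIT_HEADERS' and event == 'headers':
            state = 'CONTENT_SCAN'      # later steps see a richer local context
    return state, context

final, ctx = run_agent([('connected', {'ip': '203.0.113.7'}),
                        ('local_bl_result', {'listed': False}),
                        ('headers', {'from': 'bank.example'})])
print(final)   # CONTENT_SCAN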

As each step is taken additional information about the session and each message builds up. Each new piece of information becomes part of the local environment for the agent and allows it to make more sophisticated choices. In addition to conventional test data the agent also builds up other information about its operating environment such as performance statistics about the server, other sessions that are active, partial results from its own calculations, and references to previous “experiences” that are “interesting” to its learning algorithms.

Agents might also communicate with each other to share information. This would allow these agents to form a kind of group intelligence by sharing their experiences and the performance of their filtering strategies. Each agent would gain more comprehensive access to test data and the workload of devising and evaluating new strategies would be divided among a larger and more diverse collection of systems.

The level of sophistication that is possible is limited only by the sophistication of the agent software and the restrictions imposed by system policies. This framework is also flexible enough to accommodate additional technologies as they are developed so that the costs and risks associated with future upgrades are reduced.

Typically any new technologies would appear to the agent as optional tools for new filtering strategies. Existing filtering strategies could be modified to test the qualities of the new tools before allowing them to affect the message flow. This testing might be performed deterministically by the system administrator or the agent might be allowed to adapt to the presence of the new tool and integrate it automatically once it learns how to use it through experimentation.

So far the description we have used is strictly mechanical. Even in an intelligent system it would be possible and occasionally desirable for the system administrator to specify a completely deterministic set of filtering strategies. However, on a system that is not as restrictive there are two opportunities for the intelligence of the agent to emerge.

Parametric adaptation might allow the agent to respond with some flexibility within a given filtering strategy. For example, if the local blacklist test were replaced by a local IP reputation test then the agent might have a variable threshold that it uses to judge whether the connecting IP “failed.” As a result it would be allowed to select filtering strategies based upon learning algorithms that adjust IP reputation thresholds and develop the IP reputation statistics.
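As a hedged illustration, parametric adaptation can be as simple as nudging a reputation threshold whenever later evidence shows the judgment was wrong. The update rule below is only a placeholder for whatever learning algorithm is actually used:

    class ReputationThreshold:
        """Illustrative parametric adaptation of an IP reputation cutoff."""

        def __init__(self, threshold=0.7, learning_rate=0.01):
            self.threshold = threshold
            self.learning_rate = learning_rate

        def fails(self, reputation_score):
            """Does this connecting IP 'fail' the reputation test?"""
            return reputation_score < self.threshold

        def feedback(self, reputation_score, was_spam):
            """Adjust the cutoff when later evidence contradicts the judgment."""
            if was_spam and not self.fails(reputation_score):
                self.threshold += self.learning_rate    # be stricter next time
            elif not was_spam and self.fails(reputation_score):
                self.threshold -= self.learning_rate    # be more lenient next time
            self.threshold = min(max(self.threshold, 0.0), 1.0)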

Structural adaptation might allow the agent to swap out components of filtering strategies. Segments of filtering strategies might be represented in a genetic algorithm. After each session is complete the local context would contain a complete record of the strategies that were followed and the conditions that led to those strategies. Each of these sessions could be added to a pool, evaluated for fitness (out of band), and the most successful strategies could then be selected to produce a new population of strategies for trial. A more sophisticated system might even simulate new strategies using data recorded in previous sessions so that the fitness of new filtering strategies could be predicted before testing them on live messages.
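A rough sketch of one such selection loop follows. The encoding of a filtering strategy as a list of test names is purely hypothetical, and the fitness function is assumed to be computed out of band from logged sessions as described above:

    import random

    def evolve(population, fitness, elite=0.2, mutation_rate=0.1, tests=None):
        """One generation of structural adaptation over filtering strategies.

        population : list of strategies; each strategy is a list of test names
        fitness    : function scoring a strategy from logged session outcomes
        """
        tests = tests or ["local_bl", "remote_bl", "reputation", "content_scan"]
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:max(2, int(len(ranked) * elite))]   # keep the most successful

        next_generation = list(parents)
        while len(next_generation) < len(population):
            a, b = random.sample(parents, 2)
            if min(len(a), len(b)) < 2:                       # too short to cross over
                child = list(a)
            else:
                cut = random.randint(1, min(len(a), len(b)) - 1)
                child = a[:cut] + b[cut:]                     # single-point crossover
            if child and random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.choice(tests)
            next_generation.append(child)
        return next_generation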

Structural and parametric adaptation allow an agent to explore a wide range of strategies and tuning parameters so it can adopt strategies that produce the best performance across a range of potentially conflicting criteria. In order to balance the need for both speed and accuracy the agent might evolve a progressive filtering strategy that leverages lightweight tests early in the process in order to reduce the cost of performing more sophisticated tests later in the process. It might also improve accuracy by combining the scores of less accurate tests using various tunable weighting schemes in order to refine the results.
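For example, one simple tunable weighting scheme is a normalized weighted average of the individual test scores, with the weights themselves being parameters the agent can learn:

    def combined_score(scores, weights):
        """Normalized weighted blend of several imperfect test scores.

        scores  : dict mapping test name -> score in [0, 1]
        weights : dict mapping test name -> tunable weight
        """
        total = sum(weights.values())
        if total == 0:
            return 0.0
        return sum(scores[name] * weight for name, weight in weights.items()) / total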

Another interesting adaptation might depend on session-specific parameters such as the connecting system’s address range, HELO and MAIL FROM: information, header structure, or even the timing and sequence of the events in the underlying protocol. Over time the agent might learn to use different strategies for messages that appear to be from banks, online services, or dynamic address ranges.

Given enough flexibility and sensitivity it could learn to recognize early clues in the message delivery process and select from dozens of highly tuned filtering strategies, each optimized for its own class of messages. For example, it might learn to recognize and distrust systems that stall on open connections, attempt to use pipelining before asking permission, or attempt to guess recipient addresses through dictionary attacks.

It might also learn to recognize that messages from particular senders always include specific features. Any messages that disagree with the expected models would be tested by filtering strategies that are more “careful” and apply additional tests.

Systems with intelligent agents have the ability to adapt automatically as operating conditions change, new tests are made available, and test qualities change over time. This ability can be extended if collections of agents are allowed to exchange some of their more successful “formulas” with each other so that all of the participating agents can learn best practices from each other. Agents that share information tend to converge on optimal solutions more quickly.

There are also potential benefits to sharing information between systems of different types. Intelligent intrusion detection systems, application servers, firewalls, and email servers could collaborate to identify attackers and harden systems against new attack vectors in real time. Specialized agents operating in client applications could further accelerate these adaptations by contributing data from the end user’s point of view.

Of course, optimizing system performance and responding to external threats are only parts of the solution. In order to be successful these systems must also be able to adapt to changing stakeholder preferences.

Consider that a large scale filtering system needs to accommodate the preferences of system administrators in charge of managing the infrastructure, group administrators in charge of managing groups of email domains and/or email users, power users who desire a high degree of control, and ordinary users who simply want the system to work reliably and automatically.

In a scenario like this various parts of the filtering strategy might be modified or swapped into place at various stages based on the combined preferences of all stakeholders.


Figure 2 – Structural Adaptation Per User

At any point during the progressive evaluation process it is possible to change the remaining steps in the filtering strategy. The change might be in response to the results of a test, results from an analysis tool, a change in system performance data, or new information about the message.

In Figure 2 we show how the filtering strategy established by the administrator is followed until the first recipient is established. The first recipient is interpreted as the primary user for this message. Once the user is known the remainder of the filtering strategy is selected and adjusted based on the combined user, domain, group, and administrator preferences that apply.

Beginning with settings established by the system administrator, each successively more specific layer is allowed to modify parts of the filtering strategy so that the composite that is ultimately used represents a blend of all relevant preferences. The higher, more general layers determine the default settings and establish how preferences can be modified by the lower, more specific layers.


Figure 3 – Blended Profile Selection

Refer to Figure 3. The applicable layers are selected from the bottom up. The specific user belongs to a domain, a domain belongs to a group, a group may belong to another group, and all top-level groups belong to the system administrator. Once a specific user (recipient) is identified, the applicable layers can be selected by following the parent of each layer until the top (administrator) layer is reached. Then the defaults set by the administrator are applied downward and modified by each layer along the same path until the user is reached again. The resulting preferences contain a blend of the preferences defined at each layer.
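A minimal sketch of that bottom-up selection and top-down blending might look like the following, assuming each layer is represented as a dictionary of preference overrides with a reference to its parent layer. The “locked” mechanism stands in for however a more general layer constrains the layers below it:

    def blended_preferences(user_layer):
        """Blend preferences along the path user -> domain -> group(s) -> administrator.

        Each layer is assumed to look like:
            {"parent": <layer or None>, "prefs": {...}, "locked": [keys lower layers may not change]}
        """
        # Select the applicable layers from the bottom up.
        path = []
        layer = user_layer
        while layer is not None:
            path.append(layer)
            layer = layer.get("parent")

        # Apply defaults downward: administrator first, the specific user last.
        blended, locked = {}, set()
        for layer in reversed(path):
            for key, value in layer.get("prefs", {}).items():
                if key not in locked:              # respect settings locked above
                    blended[key] = value
            locked |= set(layer.get("locked", ()))
        return blended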


Figure 4 – Composite Strategy Interaction

It is important to note that these drawings are potentially misleading in that they may appear to show that the agent is responsible for executing the SMTP protocol and all that it implies. In practice that would not be the case. Some of the key states in the illustrated filtering strategies have been named for states in the SMTP protocol because the agent is intended to respond to those specific conditions. However, the machinery of the protocol itself is managed by other parts of the software, most likely embedded in the machinery that builds and maintains the local context.

You could say that the local context provides the world where the intelligent agent lives. The local context implements an API that describes what the agent can know and how it can respond. The agent and the local context interact by passing messages to each other through this API.

Typically the local context and the agent are separate modules. The local context module contains the machinery for interacting with the real world, interpreting its conditions, and presenting those conditions to the agent in a form it can understand. The agent module contains the machinery for learning and adapting to the artificial world presented by the local context. Both of these modules can be developed and maintained independently as long as the API remains stable.
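In code terms, that separation might look like two modules joined by a narrow message-passing API. The method names here (notify, perform, handle) are assumptions chosen for illustration only:

    class LocalContext:
        """Owns the machinery of the real world: protocol handling, tests, timers."""

        def __init__(self, agent_factory):
            self.agent = agent_factory(self)    # the agent lives inside this world

        def notify(self, event):
            """Present a real-world condition to the agent in a form it understands."""
            for action in self.agent.handle(event):
                self.perform(action)

        def perform(self, action):
            # Translate the agent's abstract response into a real-world effect,
            # e.g. issuing an SMTP reply or starting another test.
            print("performing:", action)


    class LearningAgent:
        """Owns the machinery for learning and adapting; sees only the API."""

        def __init__(self, context):
            self.context = context

        def handle(self, event):
            # Decide how to respond to the condition presented by the local context.
            yield {"type": "log", "event": event}

    # Example: the context builds its agent, then feeds it an event.
    # ctx = LocalContext(LearningAgent)
    # ctx.notify({"type": "connect", "ip": "203.0.113.7"})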

It should be noted that this kind of framework can be applied broadly to many kinds of systems, not just email processing and other systems on the Internet. It is possible to map synthetic intelligence like this into any system that has sufficiently structured protocols and can tolerate some inconsistency during adaptation. The protocols provide a foundation upon which an intelligent agent can “grow” its learning processes. A tolerance for adaptation provides a venue for intelligent experimentation and optimization to occur.

Further, the progressive evaluation model is also not limited to large-scale processes like message delivery. It can also inform the development of smaller applications and even specialized functions embedded in other programs. A lightweight implementation of this technique underpins the design of the pattern matching engine used in Message Sniffer. Unlike conventional pattern matching engines, Message Sniffer uses a swarm of lightweight intelligent agents that explore their data set collaboratively in the context of an artificial “world” that is structured to represent their collective knowledge. Each of these agents progressively evaluates its location in the world, its location in the data set, its own state, and the locations and states of its peers. This approach allows the engine to be extremely efficient and largely unaffected by the number of patterns it must recognize simultaneously.

Broadly speaking, this technique can be applied to a wide range of tasks such as automated network management, data systems provisioning, process control and diagnostics, interactive help desks, intelligent data mining, logistics, robotics, flight control systems, and many others.

Of course, email processing is a natural fit for applications that implement the Progressive Evaluation Model as a way to leverage machine learning and other AI techniques. The Internet community has already demonstrated a willingness to “bend” the SMTP protocol when necessary, SMTP provides a good foundation upon which to build intelligent interactive agents, and messaging security is a complex, dynamic problem in search of strong solutions.