Aug 26, 2013
 

One of the problems with machine learning in an uncontrolled environment is lies. Bad data, noise, and intentional or unintentional misinformation complicate learning. In an uncontrolled environment any intelligence (synthetic or otherwise) is faced with the extra task of separating truth from fiction.

Take GBUdb, for example. Message Sniffer’s GBUdb engine learns about IP behaviors by watching SNF’s scan results. Generally if a message scan matches a spam or malware rule then the IP that delivered the message gets a bad mark. If the scanner does not find spam or malware then the IP that sent the message is given the benefit of the doubt and gets a good mark.

In a perfect world this simple algorithm generates reliable statistics about what we can expect to see from any given IP address. As a result we can use these statistics to help Message Sniffer perform better. If GBUdb can predict spam and malware from an IP with high confidence then we can safely stop looking inside the message and tag it as bad.

Similarly if GBUdb can predict that an IP address only sends us good messages then we can let the message through. Even better than that — if the message matches a new spam or malware rule then most likely we’ve made a mistake. In that case we can turn off the troublesome rule, let the message through, and raise a flag so bigger brains can take a look and fix the error.
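
Concretely, the bookkeeping described above might look something like the following minimal Python sketch. This is not GBUdb’s actual implementation; the class names, thresholds, and sample counts are placeholders chosen only to illustrate the idea of good/bad marks and confidence-based verdicts.

from collections import defaultdict

class IPStats:
    """Good/bad scan counts for a single IP address."""
    def __init__(self):
        self.good = 0    # scans that matched no spam/malware rule
        self.bad = 0     # scans that matched a spam/malware rule

    def bad_ratio(self):
        total = self.good + self.bad
        return self.bad / total if total else None

class ReputationDB:
    """Toy stand-in for an IP reputation store in the spirit of GBUdb."""
    def __init__(self, bad_threshold=0.95, good_threshold=0.05, min_samples=20):
        self.stats = defaultdict(IPStats)
        self.bad_threshold = bad_threshold       # placeholder confidence levels
        self.good_threshold = good_threshold
        self.min_samples = min_samples           # don't act on just a handful of scans

    def learn(self, ip, rule_matched):
        s = self.stats[ip]
        if rule_matched:
            s.bad += 1       # bad mark
        else:
            s.good += 1      # benefit of the doubt

    def verdict(self, ip):
        """'bad' = tag without scanning, 'good' = deliver, None = not enough evidence."""
        s = self.stats[ip]
        ratio = s.bad_ratio()
        if ratio is None or (s.good + s.bad) < self.min_samples:
            return None
        if ratio >= self.bad_threshold:
            return "bad"
        if ratio <= self.good_threshold:
            return "good"
        return None

In a sketch like this, the mistaken-rule case described above corresponds to a message from a "good"-verdict IP matching a brand-new rule: the rule, not the message, becomes the prime suspect.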

Right?

Not always!

Message Sniffer’s Auto-Panic feature does a fantastic job of helping us catch problems before they can cause trouble, but Auto-Panic can also be tricked into letting more spam through the filters.

When a new pre-tested spam campaign is launched on a new bot-net there is some period of time where completely unknown IP addresses are sending messages that are guaranteed (pre-tested) not to match any recognizable patterns. All of these IPs end up gathering good marks for sending “apparently” clean messages… and since they are churning out messages as fast as they can they gain a good reputation quickly.

Back at the lab the SortMonsters and RuleBots are hard at work analyzing samples and creating rules to recognize the new campaign. This takes a little bit of time and during that time GBUdb can’t help but become convinced that some of these IPs are good sources. The statistics prove it, after all.

When the new pattern rules get out to the edges the Auto-Panic feature begins to work against us. When the brand new pattern rules find spam or malware coming from one of these new IPs it looks like a mistake. So, Auto-Panic fires and turns off the new rules!

For a time the gates are held wide open. As new bots come online they get extra time to sneak their messages through while the new rules are suppressed by Auto-Panic. Not only that but all of the new IPs quickly gain a reputation for sending good messages so that they too can trigger the Auto-Panic feature.

In order to solve this problem we’ve introduced a new behavior into the learning engine. We’ve made it skeptical of new, clean IPs. We call it White-Guard.

White-Guard singles out IPs that are new to GBUdb and possibly pretending to be good message sources. Instead of taking the new statistics at face value the system decides immediately not to trust them and not to distrust them either. The good and bad counts are artificially set to the same moderately high value.

It’s like a stranger arriving in a small town. The townsfolk won’t treat the stranger badly, but they won’t trust them either. They withhold judgement for a while to see what the stranger does. Whatever opinion the town ultimately forms, the stranger is going to have to earn it.

In GBUdb, the White-Guard behavior sets up a neutral bias that must be overcome by new data before any actions will be triggered by those statistics. Eventually the IP will earn a good or bad reputation but in the short term any new “apparently” clean IPs will be unable to trigger Auto-Panic.
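
Continuing the toy sketch above, the White-Guard seeding might look something like this; the bias value is a placeholder, not GBUdb’s actual parameter.

WHITE_GUARD_BIAS = 50    # hypothetical neutral starting count

def white_guard(db, ip):
    """Seed an IP that is new to the database with equal good and bad counts."""
    s = db.stats[ip]
    if s.good == 0 and s.bad == 0:     # only IPs we have never scored before
        s.good = WHITE_GUARD_BIAS
        s.bad = WHITE_GUARD_BIAS       # bad_ratio() starts at 0.5: neither trusted nor distrusted

With both counts pinned at the same moderate value the ratio starts at exactly 0.5, and many consistent scans are needed before either threshold can be crossed, so a fresh bot-net IP cannot look “good” overnight.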

With Auto-Panic temporarily out of reach for these sources new pattern rules can take effect more quickly to block the new campaigns. This earns most of the new bot-net IPs the bad reputations they deserve and helps to increase early capture rates.

Since we’ve implemented this new learning behavior we have seen a significant increase in the effectiveness of the GBUdb system as well as an improvement in the accuracy of our rule conflict instrumentation and sampling rates. All of these outcomes were predicted when modeling the dynamics of this new behavior.

It is going to take a little while before we get the parameters of this new feature dialed in for peak performance, but early indications are very good and it’s clear we will be able to apply the lessons from this experiment to other learning scenarios in the future.

 

Jun 07, 2013
 

The new blackhatzes on the scene:

In the past few weeks we’ve seen a lot of heavy new spam coming around, and most of it is pre-tested against existing filters. This has caused everybody to leak more spam than usual. Message Sniffer is leaking too because the volume and variability are so much higher than normal. That said, we are a bit better than most at stopping some of the new stuff.

The good thing about SNF is that instead of waiting to detect repeating patterns or building up statistics on sender behaviors, our system concentrates on capturing new spam before it is ever released by reverse engineering the content of the messages we do see.

Quite often this means we’ve got rules that predict new spam or malware hours or even days before they get into the wild. Some pre-tested spam will always get through though because the blackhatzes test against us too, and not all systems can defend against that by using a delay technique like gray-listing or “gauntlet.”

What about the little guys?

This can be particularly hard on smaller systems that don’t process a lot of messages and perhaps don’t have the resources to spend on filtering systems with lots of parts.

I was recently asked: “what can I do to improve SNF performance in light of all the new spam?” This customer has a smaller system that processes fewer than 10,000 messages per day.

One of the challenges with systems like this is that if a spammer sends some new pre-tested spam through an old bot, GBUdb might have forgotten about the IP by the time the new message comes through. This is because GBUdb will “condense” its database once per day by default… so, if an IP is only seen once in a day (as it might be on a system like this) then by the next day it is forgotten.

Tweaking GBUdb:

The default settings for GBUdb were designed to work well on most systems and to stay out of the way on all of the others. The good news is that these settings were also designed to be tweaked.

On smaller systems we recommend using a longer time trigger for GBUdb.

Instead of the default setting which tells SNF to compress GBUdb once per day:

<time-trigger on-off='on' seconds='86400'/>

You can adjust it to compress GBUdb once every 4 days:

<time-trigger on-off='on' seconds='345600'/>

That will generally increase the number of messages that are captured based on IP reputation by improving GBUdb’s memory.

It’s generally safe to experiment with these settings to extend the time further… although that may have diminishing returns because IPs are usually blocked by blacklists after a while anyway.

Even so, it’s a good technique because some of these IPs may not get onto blacklists that you are using – and still more of them might come from ISPs that will never get onto blacklists. GBUdb builds a local impression of IP reputations as it learns what your system is used to seeing. If all you get is spam from some ISP then that ISP will be blacklisted for you even if other systems get good messages from there. If those other systems also use GBUdb then their IP reputations would be different so the ISP would not be blocked for them.

If you want to be adventurous:

There is another way to compress GBUdb data that is not dependent on time, but rather on the amount of memory you want to allocate to it. By default the size-trigger is set to about 150 megabytes. This setting is really just a safety net, and on today’s systems that isn’t much memory, so you could turn off the time trigger if you wish and simply let GBUdb remember what it can within 150 megabytes. If you go this route then GBUdb will automatically keep track of the IPs that it sees frequently and forget about those that come and go. On systems that have the memory to spare this is the method I like most.

You can find complete documentation about these GBUdb settings on the ARM site.

 

Feb 18, 2013
 

MicroNeil has always been interested in the application of synthetic intelligence to real-world problems. So, when we were presented with the challenge of protecting messaging systems (and specifically email) from abuse, we applied machine learning and other AI techniques as part of the solution.

Email processing, and especially filtering, presents a number of challenges:

  • The Internet is increasingly a hostile environment.
  • Any systems that are exposed to the Internet must be hardened against attack.
  • The value of the Internet is derived from its openness. This openness tends to be in conflict with protecting systems from attack. Therefore, security measures must be carefully crafted so that they offer protection from abuse without compromising desirable and appropriate operations.
  • The presence of abuse and the corresponding need for sophisticated countermeasures sets up an environment that is constantly evolving and growing in complexity.
  • There is disagreement on: what constitutes abuse, the design of countermeasures and safeguards, what risks are acceptable, and what tactics are appropriate.
  • All of these conditions change over time.

As a consequence of these circumstances, any successful filtering system must be extremely efficient, flexible, and dynamic. At the same time it must respond to this complexity without becoming too complex to operate. This sounds like a perfect place to apply synthetic intelligence, but in order to do that we need to use a framework that models an intelligent entity interacting with its environment.

The progressive evaluation model provides precisely that kind of framework while preserving both flexibility and control. This is accomplished by mapping a synthetic environment and the potential responses of an intelligent automaton (agent) onto the state map of the SMTP protocol and the message delivery process.

Each state in the message delivery process potentially represents a moment in the life of the agent where it can experience the conditions present at that moment and determine the next action it should take in response to those conditions. The default action may be to proceed to the next natural step in the protocol but under some conditions the agent might choose to do something else. It may initiate some kind of analysis to gather additional information or it might execute some other intermediate step that manipulates the underlying protocol.

The collection of steps that have been taken at any point and the potential steps that are possible from that point forward represent various “filtering strategies.” Filtering strategies can be selected and adjusted by the agent based on the changing conditions it perceives, successful patterns that it has learned, and the preferences established by administrators and users of the system.

The filtering strategies made available to the agent can be restrictive so that the system’s behavior is purely deterministic; or they can be flexible to allow the agent to learn, grow, and adapt. The constraints and parameters that are established represent the system policy and ultimately define what degrees of freedom are provided to the agent under various conditions. The agent works within these restrictions to optimize the performance of the system.

In a highly restrictive environment the agent might only be allowed to determine which DNSBLs to check based on their speed and accuracy. Suppose there are several blacklists that are used to reject new connections. If one of these blacklists were to become slow to respond or somehow inaccurate (undesirable) then the agent might be allowed to exclude that test from the filtering strategy for a time. It might also be allowed to change the order in which the available blacklists are checked so that faster, less comprehensive blacklists are checked first. This would have the effect of reducing system loads and improving performance by rejecting many connections early in the process and applying slower tests only after the majority of connections have been eliminated.
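
For example, a sketch of that restricted degree of freedom might look like this, where the agent may only reorder or temporarily exclude DNSBL checks; the metrics and thresholds are hypothetical, not part of any actual product.

from dataclasses import dataclass

@dataclass
class Blacklist:
    name: str
    avg_response_ms: float   # rolling average response time observed by the agent
    accuracy: float          # agreement with final dispositions, 0..1

def plan_dnsbl_checks(blacklists, max_ms=500.0, min_accuracy=0.98):
    """Drop lists that are currently too slow or inaccurate; check the fastest first."""
    usable = [bl for bl in blacklists
              if bl.avg_response_ms <= max_ms and bl.accuracy >= min_accuracy]
    return sorted(usable, key=lambda bl: bl.avg_response_ms)

checks = plan_dnsbl_checks([
    Blacklist("fast-local", 5.0, 0.985),
    Blacklist("comprehensive-remote", 220.0, 0.995),
    Blacklist("flaky-remote", 900.0, 0.970),   # benched for now: too slow and too inaccurate
])

Slower, more comprehensive lists simply move to the back of the line or sit out until their measured performance recovers.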

A conservative administrator might also permit the agent to select only cached results from some blacklists that are otherwise too slow. The agent might make this choice in order to gain benefit from blacklists that would otherwise degrade the performance of the system. In this scenario the cached results from a slow but accurate blacklist would be used to evaluate each message and the blacklist would be queried out of band for any cache misses. If the agent perceived an improvement in the speed of the blacklist then it could elect to use the blacklist normally again.


Figure 1 – Basic Progressive Response Model for Email Processing

Refer to Figure 1. Generally the agent “lives” in a sequence of events that are triggered by new information. At the most basic level it is either acting or waiting for new information to arrive. When information is added to its local context (red arrows) then that new information is applied (green arrows) to the current state of the agent (blue boxes).

If the new information is relevant and sufficient then it will trigger the agent to take action again and change its state, thus moving the process forward. Each action is potentially guided by all of the information that is available in the local context including a complete history of all previous actions.

In Figure 1, an agent waiting asleep is prompted by its local context to let it know that a new connection has occurred. Let’s assume that this particular system is designed so that each agent is assigned a single connection. The agent acts by waking up and changing its state. That action (a change in its own state) triggers the next action which is to issue a command to test the local blacklist with the new IP. Then, the agent changes its state again to a branching state where it will respond to the local blacklist result once it is available. At this point the agent goes back to a waiting state until new information arrives from the test because it is unable to continue without new information.

Next, the local blacklist result arrives in the local context. This prompts the agent again, causing it to evaluate the local blacklist result. Depending upon that result it will choose one of two filtering strategies to use moving forward: either rejecting the connection or proceeding to another test.

This process continues with the agent receiving new stimuli and responding to those stimuli according to the conditions it recognizes. Each stimulus elicits a response and each response is itself a stimulus. The chain of stimuli and responses causes the agent to interact with the process, following a path through the states made available to it by progressively selecting filtering strategies as it goes.
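
A toy event loop illustrating this stimulus/response cycle might look like the sketch below; the state names, queue mechanics, and two-step strategy are assumptions made only for illustration.

import queue

def run_agent(events):
    """Drive one connection through a tiny two-step strategy; returns the outcome."""
    state = "asleep"
    context = {}                                 # everything the agent has seen so far
    while True:
        stimulus = events.get()                  # sleep until new information arrives
        context.update(stimulus)                 # add it to the local context
        if state == "asleep" and "connection_ip" in context:
            # wake up: ask the local context to run the local blacklist test
            state = "awaiting_blacklist_result"
        elif state == "awaiting_blacklist_result" and "blacklist_hit" in context:
            # branch on the result: reject the connection or move on to another test
            return "reject" if context["blacklist_hit"] else "continue_filtering"

events = queue.Queue()
events.put({"connection_ip": "192.0.2.1"})       # stimulus: a new connection
events.put({"blacklist_hit": False})             # stimulus: the blacklist result arrives
print(run_agent(events))                         # -> continue_filtering

A blacklisted IP would end the session at the reject branch instead; a fuller strategy would simply add more states and more kinds of stimuli.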

As each step is taken additional information about the session and each message builds up. Each new piece of information becomes part of the local environment for the agent and allows it to make more sophisticated choices. In addition to conventional test data the agent also builds up other information about its operating environment such as performance statistics about the server, other sessions that are active, partial results from its own calculations, and references to previous “experiences” that are “interesting” to its learning algorithms.

Agents might also communicate with each other to share information. This would allow these agents to form a kind of group intelligence by sharing their experiences and the performance of their filtering strategies. Each agent would gain more comprehensive access to test data and the workload of devising and evaluating new strategies would be divided among a larger and more diverse collection of systems.

The level of sophistication that is possible is limited only by the sophistication of the agent software and the restrictions imposed by system policies. This framework is also flexible enough to accommodate additional technologies as they are developed so that the costs and risks associated with future upgrades are reduced.

Typically any new technologies would appear to the agent as optional tools for new filtering strategies. Existing filtering strategies could be modified to test the qualities of the new tools before allowing them to affect the message flow. This testing might be performed deterministically by the system administrator or the agent might be allowed to adapt to the presence of the new tool and integrate it automatically once it learns how to use it through experimentation.

So far the description we have used is strictly mechanical. Even in an intelligent system it would be possible and occasionally desirable for the system administrator to specify a completely deterministic set of filtering strategies. However, on a system that is not as restrictive there are two opportunities for the intelligence of the agent to emerge.

Parametric adaptation might allow the agent to respond with some flexibility within a given filtering strategy. For example, if the local blacklist test were replaced by a local IP reputation test then the agent might have a variable threshold that it uses to judge whether the connecting IP “failed.” As a result it would be allowed to select filtering strategies based upon learning algorithms that adjust IP reputation thresholds and develop the IP reputation statistics.
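
A minimal sketch of that kind of parametric adaptation might look like this; the update rule, step size, and bounds are assumptions, not part of any actual product.

def adjust_threshold(threshold, false_positives, missed_spam,
                     step=0.01, low=0.80, high=0.99):
    """Raise the rejection threshold when good mail is being refused,
    lower it when pre-tested spam is leaking through."""
    if false_positives > missed_spam:
        return min(high, threshold + step)   # be more forgiving
    if missed_spam > false_positives:
        return max(low, threshold - step)    # be more aggressive
    return threshold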

Structural adaptation might allow the agent to swap out components of filtering strategies. Segments of filtering strategies might be represented in a genetic algorithm. After each session is complete the local context would contain a complete record of the strategies that were followed and the conditions that led to those strategies. Each of these sessions could be added to a pool, evaluated for fitness (out of band), and the most successful strategies could then be selected to produce a new population of strategies for trial. A more sophisticated system might even simulate new strategies using data recorded in previous sessions so that the fitness of new filtering strategies could be predicted before testing them on live messages.
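
A sketch of the structural side, treating strategies as ordered lists of step names and selecting them generationally, might look like this. The fitness function, mutation operator, and population sizes are placeholders, and it assumes at least two recorded strategies to breed from.

import random

def evolve(strategies, fitness, keep=4, children=8, mutate=None):
    """Score recorded strategies, keep the best, and breed a new trial population."""
    ranked = sorted(strategies, key=fitness, reverse=True)[:keep]   # fitness judged out of band
    new_population = list(ranked)
    while len(new_population) < keep + children:
        a, b = random.sample(ranked, 2)                             # two parent strategies
        cut = random.randint(1, max(1, min(len(a), len(b)) - 1))
        child = a[:cut] + b[cut:]                                   # crossover: splice the step lists
        if mutate:
            child = mutate(child)                                   # optional mutation operator
        new_population.append(child)
    return new_population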

Structural and parametric adaptation allow an agent to explore a wide range of strategies and tuning parameters so it can adopt strategies that produce the best performance across a range of potentially conflicting criteria. In order to balance the need for both speed and accuracy the agent might evolve a progressive filtering strategy that leverages lightweight tests early in the process in order to reduce the cost of performing more sophisticated tests later in the process. It might also improve accuracy by combining the scores of less accurate tests using various tunable weighting schemes in order to refine the results.

Another interesting adaptation might depend on session specific parameters such as the connecting system address range, HELO, and MAIL FROM: information, header structure, or even the timing and sequence of the events in the underlying protocol. Over time the agent might learn to use different strategies for messages that appear to be from banks, online services, or dynamic address ranges.

Given enough flexibility and sensitivity it could learn to recognize early clues in the message delivery process and select from dozens of highly tuned filtering strategies that are each optimized for their own class of messages. For example it might learn to recognize and distrust systems that stall on open connections, attempt to use pipelining before asking permission, or attempt to guess recipient addresses through dictionary attacks.

It might also learn to recognize that messages from particular senders always include specific features. Any messages that disagree with the expected models would be tested by filtering strategies that are more “careful” and apply additional tests.

Systems with intelligent agents have the ability to adapt automatically as operating conditions change, new tests are made available, and test qualities change over time. This ability can be extended if collections of agents are allowed to exchange some of their more successful “formulas” with each other so that all of the participating agents can learn best practices from each other. Agents that share information tend to converge on optimal solutions more quickly.

There are also potential benefits to sharing information between systems of different types. Intelligent intrusion detection systems, application servers, firewalls, and email servers could collaborate to identify attackers and harden systems against new attack vectors in real time. Specialized agents operating in client applications could further accelerate these adaptations by contributing data from the end user’s point of view.

Of course, optimizing system performance and responding to external threats are only parts of the solution. In order to be successful these systems must also be able to adapt to changing stakeholder preferences.

Consider that a large scale filtering system needs to accommodate the preferences of system administrators in charge of managing the infrastructure, group administrators in charge of managing groups of email domains and/or email users, power users who desire a high degree of control, and ordinary users who simply want the system to work reliably and automatically.

In a scenario like this various parts of the filtering strategy might be modified or swapped into place at various stages based on the combined preferences of all stakeholders.


Figure 2 – Structural Adaptation Per User

At any point during the progressive evaluation process it is possible to change the remaining steps in the filtering strategy. The change might be in response to the results of a test, results from an analysis tool, a change in system performance data, or new information about the message.

In Figure 2 we show how the filtering strategy established by the administrator is followed until the first recipient is established. The first recipient is interpreted as the primary user for this message. Once the user is known the remainder of the filtering strategy is selected and adjusted based on the combined user, domain, group, and administrator preferences that apply.

Beginning with settings established by the system administrator each successively more specific layer is allowed to modify parts of the filtering strategy so that the composite that is ultimately used represents a blend of all relevant preferences. The higher, more general layers determine the default settings and establish how preferences can be modified by the lower, more specific layers.


Figure 3 – Blended Profile Selection

Refer to Figure 3. The applicable layers are selected from the bottom up. The specific user belongs to a domain, a domain belongs to a group, a group may belong to another group, and all top level groups belong to the system administrator. Once a specific user (recipient) is identified then the applicable layers can be selected by following a path through the parent of each layer until the top (administrator) layer is reached. Then, the defaults set by the administrator are applied downward and modified by each layer along the same path until the user is reached again. The resulting preferences contain a blend of the preferences defined at each layer.
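
One way to picture the blend, under assumed layer shapes (each layer as a dict of settings, a set of locked keys, and a parent pointer), is the following sketch.

def blended_preferences(user_layer):
    """Blend settings along the path user -> domain -> group(s) -> administrator."""
    # bottom-up: collect the path from the specific user to the top layer
    path = []
    layer = user_layer
    while layer is not None:
        path.append(layer)
        layer = layer.get("parent")
    # top-down: apply defaults first, then let each more specific layer override where allowed
    blended, locked = {}, set()
    for layer in reversed(path):
        for key, value in layer.get("settings", {}).items():
            if key not in locked:
                blended[key] = value
        locked |= set(layer.get("locked", ()))   # upper layers may lock keys against change
    return blended

admin  = {"settings": {"delay_minutes": 30, "use_dnsbl": True}, "locked": {"use_dnsbl"}, "parent": None}
domain = {"settings": {"delay_minutes": 15}, "parent": admin}
user   = {"settings": {"delay_minutes": 5, "use_dnsbl": False}, "parent": domain}
print(blended_preferences(user))   # use_dnsbl stays True (locked); delay_minutes becomes 5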


Figure 4 – Composite Strategy Interaction

It is important to note that these drawings are potentially misleading in that they may appear to show that the agent is responsible for executing the SMTP protocol and all that is implied. In practice that would not be the case. Some of the key states in the illustrated filtering strategies have been named for states in the SMTP protocol because the agent is intended to respond to those specific conditions. However the machinery of the protocol itself is managed by other parts of the software – most likely embedded in the machinery that builds and maintains the local context.

You could say that the local context provides the world where the intelligent agent lives. The local context implements an API that describes what the agent can know and how it can respond. The agent and the local context interact by passing messages to each other through this API.

Typically the local context and the agent are separate modules. The local context module contains the machinery for interacting with the real world, interpreting its conditions, and presenting those conditions to the agent in a form it can understand. The agent module contains the machinery for learning and adapting to the artificial world presented by the local context. Both of these modules can be developed and maintained independently as long as the API remains stable.
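
A hedged sketch of that module boundary, with illustrative method names rather than any actual SNF API, might look like this.

from typing import Protocol

class LocalContext(Protocol):
    def next_event(self) -> dict: ...             # what the agent is allowed to know
    def perform(self, action: dict) -> None: ...  # how the agent is allowed to respond

class Agent(Protocol):
    def react(self, event: dict) -> dict: ...     # map one observation onto one action

def couple(context: LocalContext, agent: Agent, steps: int = 100) -> None:
    """Run the two modules against each other through the API alone."""
    for _ in range(steps):
        context.perform(agent.react(context.next_event()))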

It should be noted that this kind of framework can be applied broadly to many kinds of systems – not just email processing and other systems on the Internet. It is possible to map synthetic intelligence like this into any system that has sufficiently structured protocols and can tolerate some inconsistency during adaptation. The protocols provide a foundation upon which an intelligent agent can “grow” its learning processes. A tolerance for adaptation provides a venue for intelligent experimentation and optimization to occur.

Further, the progressive evaluation model is also not limited to large-scale processes like message delivery. It can also inform the development of smaller applications and even specialized functions embedded in other programs. A lightweight implementation of this technique underpins the design of the pattern matching engine used in Message Sniffer. Unlike conventional pattern matching engines, Message Sniffer uses a swarm of lightweight intelligent agents that explore their data set collaboratively in the context of an artificial “world” that is structured to represent their collective knowledge. Each of these agents progressively evaluates its location in the world, its location in the data set, its own state, and the locations and states of its peers. This approach allows the engine to be extremely efficient and virtually insensitive to the number of patterns it must recognize simultaneously.

Broadly speaking, this technique can be applied to a wide range of tasks such as automated network management, data systems provisioning, process control and diagnostics, interactive help desks, intelligent data mining, logistics, robotics, flight control systems, and many others.

Of course, email processing is a natural fit for applications that implement the Progressive Evaluation Model as a way to leverage machine learning and other AI techniques. The Internet community has already demonstrated a willingness to “bend” the SMTP protocol when necessary, SMTP provides a good foundation upon which to build intelligent interactive agents, and messaging security is a complex, dynamic problem in search of strong solutions.

Jun 15, 2012
 

Back in the early days of spam fighting we recognized a problem with all types of filtering. No matter what kind of filtering you are using it is fairly trivial for an attacker to defeat your filters for a time by pretesting their messages on your system.

You can try to keep them out, but in the end, if you allow customers on your system then any one of them might be an attacker pretending to be an ordinary customer. To test a new campaign they simply send a sample to themselves and see if it makes it through. If it does then they have a winner. If it doesn’t then they need to try again. Either way they always appear to be just an ordinary customer that gets ordinary spam like anyone else.

The simplest and most effective way to solve this problem is to selectively delay the processing of some messages so that all of your filtering strategies have time to catch up to new threats. After a short delay these messages are sent through the filtering process again where they receive the benefit of any adjustments that have been made. We call this solution “Gauntlet” because it forces some messages to “run the gauntlet” before allowing them to pass.

The first step is to send your messages through your usual filtering process. You will be able to discard (or quarantine) most of these immediately. The remaining messages should be fairly clean but, most importantly, they will be a much smaller volume.
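
A minimal sketch of this two-pass flow, under stated assumptions (the scan() and should_delay() callables, the hold time, and the queue mechanics are placeholders, not a product API), might look like this.

import heapq
import itertools
import time

HOLD_SECONDS = 30 * 60           # e.g. a 30-minute gauntlet
_tiebreak = itertools.count()    # keeps the heap from ever comparing message objects

def first_pass(message, scan, should_delay, held, now=None):
    """Usual filtering first; clean-but-suspect messages are parked for a second look."""
    now = time.time() if now is None else now
    if scan(message) == "spam":
        return "quarantine"                      # most traffic is removed right here
    if should_delay(message):
        heapq.heappush(held, (now + HOLD_SECONDS, next(_tiebreak), message))
        return "held"
    return "deliver"

def release_due(held, scan, now=None):
    """Re-scan held messages once their hold expires; new rules have had time to catch up."""
    now = time.time() if now is None else now
    released = []
    while held and held[0][0] <= now:
        _, _, message = heapq.heappop(held)
        released.append((message, "quarantine" if scan(message) == "spam" else "deliver"))
    return released

Keying the held queue by release time means messages are re-scanned in order as their hold expires, picking up any rules created in the meantime.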

The next step is deciding which messages to delay. This is controversial because customer expectations are often unreasonable. Even though email was never designed to be an instantaneous form of communication it tends to be nearly so most of the time; and in any case most email users expect to receive messages within seconds of when they are sent.

The reality is that many messages take some time to be delivered and that there is usually very little control or knowledge on the part of the recipient regarding when messages are sent. As a result there is a fair amount of ambiguity over the apparent travel time of any given message. It is also true that while most customers will violently agree that email should never be delayed, under most circumstances a delay will be unnoticed and inconsequential. In fact one of the most powerful features of email is that the recipient can control when they receive and respond to email – unlike phone calls, instant messages, or friends dropping in unannounced.

This flexibility between perceived and actual delivery times gives us an opportunity to defeat pretested spam – particularly if we can be selective about which messages we delay.

The more sophisticated the selection process the less impact delayed processing will have on end users and support staff. Often the benefits from implementing Gauntlet far outweigh any discomfort that might occur.

For example, Message Sniffer generally responds to new threats within seconds of their arrival in spam traps and typically generates new rules within minutes of new outbreaks (if not immediately). Many of those messages, usually sent in dramatically large bursts, are likely to be reaching some mailboxes before they arrive in spam traps. If some messages can be delayed by as little as 10, 20, or 30 minutes then the vast majority of those messages will never reach a customer’s mailbox.

If a selective 30 minute delay can prevent virtually all of a new phishing or malware campaign from reaching its target then the benefits can be huge. If a legitimate bank notification is delayed by 30 minutes the delay is likely to go completely unnoticed. It is worth noting that many email users regularly go hours or even days without checking their mail!

On the other hand there are also email users (myself included) that are likely to “live in” their email – frequently responding to messages mere minutes or seconds after they arrive. For these users most of all, the sophistication of the selection process matters.

What should be selected for delayed processing?

More advanced systems might use sophisticated algorithms (even AI) to select messages in or out of delayed processing. A system like that might be tuned to delay anything “new” and anything common in recently blocked messages.

Less sophisticated systems might use lists of keywords and phrases to delay messages that look like high-value targets. Other simple selection criteria might include messages from ISPs and online services that are frequently abused or messages containing certain kinds of attachments. Some systems might choose to delay virtually all messages by some small amount while selecting others for longer delays.

A more important question is probably which messages should not be delayed. It is probably best to expedite messages that are responses to recently sent email, messages from known good senders such as those from sources with solid IP reputations, and those that have been white-listed by administrators or customers.
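
As a rough illustration of that triage, here is a hedged sketch of a should_delay() policy like the placeholder used in the earlier Gauntlet sketch; the message fields, trusted lists, and keywords are assumptions for illustration only.

def should_delay(msg, recent_outbound_ids, trusted_ips, whitelist,
                 risky_keywords=("verify your account", "invoice attached")):
    """Expedite replies, trusted sources, and whitelisted senders; delay the risky rest."""
    if msg.get("in_reply_to") in recent_outbound_ids:
        return False                             # a response to mail we recently sent
    if msg.get("source_ip") in trusted_ips:
        return False                             # solid local IP reputation
    if msg.get("sender") in whitelist:
        return False                             # whitelisted by an administrator or customer
    subject = msg.get("subject", "").lower()
    if any(k in subject for k in risky_keywords):
        return True                              # looks like a high-value target
    return msg.get("has_attachment", False)      # or delay risky attachment types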

In order to remove the mystery and offload some of the support work, the best solutions can put some of the controls in the hands of their customers. Customers who feel it is vital that none of their messages are delayed might opt out. Others who prefer to minimize their exposure to threats might elect to impose longer delays and to delay every message regardless of its source and content.

One customer who implemented Gauntlet back in the early days had an interesting spin on how they presented it to their users. Instead of telling them they were delaying some messages they told the customer that the delayed messages were initially quarantined as suspicious but later released automatically by sophisticated algorithms. This allowed them to implement relatively moderate delays without burdening their users with any additional complexity.

However it is implemented, delayed message processing is a powerful tool against pretested spam. Recent, dramatic growth in the volume and sophistication of organized attacks by cyber criminals is a clear sign that the time has come to implement sophisticated defenses like Gauntlet.

 

Apr 20, 2011
 

This week has seen some truly amazing spring weather around the MadLab including everything from tornado threats and sustained high winds to flash flooding and dense fog.

April showers, as the saying goes, will bring May flowers – so we don’t mind too much as long as the power stays on and the trees don’t fall on the roof!

In cyberspace things are also picking up, it seems. For about the last three weeks we’ve seen declining severity and frequency of spam storms. However, this week has been different.

Beginning about 3 days ago we’ve seen a surge in new spam storms and in particular a dramatic increase in the use of hacked web sites and URL shortener abuse.

Previous 30 days of spam storms.

After 3 weeks of declining spam storms, a new surge starts this week...

There is also another notable change in the data. For several years now there has been a pretty solid 24 hour cyclical pattern to spam storms. This week we’re seeing a much more chaotic pattern. This and other anecdotal evidence seems to suggest that the new spams are being generated more automatically and at lower levels across wider bot nets.

We used to see distinct waves of modifications and responses to new filtering patterns. Now we are seeing a much more chaotic and continuous flow of new spam storms as current campaigns are continuously modified to defeat filtering systems.

Chaotic spam storm arrival rates over the past 48 hours

There’s no telling if these trends will continue, nor for how long, but they do seem to suggest that new strategies and technologies are coming into use in the blackhatzes’ camps. No doubt this is part of the response to the recent events in the anti-spam world.

Microsoft takes down Rustock spam botnet

DOJ gets court permission to attack botnet

In response to the blackhatzes’ changes my anti-spam team and I have developed several new protocols and modified several of our automated friends (rule-bots) to take advantage of new artifacts in the data. The result has been a dramatic increase in the creation rate of new heuristics, reduced response times, and improved preemptive captures.

Rule Activity Display shows higher rule rates and smoother hit densities

With these changes, changes in blackhatz tactics, and new sniffer engine updates coming along I’m going to be very busy watching the blinking lights to keep track of the weather outside the MadLab and in cyberspace.

Mar 04, 2010
 

Those trixy blackhatzes are making a real mess of things these days. The last day or so in particular has been a festival of hacked servers and exploited free-hosting sites. Just look at this graph from our soon-to-be-launched Spam-Weather site:


While spammers have always enjoyed exploiting free services they have been particularly busy at it the last few days. The favorites this time around have been webstarts and doodlekits. What makes sites like these so attractive to the blackhats is that there is virtually no security on the sites. Anybody can sign up for a new account in minutes without any significant challenges. This means that the entire process can be scripted and automated by the blackhats.

After they’ve used one URL for a while (and it begins to get filtered) they simply light up another one, and so on, and so on.

Some email administrators are tempted to block all messages containing links to free hosting sites — and for some that might be an option — but for PROs like us it’s not. There are usually plenty of legitimate messages floating around with links to free-hosted web sites so blocking all such links would definitely lead to false positives (unacceptable).

At ARM we have a wide range of defenses against these messages so we’re able to block not only on specific links but also on message structures, obfuscation techniques, and other artifacts that are always part of these messages. In addition to that our tools also allow us to predict what the next round of messages might look like so that even when they do change things up we’re often ahead of them.

No mistake about it though… it’s hard work!

It would be _MUCH_ better for everyone if folks that offer free hosting and other commonly exploited services (like URL shortening, blog hosting, and free email accounts) would do a better job keeping things secure.