Madsci

Husband, Father, Musician, Engineer, Teacher, Thinker, Pilot, Mad, Scientist, Writer, Philosopher, Poet, Entrepreneur, Busy, Leader. Looking for ways to do something good in a sustainable way... to be his best... and to help others to do the same. The universe is a question pondering itself... we are all a part of the answer.

Jun 15, 2012
 

Back in the early days of spam fighting we recognized a problem with all types of filtering. No matter what kind of filtering you are using, it is fairly trivial for an attacker to defeat your filters for a time by pretesting their messages on your system.

You can try to keep them out, but in the end, if you allow customers on your system then any one of them might be an attacker pretending to be an ordinary customer. To test a new campaign they simply send a sample to themselves and see if it makes it through. If it does then they have a winner. If it doesn’t then they need to try again. Either way they always appear to be just an ordinary customer that gets ordinary spam like anyone else.

The simplest and most effective way to solve this problem is to selectively delay the processing of some messages so that all of your filtering strategies have time to catch up to new threats. After a short delay these messages are sent through the filtering process again where they receive the benefit of any adjustments that have been made. We call this solution “Gauntlet” because it forces some messages to “run the gauntlet” before allowing them to pass.

The first step is to send your messages through your usual filtering process. You will be able to discard (or quarantine) most of these immediately. The remaining messages should be fairly clean but, most importantly, they will be a much smaller volume.
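To make the flow concrete, here is a minimal sketch of the idea in C++. Every name here (Message, scanMessage, shouldGauntlet, and so on) is a hypothetical stand-in for whatever your own filtering system provides; this illustrates the Gauntlet pattern, it is not code from any particular product:

    #include <queue>

    struct Message { /* headers, body, sender, metadata... */ };
    enum class Verdict { Clean, Spam };

    // Hypothetical hooks into an existing filtering system (stubs for illustration).
    Verdict scanMessage(const Message&)    { return Verdict::Clean; } // normal filtering pass
    bool    shouldGauntlet(const Message&) { return false; }          // selection policy (sketched later in this post)
    void    deliver(const Message&)        {}
    void    quarantine(const Message&)     {}

    // First pass: discard obvious spam, hold selected messages, deliver the rest.
    void processInbound(const Message& m, std::queue<Message>& gauntletQueue) {
        if (scanMessage(m) == Verdict::Spam) { quarantine(m); return; }
        if (shouldGauntlet(m)) { gauntletQueue.push(m); return; }     // run the gauntlet later
        deliver(m);
    }

    // Second pass, run after a short delay (say 10-30 minutes) so that
    // freshly updated rules get a chance to catch what the first pass missed.
    void processGauntlet(std::queue<Message>& gauntletQueue) {
        while (!gauntletQueue.empty()) {
            Message m = gauntletQueue.front();
            gauntletQueue.pop();
            if (scanMessage(m) == Verdict::Spam) quarantine(m);
            else deliver(m);
        }
    }

In practice the "queue" would be a deferred delivery mechanism in your MTA or filtering appliance rather than an in-memory container, but the shape of the flow is the same.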

The next step is deciding which messages to delay. This is controversial because customer expectations are often unreasonable. Even though email was never designed to be an instantaneous form of communication, it tends to be nearly so most of the time; and in any case most email users expect to receive messages within seconds of when they are sent.

The reality is that many messages take some time to be delivered and that there is usually very little control or knowledge on the part of the recipient regarding when messages are sent. As a result there is a fair amount of ambiguity over the apparent travel time of any given message. It is also true that while most customers will violently agree that email should never be delayed, under most circumstances a delay will be unnoticed and inconsequential. In fact one of the most powerful features of email is that the recipient can control when they receive and respond to email – unlike phone calls, instant messages, or friends dropping in unannounced.

This flexibility between perceived and actual delivery times gives us an opportunity to defeat pretested spam – particularly if we can be selective about which messages we delay.

The more sophisticated the selection process the less impact delayed processing will have on end users and support staff. Often the benefits from implementing Gauntlet far outweigh any discomfort that might occur.

For example, Message Sniffer generally responds to new threats within seconds of their arrival in spam traps and typically generates new rules within minutes of new outbreaks (if not immediately). Many of those messages, usually sent in dramatically large bursts, are likely to be reaching some mailboxes before they arrive in spam traps. If some messages can be delayed by as little as 10, 20, or 30 minutes then the vast majority of those messages will never reach a customer’s mailbox.

If a selective 30-minute delay can prevent virtually all of a new phishing or malware campaign from reaching its target then the benefits can be huge. If a legitimate bank notification is delayed by 30 minutes the delay is likely to go completely unnoticed. It is worth noting that many email users regularly go hours or even days without checking their mail!

On the other hand there are also email users (myself included) that are likely to “live in” their email – frequently responding to messages mere minutes or seconds after they arrive. For these users most of all, the sophistication of the selection process matters.

What should be selected for delayed processing?

More advanced systems might use sophisticated algorithms (even AI) to select messages in or out of delayed processing. A system like that might be tuned to delay anything “new” and anything common in recently blocked messages.

Less sophisticated systems might use lists of keywords and phrases to delay messages that look like high-value targets. Other simple selection criteria might include messages from ISPs and online services that are frequently abused or messages containing certain kinds of attachments. Some systems might choose to delay virtually all messages by some small amount while selecting others for longer delays.

A more important question is probably which messages should not be delayed. It is probably best to expedite messages that are responses to recently sent email, messages from known good senders such as those from sources with solid IP reputations, and those that have been white-listed by administrators or customers.
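Continuing the earlier sketch, the hypothetical shouldGauntlet() stub might combine these expedite and delay criteria. Every helper below is a placeholder for checks your own system already performs (reputation lookups, whitelists, content rules), not part of any real API:

    // Illustrative selection policy; all helpers are hypothetical stubs.
    struct Message;  // same placeholder type as in the earlier sketch

    bool isReplyToRecentOutbound(const Message&)     { return false; }  // response to mail we recently sent
    bool hasGoodSenderReputation(const Message&)     { return false; }  // solid IP/domain reputation
    bool isWhitelisted(const Message&)               { return false; }  // admin or customer white-list
    bool looksLikeHighValueTarget(const Message&)    { return false; }  // bank/phishing-style keywords
    bool fromFrequentlyAbusedService(const Message&) { return false; }  // frequently abused ISPs and online services
    bool hasRiskyAttachment(const Message&)          { return false; }  // certain kinds of attachments

    bool shouldGauntlet(const Message& m) {
        // Expedite anything we already have good reasons to trust.
        if (isReplyToRecentOutbound(m) || hasGoodSenderReputation(m) || isWhitelisted(m))
            return false;
        // Delay the messages most likely to be part of a pretested campaign.
        return looksLikeHighValueTarget(m)
            || fromFrequentlyAbusedService(m)
            || hasRiskyAttachment(m);
    }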

In order to remove the mystery and offload some of the support work, the best solutions can put some of the controls in the hands of their customers. Customers who feel it is vital that none of their messages are delayed might opt out. Others who prefer to minimize their exposure to threats might elect to impose longer delays and to delay every message regardless of its source and content.

One customer who implemented Gauntlet back in the early days had an interesting spin on how they presented it to their users. Instead of telling them they were delaying some messages they told the customer that the delayed messages were initially quarantined as suspicious but later released automatically by sophisticated algorithms. This allowed them to implement relatively moderate delays without burdening their users with any additional complexity.

However it is implemented, delayed message processing is a powerful tool against pretested spam. Recent, dramatic growth in the volume and sophistication of organized attacks by cyber criminals is a clear sign that the time has come to implement sophisticated defenses like Gauntlet.

 

Mar 31, 2012
 

Artist: Pete McNeil and Julia Kasdorf
Album: Impromptu
Review By: Dan MacIntosh
Rating: 3.5 Stars (out of 5)

After getting an earful of Julia Kasdorf on Impromptu, it’s really difficult to believe this singer/songwriter/musician actually got her start by playing bass in San Francisco punk bands, such as Angry Samoans.  However, anyone that has followed the punk rock scene long enough is well aware of the way many of these players use punk music as career kickoffs, before moving on to their true musical loves. In Kasdorf’s case, singer/songwriter music — with just a touch of the blues – is the style that most sincerely represents her artistic heart.

Impromptu is actually a two-sided coin, if you will, as Pete McNeil (who also calls himself MadScientist) also contributes songs to this double-artist collection. Whereas Kasdorf goes for the mostly introspective approach to songwriting, McNeil is more apt to rev it up, as he does during the roadhouse blues of "Treat Me Like A Road." However, "Kitties" is one of the coolest tracks on this collection. It has a distinctive psychedelic – you might say druggy – feel to it. Instead of rollicking blues guitar, the six-string part is moody and spooky, placed over an inventive, wandering bass line. McNeil's "Doldrums" and "Baby Please" are also built upon basic blues structures, much like "Treat Me Like A Road."

Kasdorf’s songs are consistently lyrically intriguing. For instance, “Motel” opens with her announcing, “I’m gonna hide in a motel.” This could be describing reactionary behavior of typical musicians. However, it could represent something a lot darker, as in someone retreating to such anonymity in order to indulge in destructive drug-taking behavior. Nevertheless, when Kasdorf sings a line about burning old love letters, it suggests something more akin to post-relationship breakup activity.

With "This Heart," Kasdorf expresses a much more empathetic perspective. It's sung almost as a prayer, and speaks to the artist's care for those less fortunate, including the underprivileged in Romania and Brazil. The track also features a bit of surf guitar in its upbeat melody, which is enjoyable. The chorus states, "You gave me this heart." It reveals that Kasdorf might not be quite so concerned about people half a world away, had Jesus not first given her a loving heart.

One other fine song is simply titled “Sunday.” It begins with rain sound effects before Kasdorf begins singing about the rain. When Kasdorf vocalizes on it, it’s with a world-weary, slightly scratchy voice. “I wish it was Sunday again,” she sings longingly. This recording is beautifully augmented by Carla Deniz’s supportive viola.

Although Kasdorf tends to sing with relatively stripped-down arrangements, she sure sounds boisterous and right at home during “Lament,” which also features a bevy of backing vocals and an orchestrated arrangement. This track is one place where the listener might secretly wish it also featured a string section. In other words, a little more could have been even better.

McNeil has said Impromptu is the first compilation for the ALT-230 label. If what comes after this album is even close to the quality it contains, the label's future is really something to get excited about. These songs may not be as commercial as what's getting airplay these days, but that's probably not a bad thing. Sure, it's interesting to hear how electronic music is playing in such close quarters with rap and R&B, but after a while all of that stuff just starts to sound the same.

Best of all, Impromptu is filled with fantastic songs. The arrangements are slightly on the retro side, but they are retroactive to a time when music just seemed to make a whole lot more sense. Instead of creating music for feet (for dancing), McNeil and Kasdorf compose songs for the heart and mind. After all, it doesn't take a genius to create beats, no matter how much rap artists might brag about this particular skill. A title like Impromptu suggests something improvised and made up on the spot. However, this is well-planned and thoughtfully created music. You don't have to love it, but you really oughta love it.

The Impromptu CD is available at CDBaby, Amazon, iTunes, and everywhere you find great music!

Mar 20, 2012
 

Artist: Julia Kasdorf and Pete McNeil
Album: Impromptu
Reviewed by Matthew Warnock
Rating: 4 Stars (out of 5)

Collaboration is the spark that has ignited some of the brightest musical fires in songwriting history.  When artists come together on a project featuring a core duo or group and a number of guest artists, there is something that can happen that makes these moments special, especially when the stars align and everything winds up in the right place at the right time.  Songwriters and performers Julia Kasdorf and Pete McNeil have recently come together on just such a record, which features the duo on each track alongside various other accomplished artists.  The end result, Impromptu, is an engaging and enjoyable record that possesses a sense of cohesiveness deriving from the duo’s contribution, but that moves in different and exciting directions as the different guest musicians come and go throughout the album.

Though most of the album is a collaborative effort between McNeil, Kasdorf and guest artists, there are a couple of tracks that feature just the duo, including "The Minute I'm Gone," though one might not realize this unless the liner notes were consulted. Kasdorf, a multi-talented multi-instrumentalist, contributes the lyrics and music and performs lead and background vocals, guitar, and bass, while McNeil brings his talents to the drum work on the track. Not only is the song a sultry, blues-rock number that grooves and grinds its way to being one of the most interesting songs on the album, but the duo do a seamless job of overdubbing each part to make it sound like a band playing live in the studio, rather than two musicians playing all the parts. The same is true for the other duo track, "Motel," though in a more laid-back and stripped-down approach. Here, the brushes played by McNeil set up Kasdorf's vocals, bass and guitar in a subtle yet effective way, allowing the vocals to float over the accompaniment while interacting at the same time. Recording in this way is not easy, especially when trying to create the atmosphere of an ensemble in the studio, but Kasdorf and McNeil pull it off in a way that is both creative and engaging, and it is one of the reasons that the album is so successful.

McNeil also steps to the forefront of several songs to take over the role of lead vocalist, including the Cream-inspired blues-rocker "Doldrums." Here, the drummer lays down a hard-driving groove that is supported by Kasdorf on rhythm guitar and bass while he digs deep into the bluesy vocal lines that define the track. Guest lead guitarist Eric Nanz contributes a memorable solo and plenty of bluesy fills to the song, bringing a wah-based tone to the track that recalls the classic sound of late-'60s blues rockers such as Eric Clapton, Jeff Beck and Jimmy Page. McNeil also takes the reins on the track "Kitties," where he sings, as well as plays drums and synth, with bassist John Wyatt filling in the bottom end. With a psychedelic vibe to it, the song stands out against the rest of the album in a good way, adding variety and diversity to the overall programming of the album while featuring the talented drummer-vocalist-pianist at the forefront of the track.

Overall, Impromptu is not only a cool concept, but an album that stands on its own musicality and songwriting regardless of the writing and recording process used to bring the project together.  All of the artists featured on the album, the core duo and guest artists alike, gel together in a way that serves the larger musical goals of the record, providing an enjoyable listening experience along the way.

The Impromptu CD is available at CDBaby, Amazon, iTunes, and everywhere you find great music!

 

Mar 16, 2012
 

When I added the cube to the studio I was originally thinking that it would be just a handy practice amp for Chaos. He was starting to take his electric guitar work seriously and needed an amp he could occasionally drag to class.

Then the day came that one of our guitar friends showed up to record a session for ALT-230 and had forgotten their amp. So, instead of letting the time slot go to waste we decided to give the little cube a try. We figured that if what we got wasn’t usable we would re-amp the work later or run it through Guitar Rig on the DAW.

We were very pleasantly surprised! The tracks were so good they survived all the way through post production. Ever since then we’ve been hooked. We’ve been using the Cube regularly now any time we want to do some relatively clean electric guitar work and we’ve been getting consistently good results.

The normal setup for this involves a RØDE NT2-A paired with a Shure SM57, both set about 30 degrees off axis, about half a meter away, close together and in phase (diaphragms abeam). Sometimes we give them a little separation from each other if we want more “space” in the stereo image. Anything from 5 to 20 cm usually works ok.

Then for good measure we’ll run the guitar through a direct box on its way in just in case we want to re-amp it later. This too has become a fairly standard procedure no matter what amp we’re using.

Usually when I’m setting up like this I will put the two mics on separate channels through the Furman monitor rig so I can hear them in the headphones separately and summed on demand. That makes it easy to move things around to fine tune mic positioning and any other tweaking that might be needed.

Today we had yet another successful session with the rugged, versatile little cube; and right after that Chaos plugged in to practice his guitar lessons. I couldn’t help but grin at being reminded how far this little practice amp had come. If you don’t have one yet you probably need it and don’t know it!

Jan 30, 2012
 

The other day I was chatting with Mayhem about number theory, algorithms, and game theory. He was very excited to be learning some interesting math tricks that help with factorization.

Along the way we eventually got to talking about simple games and graph theory. That, of course, led to Tic-Tac-Toe. But not being content to leave well enough alone we set about reinventing Tic-Tac-Toe so that we could have a 3-way game between him, Chaos, and me.

After a bit of cogitating and brainstorming we hit upon a solution that works for 3 players, preserves the game dynamics of the original Tic-Tac-Toe, and even has the same number of moves!

The game board is a fractal of equilateral triangles creating a triangular grid with 9 spaces. The tokens are the familiar X, O, and one more – the Dot.

The players pick their tokens and decide on an order of play. Traditionally, X goes first, then O, then Dot. Just like old-school Tic-Tac-Toe, the new version will usually end in a tie unless somebody makes a mistake. Unlike the old game, Tic-Tac-Toe-3 requires a little bit more thinking because of the way the spaces interact and because there are two opponents to predict instead of just one.
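For the programmers in the audience, here is a tiny sketch of the bookkeeping in C++. The exact adjacency of the nine triangular cells depends on how you draw the board (the diagrams aren't reproduced here), so the winning triples are passed in rather than hard-coded:

    #include <array>
    #include <vector>

    // Tokens for the three players, in order of play.
    enum class Token { Empty, X, O, Dot };

    // The 9 cells of the triangular grid; which triples count as "3 adjacent cells"
    // depends on your drawing of the board, so the caller supplies them.
    using Board = std::array<Token, 9>;
    using WinningTriples = std::vector<std::array<int, 3>>;

    Token nextPlayer(Token current) {
        switch (current) {
            case Token::X: return Token::O;
            case Token::O: return Token::Dot;
            default:       return Token::X;   // Dot (or a fresh game) hands play back to X
        }
    }

    // A player wins by claiming 3 adjacent cells.
    bool hasWon(const Board& b, Token player, const WinningTriples& triples) {
        for (const auto& t : triples)
            if (b[t[0]] == player && b[t[1]] == player && b[t[2]] == player)
                return true;
        return false;
    }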

Here is how a game might go:

O makes a mistake, Dot wins!

At the end of the game shown, O makes a mistake and allows Dot to win by claiming 3 adjacent cells – just like old-school Tic-Tac-Toe. Hope you have as much fun with this as we did!

Sep 14, 2011
 

According to a recent Census Bureau report nearly 1 in 6 US citizens are now officially poor.

This report prompted a number of posts when it arrived in my Facebook account ranging from fear and depression to anger over CEO salaries which have soared, and outsourcing practices which have contributed to unemployment, a loss of industrial capacity, and a loss of our ability to innovate.

There seems to be a lot of blame to go around, a lot of hand-waving and solution peddling, and plenty of political posturing and gamesmanship. But everything I’ve heard so far seems to miss one key point that I believe sits at the root of all of this.

We chose this and we can fix it!

When you look at the situation systemically you quickly realize that all of the extreme conditions we are experiencing are driven by a few simple factors that are amplified by the social and economic systems we have in place.

The economic forces at work in our country have selected for a range of business practices that are unhealthy and unsustainable. Nonetheless, we have made choices consistently and en masse that drive these economic forces and their consequences.

For example, think about how we measure profitability. Generally we do the math and subtract costs from revenues. That seems simple enough, logical, and in fact necessary for survival. However, just as the three (perfect) laws of robotics ultimately lead to a revolution that overturns mankind (see I, Robot), this simplistic view of profitability leads to the economic forces that are driving our poverty, unemployment, and even the corrupting influence of money on our political system. Indeed these forces are selecting for business practices and personal habits that reinforce these trends, so that we find ourselves in a vicious downward spiral.

Here’s how it works: at the consumer level we look for the lowest price. Therefore manufacturers must lower costs in order to maintain profits. Reducing costs means finding cheaper suppliers and finding ways to reduce labor costs by reducing the work force, reducing wages, and outsourcing.

Then it goes around again. Since fewer people are working and those who are employed are earning less there is even greater downward pressure on prices. Eventually so much pressure that other factors, such as quality, reliability, brand character, and sustainability are driven out of the system because the only possible choice for many becomes the product with the lowest price.

This process continues until the quality of the product and, more importantly, the quality of the product’s source is no longer important. With the majority of the available market focused on the lowest price (what they can afford) and virtually all innovation targeted on reducing costs, the availability of competing products of higher quality shrinks dramatically, and as a result of short supply the price quickly moves out of reach of the majority of the marketplace. This is also true for businesses sourcing the raw materials, products, and services that they use to do their work. As time goes on it becomes true of the labor market also — since there are very few high value jobs with reasonable pay there are also very few high quality workers with skills, and very little incentive to pursue training.

Remarkably, at a time when information technology and connectivity are nearly universal, the cost of education has risen dramatically and the potential benefit of education has fallen just as much — there simply are not enough high value jobs to make the effort worthwhile. Precisely the opposite should be true. Education should be nearly free and universal and highly skilled workers should be in high demand!

The economic forces set up by our simplistic view of profitability lead directly to the wealth disparity we see today where the vast majority have almost no wealth and are served by a large supply of cheap low-quality products while a very small minority lives in a very different world with disproportionately large wealth, power, and access to a very small quantity of high quality products and services.

Having reached this condition additional forces have taken hold to strengthen these extremes. Consider that with the vast majority of consumers barely able to purchase the low quality products that are currently available it is virtually impossible for anyone to introduce improved products and services. One reason is that such a product would likely require a higher price and would be difficult to sell in the current marketplace. Another is that any product that is successful is quickly dismantled by the extremely powerful incumbent suppliers who, seeing a threat to their dominance, will either defeat the new product by killing it, or absorb it by purchasing it outright.

An unfortunate side effect of this environment is that most business plans written today by start-ups go something like: 1. Invent something interesting, 2. attract a lot of attention, 3. sell the company to somebody else for a big bag of money.

These plans are all about the exit strategy. There is virtually no interest in building anything that has lasting value and merely suggesting such a thing will brand you as incompetent, naive, or both. Sadly, in case you missed it, this also leads to a kind of standard for evaluating sound business decisions. The prevailing belief is that sound business decisions are short-term and that long-term thinking is both too risky and too costly to be of any value.

Our purchasing practices aren’t the only driver of course. Another important driver is finance and specifically the stock market. These same pure-profit forces drive smaller vendors out of the marketplace because they lack the access to capital and economies of scale required to compete with larger vendors. As a result smaller vendors are killed off or gobbled up by larger companies until there are very few of them (and very few choices for consumers). In addition, the larger companies intentionally distort the market and legal environments to create barriers to entry that prevent start-ups from thriving.

All of these large, publicly traded companies are financed by leveraging their stock prices. Those stock prices are driven again by our choices – indirectly. Most of us don’t really make any direct decisions about stock prices. Instead we rely on largely automatic systems that manage our investment portfolios to provide the largest growth (profit again). So, if one company’s growth looks more likely than another these automated systems sell the undesirable company and buy the more desirable company in order to maximize our gains. The stock price of the company being sold drops and the stock price of the company being purchased rises. Since these companies are financed largely by their ability to borrow money against the value of their stock, these swings in stock price have a profound effect on the amount of money available to those companies and to their ability to survive.

In the face of these forces even the best company manned by the best people with the best of intentions is faced with a terrible choice. Do something bad, or die. Maybe it’s just a little bad at first. Maybe a little worse the next time around. But the forces are relentless and so eventually bad goes to worse. Faced with a globally connected marketplace any companies that refuse to make these bad choices are eventually killed off. It is usually easier and less costly to do the wrong thing than it is to do the right thing and there is always somebody somewhere willing to do the wrong thing. So the situation quickly degrades into a race to the bottom.

In this environment, persons of character who refuse to go along with a bad choice will be replaced with someone who will go along. Either they will become so uncomfortable with their job that they must leave for their own safety and sanity, or they will be forcibly removed by the other stakeholders in the company. This reality is strongest in publicly traded companies where the owners of the company are largely anonymous and intentionally detached from the decisions that are made day-to-day.

The number of shares owned determines the voting strength of a stockholder. If most of the stock of your publicly traded company is owned by investment managers who care only about the growth of your stock price then they will simply vote you off the board if your decisions are counter to that goal. If you fight that action they will hire bigger lawyers than yours and take you out that way. For them too, this is a matter of survival because if they don’t “play the game” that way then their customers (you and I with retirement accounts etc) will move our money elsewhere and put them out of a job.

These situations frequently tread into murky areas of morality due to the scale of the consequences. One might be led to rationalize: on the one hand the thing we’re about to do is wrong; on the other hand, if we don’t do it then thousands of people will lose their jobs, so which is really the greater good? Discomfort with a questionable, but hidden, moral decision — or the reality of having to fire thousands of workers? Then, of course, after having lived and worked in an environment of questionable ethics for an extended period many become blind to the ethics of their decisions altogether. That’s where the phrase “It’s just business, nothing personal…” comes from.

Eventually these large companies are pressured into making choices that are so bad they can’t be hidden, or are so bad that they are illegal! So, in order to survive they must put pressure on our political systems to change the laws so that they can legally make these bad choices, or at least so they can get away with making them.

These forces then drive the same play-or-die scenarios into our political system. If you are in politics to make a difference you will quickly discover that the only way to survive is to pander to special interests. If you don’t then they will destroy you in favor of politicians who will do what these large corporations need in order to survive.

It seems evil, immoral, and just plain wrong, but it’s really just math. There is nothing emotional, supernatural or mysterious at work here. In much the same way sexual pressures drive evolution to select for beautiful and otherwise useless plumage on the peacock, our myopic view of profitability drives economic forces to select for the worst kinds of exploitation, corruption, and poverty.

So what can we do about it?

It seems simplistic but we all have the power to radically change these forces. Even better, if we do that then the tremendous leverage that is built into this system will amplify these changes and drive the system toward the opposite extreme. Imagine a positive spiral instead of a negative one.

There are two key factors that we can change about our choices that will reverse the direction of these forces so that the system drives toward prosperity, resilience, and growth.

1. Seek value before price. By redefining profitability as the generation of value we can fundamentally change the decisions that are required for a company to survive, compete, and thrive. Companies seeking to add value retain and develop skilled workers, demand high quality sources, and require integrity in their decision making. They also value long-term planning since the best way to capitalize on adding value is to take advantage of the opportunities that arise from that added value in the future.

2. Seek transparency. In order for bad decisions to stand in the face of a marketplace that demands value above price there must be a place to hide those decisions. Transparency removes those hiding places and places a high premium on brand development. As a result it becomes possible to convert difficult decisions into positive events for the decision makers in the company, and ensures that they will be held accountable for the choices they make.

I blasted through those pretty quickly because I wanted to make the points before getting distracted by details. So they might seem a bit theoretical but they are not. For each of us there are a handful of minor adjustments we can make that represent each of these two points.

And, if you’re thinking that a few of us can’t make any significant change then think again. The bipolar system we have now is so tightly stretched at the bottom that a tiny fraction of the market can have a profound impact on the system. Profit margins are incredibly thin for most products. So thin that a persistent 10% reduction in volume for any particular vendor would require significant action on their part, and anything approaching 1% would definitely get their attention. If you couple that with the fact that the vast majority of the population belongs to this lower segment of the market then you can see how much leverage we actually do have to change things.

Consider, for example, that the persistence of a handful of organic farmers and the demand generated by their customers has caught the attention of companies as large as WalMart. They are now giving significant shelf space to organics and are actively developing that market.

Putting Value Before Price

While we’re on the subject, the Food, Inc. movie site has a list of things you can do to put value before price when you eat. These are good concrete examples of things we can do – and they can be generalized to other industries.

In general this boils down to a few simple concepts:

  • Look for opportunities to choose higher value products over lower value products wherever possible. This has two important effects. The first is that you’re not buying the lower value product, and so, given the razor-thin margins on those products, the companies making them will be forced to quickly re-think their choices. The second is that you will instantly become part of a marketplace that supports higher value products. When the industry sees that higher value wins over lower value they will move in that direction. Given how tightly they are stretched on the low end we should expect that motion to be dramatic – it will quite literally be a matter of survival.
  • Look for opportunities to support higher value companies. Demonstrating brand loyalty to companies that generate value not only sends a clear message to the industry that generating value matters, but it also closes the equation on financing and investment. This ensures that the money will be there to support further investments in driving value and will also convert your brand loyalty into a tangible asset. Decision makers will pay close attention to loyal customers because the reality of that loyalty is that it can be lost.
  • Be sure to support the little guys. Large organizations are generally risk averse and slow to change. Smaller, more innovative companies are more likely to provide the kinds of high value alternatives you are looking for. They are also more likely to be sensitive to our needs. Supporting these innovative smaller companies provides for greater diversity, which is important, but it also sets examples for new business models and new business thinking that works. A successful start-up that focuses on generating value for their customers serves as a shining example for others to follow. We need more folks like these in our world and the best way to ensure they will be there is to create a demand for them — that means you and me purchasing their products and singing their praises so they can reach their potential.

A few finer points to make here… I’m not saying that anyone should ignore price. What I am saying is that price should be secondary to value in all of our choices. Pure marginal profit decisions lead to some terrible systemic conditions. We need to get that connection clear in our heads so I’ll say it again. If you fail to make value your first priority before price then you are making a choice: You are choosing poverty, unemployment, and depression.

This larger view of rational economics may not always fit into a handy formula (though it can be approximated), but making value a priority will select for the kinds of products, services, and practices that we really want. Including this extra “value factor” in your decisions will bring about industries that compete and innovate to improve our quality of life and our world in general. That kind of innovation leads to increased opportunities and a higher standard of living for everyone. A virtuous circle. Those are the kinds of industries we want to be selecting for in the board room, on Main Street, and at the ballot box.

Seeking Transparency

One of the difficult things about seeking value is the ability to gauge it in the first place. Indeed one of the tricks we have learned to borrow from nature is lying! Just as an insect only needs to look poisonous in order to keep its predators away, a product, service, or company only needs to give the appearance of high value if we can be fooled. The first goal of seeking transparency addresses that issue. Here are some general guidelines for seeking transparency.

  • Demand Integrity from the institutions you support. Be vocal about it. There should be a heavy price to pay for any kind of false dealing, false advertising, or breach of trust. Just as brand loyalty has value, so does the opposite. If a company stands to lose customers for long periods (perhaps forever) after a breach of trust they will quickly place a dollar figure on those potential losses and even the most greedy of the bunch will recognize that there is value in integrity. More precisely, the risk associated with making a shady decision will be well understood and clearly undesirable. The immediate effects of associating integrity with brand value will be monetary and will drive decisions for the sake of survival. However, over the long term this will select for decision makers who naturally have and value integrity since they will consistently have an advantage at making the correct decisions. When we get those guys running the game we will be on solid footing.
  • Get close to your decisions and make them count. One of the key factors that cause trouble in the stock market is that most of the decisions are so automatic that nobody feels any real responsibility for them. As long as the right amount of money is being made then anything goes. That’s terrible! You should know how your money is invested and you should avoid investing in companies that make choices you don’t agree with. If you think about it, your money is representing your interests in these companies. If you let it go to a company that does something bad (you decide what that is), then you are essentially endorsing that decision. Don’t! Instead, invest in companies that do good things, deal honestly, and consistently add value. That way your money is working for you in more ways than one and you have nothing to regret.
  • Seek Simplicity and develop a healthy skepticism for complexity. Certainly some things are complex by their nature but one of the best ways to innovate and add value is to simplify those things. Complexity also has a way of hiding trouble so that even with the best of intentions unintended consequences can put people in bad positions. That’s not good for the seller or the buyer, nor the fellow on the shop floor, etc. Given a choice between otherwise equal products or services that are simple or complex, choose the simpler version.
  • Communicate about your choices and about what you learn. These days we have unprecedented abilities to communicate and share information. Wherever you have the opportunity to let a company know why you chose them, or to let the other guys know why you did not choose them, be sure to let them know. Then, be sure to let your friends know – and anyone else who will listen. The good guys really want and need your support!

    Another key point about communicating is that it gives you the power that marketing firms and politicians wish they had. Studies show that we have become so abused by marketing efforts that advertisements have begun to have a negative effect on sales! The most powerful positive market driver is now direct referrals through social media. Therefore one of the most powerful tools we have to change things for the better is to communicate with each other about our choices and to pass on the message that our choices matter. That kind of communication can cut through a lot of lies. By all means – be careful and do your research. Then, make good choices and tell everybody!

Apr 20, 2011
 

This week has seen some truly amazing spring weather around the MadLab including everything from tornado threats and sustained high winds to flash flooding and dense fog.

April showers, as the saying goes, will bring May flowers – so we don’t mind too much as long as the power stays on and the trees don’t fall on the roof!

In cyberspace things are also picking up it seems. For about the last three weeks we’ve seen declining severity and frequency of spam storms. However this week has been different.

Beginning about 3 days ago we’ve seen a surge in new spam storms and in particular a dramatic increase in the use of hacked web sites and URL shortener abuse.

Previous 30 days of spam storms.

After 3 weeks of declining spam storms, a new surge starts this week...

There is also another notable change in the data. For several years now there has been a pretty solid 24 hour cyclical pattern to spam storms. This week we’re seeing a much more chaotic pattern. This and other anecdotal evidence seems to suggest that the new spams are being generated more automatically and at lower levels across wider bot nets.

We used to see distinct waves of modifications and responses to new filtering patterns. Now we are seeing a much more chaotic and continuous flow of new spam storms as current campaigns are continuously modified to defeat filtering systems.

Chaotic spam storm arrival rates over the past 48 hours

There’s no telling if these trends will continue, nor for how long, but they do seem to suggest that new strategies and technologies are coming into use in the blackhatzes’ camps. No doubt this is part of the response to the recent events in the anti-spam world.

Microsoft takes down Rustock spam botnet

DOJ gets court permission to attack botnet

In response to the blackhatzes changes my anti-spam team and I have developed several new protocols and modified several of our automated friends (rule-bots) to take advantage of new artifacts in the data. The result has been a dramatic increase in the creation rate of new heuristics, reduced response times, and improved preemptive captures.

Rule Activity Display shows higher rule rates and smoother hit densities

With these changes, changes in blackhatz tactics, and new sniffer engine updates coming along I’m going to be very busy watching the blinking lights to keep track of the weather outside the MadLab and in cyberspace.

Apr 7, 2011
 

Here’s a new term: quepico, pronounced “kay-peek-oh”

Yesterday Message Sniffer had a quepico moment when Brian (The Fall Guy) of the sortmonsters coded rule number 4,000,000 into the brains of ARM’s flagship anti-spam software.

You read that right. Since I built the first version out of spare robot parts just over a decade ago more than 4.00e+6 heuristics have been pumped into it and countless trillions (yes trillions with a “t”) of spam and malware have been filtered with it.

I had another quepico moment yesterday when I realized that a task I once did by myself only a couple of hours per day had now expanded into a vast full-time operation not only for the folks in my specific chunk of the world, but also for many other organizations around the globe.

The view from SortMonsters Corner

Just as that 4 millionth rule represents a single point of consciousness in the mind of Sniffy, these realizations are represented somewhere in my brain as clusters of neurons that fire in concert whenever I recall those quepico moments.

Interestingly some of these same neurons will fire when I think of something similar, and those firings will lead to more, and more until it all comes back to me or I think of something new that’s somehow related. This is why my wetware is so much better than today’s hardware at recognizing pictures, sounds, phrases, ideas, and concepts when the available data is sketchy or even heavily obscured like much of the spam and malware we quarantine to protect us from the blackhatzes.

Blackhatzes: noun, plural. People and organizations that create malware and spam or otherwise engineer ways to harm us and exploit or compromise our systems. These are the bad guys that Sniffy fights in cyberspace.

Sniffy on guard

At the moment, most of Sniffy’s synthetic intuition and intelligence is driven by cellular automata and machine learning systems based on statistics, competitive and cooperative behaviors, adaptive signal conversion schemes, and pattern recognition.

All of that makes Sniffy pretty good, but there is something new on the horizon.

Quepico networks.

For several years now I’ve been experimenting with a new kind of self organizing learning system that appears to be able to identify the underlying patterns and relationships in a stream of data without guidance!

These networks are based on layers of elements called quepicos that learn to recognize associations between messages that they receive. These are organized into layers of networks that identify successively higher abstractions of the data presented to the lower layers.

Noisy Decision in a Quepico Network

The interesting thing about the way these work is that unlike conventional processing elements that receive data on one end and send data from the other, quepicos send and receive messages on both sides simultaneously. As a result they are able to query each other about the patterns they think they see in addition to responding to the patterns they are sure they see.

When a quepico network is learning or identifying patterns in a stream of data, signals flow in both directions – both up the network to inform higher abstractions and down the network to query about features in the data.

In very much the same way we believe the brain works, these networks achieve very high efficiencies by filtering their inputs based on clues discovered in the data they recognize. The result is that processing power is focused on the elements that are most likely to successfully recognize important features in the data stream.

Put another way, these systems automatically learn to focus their attention on the information that matters and ignore noise and missing data.
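Purely as an illustration of that two-way message flow (and not a description of Sniffy's actual internals, which aren't shown here), a single element in such a network might be sketched like this:

    #include <functional>

    // Illustrative sketch only: the learning and statistics that make real
    // quepicos interesting are omitted; this shows only the two-way messaging.
    struct QuepicoMessage {
        int    featureId;   // which pattern or feature the message is about
        double confidence;  // how sure the sender is (queries carry low confidence)
    };

    class Quepico {
    public:
        // Upward link: report patterns we are fairly sure we see to the layer above.
        std::function<void(const QuepicoMessage&)> sendUp;
        // Downward link: ask the layer below to look harder for features we suspect.
        std::function<void(const QuepicoMessage&)> sendDown;

        void receiveFromBelow(const QuepicoMessage& m) {
            if (m.confidence > 0.8) {
                if (sendUp) sendUp(m);        // confident: inform the higher abstraction
            } else {
                if (sendDown) sendDown(m);    // unsure: query below for more evidence
            }
        }

        void receiveFromAbove(const QuepicoMessage& query) {
            if (sendDown) sendDown(query);    // a higher layer suspects a pattern; focus attention on it
        }
    };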

So what’s in a name? Why call these things quepicos? As it turns out – that’s part of my own collection of quepico moments.

One day while I was drawing some diagrams of these networks for an experiment, my youngest son Ian asked me what I was doing. As I started to explain I realized I needed a name for them. I enlisted his help and we came up with the idea of calling them thinktons. We were both very excited – a quepico moment.

While looking around to see if this name would cause confusion I discovered (thanks Google) that there were several uses of the term thinkton and even worse that the domain thinkton.com was already registered (there isn’t a site there, yet). A disappointing, but definite quepico moment.

So, yesterday, while roaming sortmonster’s corner and pondering how far we’ve come and all of the little moments and events along the way (trillions of little = pico, que = questions, whats, etc) I had another quepico moment. The word quepico was born.

Google’s translator, a favorite tool around the mad lab and sortmonster’s corner translates “que pico” as “that peek.” That fits pretty good since quepicos tend to land on statistical peaks in the data. So quepico it is — I’ll have to go tell Ian!

Feb 21, 2011
 

In Robert C. Martin’s book Clean Code he writes:

“Comments are not like Schindler’s List. They are not “pure good.” Indeed, comments are, at best, a necessary evil. If our programming languages were expressive enough, or if we had the talent to subtly wield those languages to express our intent, we would not need comments very much — perhaps not at all.”

When I first read that, and the text that followed I was not happy. I had been teaching for a long time that prolific comments were essential and generally a good thing. The old paradigm held that describing the complete functionality of your code in the right margin was a powerful tool for code quality – and the paradigm worked! I have forgotten enough stories where that style had saved the day that I could fill a book with them. The idea of writing code with as few comments as possible seemed pure madness!

However, for some time now I have been converted and have been teaching a new attitude toward comments and a newer coding style in general. This past weekend I had opportunity to revisit this again and compare what I used to know with what I know now.

While repairing a subtle bug in Message Sniffer (our anti-spam software) I re-factored a function that helps identify where message header directives should be activated based on the actual headers of a message.

https://svn.microneil.com/websvn/diff.php?repname=SNFMulti&path=%2Ftrunk%2Fsnf_HeaderFinder.cpp&rev=34

One of the most obvious differences between the two versions of this code is that the new one has almost no comments compared to the previous version! As it turns out (and as suggested by Robert Martin) those comments are not necessary once the code is improved. Here are some of the key things that were done:

  • Logical expressions were broken into pieces and assigned to well named boolean variables.
  • The if/else ladder was replaced with a switch/case.
  • A block of code designed to extract an IP address from a message header was encapsulated into a function of its own.

The Logical Expressions:

Re-factoring the logical expressions was helpful in many ways. Consider the old code:

        if(                                                                     // Are we forcing the message source?
          HeaderDirectiveSource == P.Directive &&                               // If we matched a source directive and
          false == ScanData->FoundSourceIP() &&                                 // the source is not already set and
          ActivatedContexts.end() != ActivatedContexts.find(P.Context)          // and the source context is active then
          ) {                                                                   // we set the source from this header.

There are three decision points involved in this code. Each is described in the comments. Not too bad. However it can be better. Consider now the new code:

            case HeaderDirectiveSource: {

                bool HeaderDirectiveSourceIPNotSet = (
                  0UL == ScanData->HeaderDirectiveSourceIP()
                );

                bool SourceContextActive = (
                  ActivatedContexts.end() != ActivatedContexts.find(P.Context)
                );

                if(HeaderDirectiveSourceIPNotSet && SourceContextActive) {

The first piece of this logic is resolved by using a switch/case instead of an if/else tree. In the previous version there was a comment that said the code was too complicated for a switch/case. That comment was lying! It may have been true at one time, but once the comment had outlived its usefulness it stuck around spreading misinformation.

This is important. Part of the reason this comment outlived its usefulness is that with the old paradigm there are so many comments that we learned to ignore them most of the time. With the old paradigm we treated comments as a running narrative with each line of comment attached to a line of code as if the two were “one piece”. As a result we tended to ignore any comments that weren’t part of code we were modifying or writing. Comments can be much more powerful than that and I’ll talk about that a little later.

The next two pieces of logic involve testing conditions that are not otherwise obvious from the code. By encapsulating these in well named boolean variables we are able to achieve a number of positive effects:

  • The intention of each test is made plain.
  • During a debugging session the value of that test becomes easily visible.
  • It becomes easier to spot errors in the “arithmetic” performed for each test.
  • The matching comment is no longer required.

So you don’t miss it, there was a second bug fixed during this task because of the way the re-factoring clarified this code. The original test to see if the header directive source had already been set was actually looking at the wrong piece of data!

Finally, the if() that triggers the needed response is now perfectly clear because it says exactly what it means without any special knowledge.

At 0-dark-hundred, starting your second case of Jolt Cola (or RedBull, or your other favorite poison) we’re all a little less than our best. So, it helps if what we’re looking at is as clear as possible.

Even if you’re not pulling an all-nighter, it’s much easier if you don’t have to remember that (0UL == ScanData->HeaderDirectiveSourceIP()) really means the header IP source has not been set. Much easier if that bit of knowledge has already been spelled out – and quite a bonus that the local variable HeaderDirectiveSourceIPNotSet shows up automatically in your debugger!

Encapsulating Code:

In the previous version the code that did the heavy lifting used to live inside the test that triggered it. Consider the old code:

        if(                                                                     // Are we forcing the message source?
          HeaderDirectiveSource == P.Directive &&                               // If we matched a source directive and
          false == ScanData->FoundSourceIP() &&                                 // the source is not already set and
          ActivatedContexts.end() != ActivatedContexts.find(P.Context)          // and the source context is active then
          ) {                                                                   // we set the source from this header.
            // Extract the IP from the header.

            const string digits = "0123456789";                                 // These are valid digits.
            unsigned int IPStart =
              Header.find_first_of(digits, P.Header.length());                  // Find the first digit in the header.
            if(string::npos == IPStart) return;                                 // If we don't find it we're done.
            const string ipchars = ".0123456789";                               // These are valid IP characters.
            unsigned int IPEnd = Header.find_first_not_of(ipchars, IPStart);    // Find the end of the IP.
            if(string::npos == IPEnd) IPEnd = Header.length();                  // Correct for end of string cases.
            ScanData->HeaderDirectiveSourceIP(                                  // Extract the IP from the header and
              Header.substr(IPStart, (IPEnd - IPStart))                         // expose it to the calling scanner.
            );
            Directives |= P.Directive;                                          // Add the flags to our output.
        }

Again, not too bad. Everything is commented well and there isn’t a lot of code there. However it is much clearer the new way:

                if(HeaderDirectiveSourceIPNotSet && SourceContextActive) {
                    ScanData->HeaderDirectiveSourceIP(
                      extractIPFromSourceHeader(Header)
                    );
                    Directives |= P.Directive;                                  // Add the flags to our output.
                }

All of the heavy lifting has now been reduced to two lines of code (arguably one). By moving the meat of this operation off to extractIPFromSourceHeader() this block of code becomes very clear and very simple. If(this is going on) then { do this }. The mechanics of { do this } are part of a different and more focused discussion.

This is helpful not only because it clarifies the code, but also because if you are going to refine and test that part of the code it now lives in its own world where it can be wrapped with a test function and debugged separately. Not so when it lived deep inside the code of another function.
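The body of extractIPFromSourceHeader() isn't shown in the post, but based on the inline code it replaced it presumably looks something like the sketch below (a reconstruction, not the actual SNFMulti source; the original began its search just past the matched header name, a detail glossed over here):

    #include <string>

    // Reconstructed sketch of the encapsulated helper, based on the old inline code.
    std::string extractIPFromSourceHeader(const std::string& Header) {
        const std::string digits = "0123456789";                    // valid digits
        size_t IPStart = Header.find_first_of(digits);              // find the first digit
        if (std::string::npos == IPStart) return "";                // no digits, nothing to extract

        const std::string ipchars = ".0123456789";                  // valid IP characters
        size_t IPEnd = Header.find_first_not_of(ipchars, IPStart);  // find the end of the IP
        if (std::string::npos == IPEnd) IPEnd = Header.length();    // correct for end-of-string cases

        return Header.substr(IPStart, IPEnd - IPStart);             // just the IP portion
    }

Once the extraction lives in a function like this it is trivial to wrap in a small test harness that feeds it a handful of header strings and checks the extracted IPs, which is exactly the kind of separate testing the old inline version made awkward.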

Powerful Comments:

In the old paradigm comments were a good thing, but they were weakened by overuse! I hate to admit being wrong in the past, but proud to admit I am constantly learning and improving.

When comments are treated like a narrative describing the operation of the code there are many benefits, but there are also problems. The two biggest problems with narrating source code like this are that we learn to ignore comments that aren’t attached to code we’re working on and as a result of ignoring comments we tend to leave some behind to lie to us at a later date.

The new paradigm has most of the benefits of the narrative method implemented in better encapsulation and naming practices. This tight binding of intent and code virtually eliminates the biggest problems associated with comments. In addition the new method gives comments a tremendous boost in their power.

Since there are fewer comments we tend to pay attention to them when we see them. They are bright markers for bits of code that could probably be improved (if they are the narrative type); or they are important messages about stuff we need to know. With so few of them and with each of them conveying something valuable we dare not ignore them, and that’s a good thing.

My strong initial reaction to Robert’s treatment of comments was purely emotional — “Don’t take my comments away! I like comments! They are good things!”

I now see that although the sound byte seems to read “Eliminate All Comments!”, the reality is more subtle and powerful, and even friendly to my beloved comments. Using them sparingly and in just the right places makes them more powerful and more essential. I feel good about that. I know that for the past couple of years my new coding style has produced some of the best code I’ve ever written. More reliable, efficient, supportable, and more powerful code.

Summary:

If I really wanted to take this to an extreme I could push more encapsulation into this code and remove some redundancy. For example, multiple instances of “Directives |= P.Directive;” stand out as redundant, and why not completely encapsulate things like ScanData->drillPastOrdinal(P.Ordinal) and so forth into well named explicit functions? Why not convert some of the tests into object methods?

Well, I may do some of those things on a different day. For today the code is much better than it was, it works, it’s clear, and it’s efficient. Since my task was to hunt down and kill a bug I’ll save any additional style improvements for another time. Perhaps I’ll turn that into a teaching exercise for some up-and-coming code dweller in the future!

Here are a few good lessons learned from this experience:

  • It is a good idea to re-factor old code when resolving bugs as long as you don’t overdo it. Applying what you have learned since the last revision is likely to help you find bugs you don’t know exist. Also, incremental improvements like this tend to cascade into improvements on a larger scale ultimately improving code quality on many vectors.
  • If you write code you should read Clean Code – even if you’re not writing Java! There are lots of good ideas in there. In general we should always be looking for new ways to improve what we do. Try things out and keep the parts that work.
  • Don’t cast out crazy ideas without first looking them over and trying them out. Often the best ideas are crazy ones. “You’re entirely bonkers. But I’ll tell you a secret. All the best people are.” – Alice
  • Good coding style does matter, if you do it right.
Nov 212010
 

Often church sound folk are looking for the cheapest possible solution for recording their services. In this case, they want to use a low-end voice recorder and record directly from the mixing board.

There are a number of challenges with this. For one, the voice recorder has no Line input – it only has a Mic input. Another challenge is the AGC (Automatic Gain Control) on the recorder, which has a tendency to crank the gain way up when nobody is speaking and then crank it way down when they do speak.

On the first day they presented this “challenge” they simply walked up (at the last minute) and said: “Hey, plug this into the board. The guys at Radio Shack said this is the right cable for it…”

The “right cable” in this case was a typical VCR A/V cable with RCA connectors on both ends. On one end there was a dongle to go from the RCA connectors to the 1/8th inch stereo plug. The video part of the cable was not used. The idea was to connect the audio RCA connectors to the tape-out on the mixer and plug the 1/8th inch end of things into the Mic input on the voice recorder.

This by itself was not going to work because the line level output from the mixer would completely overwhelm the voice recorder’s mic input, but being unwilling to just give up, I found a pair of RCA-1/4 inch adapters and plugged the RCA end of the cable into a pair of SUB channels on the mixer (in this case 3 & 4). Then I used the sub channel faders to drop the line signal down to something that wouldn’t overwhelm the voice recorder. After a minute or two of experimenting (all the time I had really) we settled on a setting of about -50 dB. That’s just about all the way off.

This worked, sort of, but there were a couple of problems with it.

For one, the signal to noise ratio was just plain awful! When the AGC in the voice recorder cranks up during quiet passages it records all of the noise from the board plus anything else it can get its hands on from the room (even past the gates and expanders!).

The second problem was that the fader control down at -50 was very touchy. Just a tiny nudge was enough to send the signal over the top and completely overload the little voice recorder again. A nudge the other way and all you could get was noise from the board!

(Side note: I want to point out that this is a relatively new Mackie board and that it does not have a noise problem! In fact the noise floor on the board is very good. However the voice recorder thinks it’s trying to pick up whispers from a quiet room and so it maxes out its gain in the process. During silent passages there is no signal to record, so all we can give to the little voice recorder is noise floor — it takes that and adds about 30 dB to it (I’m guessing) and that’s what goes onto its recording.)

While this was reportedly a _HUGE_ improvement over what they had been doing, I wasn’t happy with it at all. So, true to form, I set about fixing it.

The problem boils down to matching the pro line level output from the mixer to the consumer mic input of the voice recorder.

The line out of the mixer is expecting to see a high input impedance while providing a fairly high voltage signal. The output stage of the mixer itself has a fairly low impedance. This is common with today’s equipment — matching a low impedance (relatively high power) output to one (or more) high impedance (low power, or “bridging”) input(s). This methodology provides the ability to “plug anything into anything” without really worrying too much about it. The Hi-Z inputs are almost completely unnoticed by the Low-Z outputs so everything stays pretty well isolated and the noise floor stays nice and low… but I digress…

On the other end we have the consumer grade mic input. Most likely it’s biased a bit to provide some power for a condenser mic, and it’s probably expecting something like a 500-2500 ohm impedance. It’s also expecting a very low level signal – that’s why connecting the line level Tape-Out from the mixer directly into the Mic-Input completely overwhelmed the little voice recorder.

So, we need a high impedance on one end to take a high level line signal and a low impedance on the other end to provide a low level (looks like a mic) signal.

We need an L-Pad!

As it turns out, this is a simple thing to make. Essentially an L-Pad is a simple voltage divider network made of a couple of resistors. The input goes to the top of the network where it sees both resistors in series and a very high impedance. The output is taken from the second resistor which is relatively small and so it represents a low impedance. Along the way, the voltage drops significantly so that the output is much lower than the input.
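If you want to put numbers on that, the math is just as simple. Here is a quick sketch (my own illustration, not part of the original write-up) of how the attenuation falls out of the divider:

    // The output is taken across the bottom resistor, so the division ratio is
    // Rbottom / (Rtop + Rbottom) and the attenuation in dB is 20*log10() of it.
    #include <cstdio>
    #include <cmath>

    double lPadAttenuationDB(double Rtop, double Rbottom) {
        return 20.0 * log10(Rbottom / (Rtop + Rbottom));
    }

    int main() {
        // To knock a line level signal down by roughly 50 dB the ratio needs to
        // be about 1/316, since 20*log10(1/316) is about -50 dB. Any resistor
        // pair with roughly a 300:1 top-to-bottom ratio lands in that neighborhood.
        printf("%.1f dB\n", lPadAttenuationDB(316.0, 1.0));          // about -50.0 dB
        return 0;
    }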

Another nifty thing we get from this setup is that any low-level noise that’s generated at the mixer is also attenuated in the L-Pad… so much so that whatever is left of it is essentially “shorted out” by the low impedance end of the L-Pad. That will leave the little voice recorder with a clean signal to process. Any noise that shows up when it cranks up its AGC will be noise it makes itself.

(Side note: Consider that the noise floor on the mixer output is probably at least 60 dB down from a nominal signal (at 0 dB). Subtract another 52 dB from that and the noise floor from that source should be -112 dB! If the voice recorder manages to scrape noise out of that then most of it will come from its own preamp etc…)

We made a quick trip to Radio Shack to see what we could get.

To start with we picked up an RCA to 1/8th inch cable. The idea was to cut the cable in the middle and add the L-Pad in line. This allows us to be clear about the direction of signal flow: the mixer goes on the RCA end and the voice recorder goes on the 1/8th inch end. An L-Pad is directional! We must have the input on one side and the output on the other side. Reverse it and things get worse, not better.

After that we picked up a few resistors. A good way to make a 50 dB L-Pad is with a 33K Ω resistor for the input and a 100 Ω resistor for the output. These parts are readily available, but I opted to go a slightly different route and use a 220K Ω resistor for the input and a 560 Ω resistor for the output.

There are a couple of reasons for this:

Firstly, a 33K Ω impedance is OK, but not great as far as a “bridging” input goes, so to optimize isolation I wanted something higher.

Secondly, the voice recorder is battery powered and tiny. If it’s trying to bias a 100 Ω load to provide power it’s going to use up its battery much faster than it will if the input impedance is 560 Ω. Also, 560 Ω is very likely right on the low end of the impedance range the voice recorder’s input expects, so it should be a good match. It’s also still low enough to “short out” most of the noise that might show up on that end of things for all intents and purposes.

Ultimately I had to pick from the parts they had in the bin so my choices were limited.
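For what it’s worth, plugging both candidate pairs into the divider formula shows they land in the same neighborhood, so the choice really is about impedance rather than attenuation. A quick check (my own arithmetic, not from the original notes):

    // Both pairs give roughly the same attenuation; the real difference is the
    // bridging impedance the mixer sees (Rtop + Rbottom) and the source
    // impedance presented to the recorder's mic input (roughly Rbottom).
    #include <cstdio>
    #include <cmath>

    double lPadAttenuationDB(double Rtop, double Rbottom) {
        return 20.0 * log10(Rbottom / (Rtop + Rbottom));
    }

    int main() {
        printf("33K / 100:  %.1f dB, bridging Z about %.1fK\n",
               lPadAttenuationDB(33e3, 100.0), (33e3 + 100.0) / 1e3);    // about -50.4 dB, 33.1K
        printf("220K / 560: %.1f dB, bridging Z about %.1fK\n",
               lPadAttenuationDB(220e3, 560.0), (220e3 + 560.0) / 1e3);  // about -51.9 dB, 220.6K
        return 0;
    }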

Finally I picked up some heat-shrink tubing so that I could build all of this in-line and avoid any chunky boxes or other craziness.

Here’s how we put it all together:

1. Heat up the old soldering iron and wet the sponge. I mean old too! I’ve had this soldering iron (and sponge) for close to 30 years now! Amazing how long these things last if you take care of them. The trick seems to be – keep your tip clean. A tiny sponge & a saucer of water are all it takes.

2. Cut the cable near the RCA end after pulling it apart a bit to provide room to work. Set the RCA ends aside for now and work with the 1/8th in ends. Add some short lengths of appropriately colored heat-shrink tubing and strip a few cm of outer insulation off of each cable. These cables are CHEAP, so very carefully use a razor knife to nick the insulation. Then bend it open and work your way through it so that you don’t nick the shield braid inside. This takes a bit of finesse so don’t be upset if you have to start over once or twice to get the hang of it. (Be sure to start with enough cable length!)

3. Twist the shield braid into a stranded wire and strip about 1 cm of insulation away from the inner conductor.

4. Place a 560 Ω resistor alongside the inner conductor. Twist the inner conductor around one lead of the resistor, then twist the shield braid around the other end of the resistor. Then solder these connections in place. Use caution — the insulation in these cables is very sensitive to heat. Apply the tip of your soldering iron to the joint as far away from the cable as possible and then sweat the solder toward the cable from there. This allows you to get a good joint without melting the insulation. Do this for both leads.

5. The 560 Ω resistors are now across the output side of our L-Pad cable. Now we will add the 220K Ω series resistors. In order to do this in-line and make a strong joint we’re going to use an old “western union” technique. This is the way they used to join telegraph cables back in the day, but we’re going to adapt it to the small scale for this project. To start, cross the two resistors’ leads so that they touch about 4mm from the body of each resistor.

6. Holding the crossing point, 220K Ω resistor, and 560 Ω lead in your right hand, wind the 220K Ω lead tightly around the 560 Ω lead toward the body of the resistor and over top of the soldered connection.

7. Holding the 560 Ω resistor and cable, wind the 560 Ω resistor’s lead tightly around the 220K Ω resistor’s lead toward the body of the resistor.

8. Solder the joint being careful to avoid melting the insulation of the cable. Apply the tip of your soldering iron to the part of the joint that is farthest from the inner conductor and sweat the solder through the joint.

9. Clip off the excess resistor leads, then slide the heat-shrink tubing over the assembly toward the end.

10. Slide the inner tubing back over the assembly until the entire assembly is covered. The tubing should just cover 1-2 mm of the outer jacket of the cable and should just about cover the resistors. The resistor lead that is connected to the shield braid is a ground lead. Bend it at a right angle from the cable so that it makes a physical stop for the heat-shrink tubing to rest against. This will hold it in place while you shrink the tubing.

11. Grab your hair drier (or heat gun if you have one) and shrink the tubing. You should end up with a nice tight fit.

12. Grab the RCA end of the cable and lay it against the finished assembly. Red for red, and white for white. You will be stripping away the outer jacket approximately 1 cm out from the end of the heat-shrink tubing. This will give you a good amount of clean wire to work with without making the assembly too long.

13. After stripping away the outer jacket from the RCA side and prepping the shield braid as we did before, strip away all but about 5mm of the insulation from the inner conductor. Then slide a length of appropriately colored heat shrink tubing over each. Get a larger diameter piece of heat-shrink tubing and slide it over the 1/8 in plug end of the cable. Be sure to pick a piece with a large enough diameter to eventually fit over both resistor assemblies and seal the entire cable. (Leave a little more room than you think you need.)

14. Cross the inner conductor of the RCA side with the resistor lead of the 1/8th in side as close to the resistor and inner conductor insulation as possible. Then wind the inner conductor around the resistor lead tightly. Finally, solder the joint in the usual way by applying the tip of your soldering iron as far from the cable as possible to avoid melting the insulation.

15. Bend the new solder joints down flat against the resistor assemblies and clip off any excess resistor lead.

16. Slide the colored heat-shrink tubing down over the new joints so that it covers part of the resistor assembly and part of the outer jacket of the RCA cable ends. Bend the shield braid leads out at right angles as we did before to hold the heat-shrink tubing in place. Then go heat them up.

17. Now we’re going to connect the shield braids and build a shield for the entire assembly. This is important because these are unbalanced cables. Normally the shield braids provide a continuous electrical shield against interference. Since we’ve stripped that away and added components we need to replace it. We’ll start by making a good connection between the existing shield braids and then we’ll build a new shield to cover the whole assembly. Strip about 20 cm of insulation away from some stranded hookup wire and connect one end of it to the shield braid on one end of the L-Pad assembly. Lay the rest along the assembly for later.

18. Connect the remaining shield braids to the bare hookup wire by winding them tightly. Keep the connections as neat as possible and laid flat across the resistor assembly.

19. Solder the shield connections in place taking care not to melt the insulation as before.

20. Cut a strip of ordinary aluminum foil about half a meter long and about 4 cm wide. This will become our new shield. It will be connected to the shields in the cable by the bare hookup wire we’ve used to connect them together.

21. Starting at the end of the assembly away from the shield lead, wind a layer of foil around the assembly toward the shield lead. On each end of the assembly you want to cover about 5-10 mm of the existing cable so that the new shield overlaps the shield in the cable. When you reach that point on the end with the shield lead, fold the shield lead back over the assembly and the first layer of foil. Then, continue winding the foil around the assembly so that you make a second layer back toward where you started.

22. Continue winding the shield in this way back and forth until you run out of foil. Do this as neatly and tightly as possible so that the final assembly is compact and relatively smooth. You should end up with about 3-5 layers of foil with the shield lead between each layer. Finally, solder the shield lead to itself on each end of the shield and to the foil itself if possible.

23. Clip off any excess shield lead. Then push (DO NOT PULL) the large heat-shrink tubing over the assembly. This may take a little time and effort, especially if the heat-shrink tubing is a little narrow. It took me a few minutes of pushing and massaging, but I was able to get the final piece of heat-shrink tubing over the shield assembly. It should cover about an additional 1 cm of cable on each end. Heat it up with your hair drier (or heat gun if you have it) and you’re done!

24. If you really want to you can do a final check with an ohm meter to see that you haven’t shorted anything or pulled a connection apart. If your assembly process looked like my pictures then you should be in good shape.

RCA tip to RCA tip should measure about 441K Ω (I got 436K).

RCA sleeve to RCA sleeve should measure 0 Ω. (Shields are common.)

RCA tip to RCA sleeve (same cable) should measure about 220.5K Ω (I got 218.2K).

RCA sleeve to 1/8th in sleeve should measure 0 Ω.

RCA Red tip to 1/8th in tip should be about 220K Ω.

RCA Red tip to 1/8th in ring should be about 1K Ω more than that.
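If you want to see where those expected values come from, here is a quick sketch (my own arithmetic) that derives them from the network: a 220K Ω series resistor and a 560 Ω shunt per channel, with the shields tied together.

    #include <cstdio>

    int main() {
        const double Rseries = 220e3;                                 // series resistor, one per channel
        const double Rshunt  = 560.0;                                 // shunt resistor to the common shield

        // red tip -> 220K -> 1/8th in tip -> 560 -> shield -> 560 -> 1/8th in ring -> 220K -> white tip
        printf("RCA tip to RCA tip:           %.2fK\n", 2.0 * (Rseries + Rshunt) / 1e3);   // ~441.12K
        // one channel's series resistor plus its own shunt to the common shield
        printf("RCA tip to its own sleeve:    %.2fK\n", (Rseries + Rshunt) / 1e3);          // ~220.56K
        // straight through the series resistor to the 1/8th in tip
        printf("RCA Red tip to 1/8th in tip:  %.2fK\n", Rseries / 1e3);                     // 220.00K
        // crossing to the other channel's contact adds both shunt resistors
        printf("RCA Red tip to 1/8th in ring: %.2fK\n", (Rseries + 2.0 * Rshunt) / 1e3);    // ~221.12K
        return 0;
    }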