Madsci

Husband, Father, Musician, Engineer, Teacher, Thinker, Pilot, Mad, Scientist, Writer, Philosopher, Poet, Entrepreneur, Busy, Leader, Looking for ways to do something good in a sustainable way,... to be his best,... and to help others to do the same. The universe is a question pondering itself... we are all a part of the answer.

Sep 06, 2010

If you know me then you know that in addition to music, technology, and all of the other crazy things I do I also have an interest in cosmology and quantum mechanics. What kind of a Mad Scientist would I be without that?

Recently, while watching “Through the Wormhole” with the boys, I was struck by the apparent ratios between ordinary matter, dark matter, and dark energy in our universe.

Here is a link to provide some background: http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/

It seems that the ratio between dark matter and ordinary (observable) matter is about 5:1. That’s the 80/20 rule common in statistics and many other rules of thumb, right?

Apparently the ratio between dark energy and all matter (dark or observable) is about 7:3. Here again is a fairly common ratio found in nature. For me it brings to mind (among other things) RMS calculations from my electronics work, where Vrms = Vp/√2 ≈ 0.707 × Vp.

There are also interesting musical relationships, etc. The only thing interesting about any of those observations is that they stood out to me and nudged my intuition toward the following thought:

What if dark energy and dark matter are really artifacts of ordinary reality and quantum mechanics?

If you consider the existence of a quantum multiverse then there is the “real” part of the universe that you can directly observe (ordinary matter); there is the part of reality that you cannot observe because it is bound to collapsed probability waves representing events that did not occur in your reality but did occur in alternate realities (could this be dark matter?); and there is the part of the universe bound up in wave functions representing future events that have yet to be collapsed in all of the potential realities (could this be dark energy?).

Could dark matter represent the gravitational influence of alternate realities and could dark energy represent the universe expanding to make room for all future potentialities?

Consider causality in a quantum framework:

When two particles interact you can consider that they observed each other – thus collapsing their wave functions. Subsequent events from the perspectives of those particles and those that subsequently interact with them record the previous interactions as history.

Another way to say that is that the wave functions of the particles that interacted have collapsed to represent an event with 100% probability (or close to it) as it is observed in the past. These historical events along with the related motions (energy) that we can predict with very high degrees of certainty make up the observable universe.

The alternative realities that theoretically occurred in events we cannot observe (but were predicted by wave functions now collapsed) might be represented by dark matter in our universe.

All of the possible future events that can be reasonably predicted are represented by wave functions in the quantum field. Experiments in quantum mechanics suggest these potential realities are just as real as our observable universe; they show up, for example, in quantum entanglement effects.

Could it be that dark energy is bound up in (or at least strongly related to) the potentials represented by these wave functions?

Consider that the vast majority of particle interactions in our universe ultimately lead to a larger number of potential interactions. There is typically a one-to-many relationship between any present event and possible future events. If these potential interactions ultimately occur in a quantum multiverse then they would represent an expanded reality that is mostly hidden from view.

Consider that the nature of real systems we observe is that they tend to fall into repeating patterns of causality such as persistent objects (molecules, life, stars, planets, etc)… this tendency toward recurring order would put an upper bound on the number of realities in the quantum multiverse and would tend to stabilize the ratio of alternate realities to observable realities.

Consider that the number of potential realities derived from the wave functions of the multiverse would have a similar relationship and that this relationship would give rise to a similar (but likely larger) ratio as we might be seeing in the ratio of dark energy to dark matter.

Consider that as our universe unfolds the complexity embodied in the real and potential realities also expands. Therefore if these potentialities are related to dark matter and dark energy and if dark energy is bound to the expansion of the universe in order to accommodate these alternate realities then we would expect to see our universe expand according to the complexity of the underlying realities.

One might predict that the expansion rate of the universe is related mathematically to the upper bound of the predictable complexity of the universe at any point in time.

The predictable complexity in the universe would be a function of the kinds of particles and their potential interactions as represented by their wave functions with the upper limit being defined as the potentiality horizon.

Consider that each event gives rise to a new set of wave functions representing all possible next events. Consider that if we extrapolate from those wave functions a new set of wave functions that represent all of the possible events after those, and so on, that the amplitudes of the wave functions at each successive step would be reduced. The amplitude of these wave functions would continue to decrease as we move our predictions into the future until no wave function has any meaningful amplitude. This edge of predictability is the potentiality horizon.

The potentiality horizon is the point in the predictable future where the probability of any particular event becomes effectively equal to the probability of any other event (or non event). At this point all wave functions are essentially flat — this “flatness” might be related to the Planck constant in such a way that the amplitude of any variability in any wave function is indistinguishable from random chance.

Essentially all wave functions at the potentiality horizon disappear into the quantum foam that is the substrate of our universe. At this threshold no potential event can be distinguished from any other event. If dark energy is directly related to quantum potentiality then at this threshold no further expansion of the universe would occur. The rate of expansion would be directly tied to the rate of expansion of quantum potentiality and to the underlying complexity that drives it.
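As a toy illustration of that decay (my own numbers, not physics): suppose every event branches into k equally weighted outcomes, so each branch’s amplitude shrinks by a factor of 1/√k per step while the squared amplitudes still sum to one. A few lines of C++ show how quickly any single branch falls below an arbitrary “flatness” floor:

    #include <cmath>
    #include <cstdio>

    // Toy model of the "potentiality horizon": every event branches into
    // k equally weighted outcomes, so each branch's amplitude shrinks by
    // 1/sqrt(k) per step (the squared amplitudes still sum to 1).
    // Both k and epsilon are arbitrary illustrative choices, not physics.
    int main() {
        const double k = 4.0;         // hypothetical branches per event
        const double epsilon = 1e-9;  // hypothetical "flatness" floor
        double amplitude = 1.0;
        int steps = 0;
        while (amplitude >= epsilon) {
            amplitude /= std::sqrt(k);
            ++steps;
        }
        std::printf("Below %g after %d steps.\n", epsilon, steps);
        return 0;
    }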

So, to summarize:

What if dark matter and dark energy represent the matter and energy bound up in alternate realities and potential realities in a quantum multiverse?

If dark matter represents alternate realities invisible to us except through the weak influence of their gravity, and if dark energy represents the expansion of the universe to accommodate the wave functions describing possible future events in the quantum field for all realities (observable and unobservable), with an upper bound defined by the potentiality horizon, then we might predict that the expansion rate of the universe can be related to its inherent complexity at any point in time.

We might also predict that the flow of time can be related to the inherent complexity of the wave functions bound in any particular system such that a lower rate of events occurs when the inherent complexity of the system is reduced.

… well, those are my thoughts anyway 😉

Jul 03, 2010

I’m not one of “those” guys, really. You know the ones — the zealots who claim that their favorite OS or application is and will be forever more the end-all-be-all of computing.

As a rule I recommend and use the best tool for the job, whatever that might be. My main laptop runs Windows XP; my family and customers use just about every recent version of Windows or Linux. In fact, my own servers are a mix of Win2k*, RedHat, CentOS, and Ubuntu; my other laptop is Ubuntu; and I switch back and forth between MSOffice and OpenOffice as needed.

Today surprised me though. I realized that I had become biased against Ubuntu in a very insidious way: my expectations were simply not high enough. What’s weird about that is that I frequently recommend Ubuntu to clients and peers alike, and my company (MicroNeil in this case) even helps folks migrate to it and otherwise deploy it in their infrastructure! So how could I have developed my negative expectations?

I have a theory that it is because I find I have to defend myself from looking like “one of those linux guys” pretty frequently when in the company of my many “Windows-Only” friends and colleagues. Then there are all those horror stories about this or that problem and having to “go the long way around” to get something simple to work. I admit I’ve been stung by a few of those situations in the past myself.

But recently, not so much! Ubuntu has worked well in many situations and, though we tend to avoid setups that might become complicated, we really don’t miss anything by using it – and neither do the customers we’ve helped to migrate. On the contrary, in fact, we have far fewer problems with our Ubuntu customers than with our Windows friends.

Today’s story goes like this.

We have an old Toshiba laptop that we use for some special tasks. It came with Windows XP pro, and over the years we’ve re-kicked it a few times (which is sadly still a necessary evil from time to time on Windows boxen).

A recent patch caused this box to become unstable and so we were looking at having to re-kick it again. We thought we might take the opportunity to upgrade to Windows 7. We wanted to get it back up quickly so we hit the local store and purchased W7pro.

The installation was straightforward, and since we already have another box running W7 our expectations were that this would be a non-event and all would be happy shortly.

But, no. The first thing to cause us trouble was the external monitor. Boot up the laptop with the monitor attached and that is all you can see; the laptop’s screen was not recognized. Boot up without the external monitor and the laptop’s is the only display that will work. I spent some time searching various support forums for a solution and basically just found complaints without solutions.

After trying several of the recommended solutions without luck I was ready to quit and throw XP back on the box. Instead I followed a hunch and forced W7 to install all of the available patches just to see if it would work. IT DID!

Or, it seemed like it did. After the updates I was able to turn on the external display and set up the extended desktop… I was starting to feel pretty good about it. So I moved on to the printer. (more about the display madness later)

We have a networked HP2840 Printer/Scanner. We use it all the time. Joy again: the printer was recognized and installed without a hitch. Printed the test page. We were going to get out of this one alive (still had some day left).

Remember that scene in The Perfect Storm? They’re battered and beaten and nearly at the end. The sky opens up just a bit and they begin to see some light. It seems they’ve made it and they’re going to survive. Then the sky closes up again and they know they are doomed.

W7 refused to talk to the scanner on the HP2840. That’s a game changer in this case — the point of this particular laptop is accounting work that requires frequent scanning and faxing so the scanner on the HP2840 simply had to work or we would have to go back to XP.

Again I searched for solutions and found only unsolved complaints. Apparently there is little to no chance HP is going to solve this problem for W7 any time soon — at least that is what is claimed in the support forums. There are several workarounds but I was unable to make them fly on this box.

Remember the display that seemed to work? One of the workarounds for the scanner required a reboot. After the reboot the display drivers forgot how to talk to the external display again and it wouldn’t come back no matter how much I tweaked it!

Yep, like in The Perfect Storm, the sky had closed and we were doomed. Not to mention most of the day had evaporated on this project already and that too was ++ungood.

We decided to punt. We would put XP Pro back on the box and go back to what we know works. I suggested we might try Ubuntu– but that was not a popular recommendation under the circumstances… Too new an idea, and at this point we really just wanted to get things working. We didn’t want to open a new can of worms trying to get this to work again with the external monitor, and the printer, and the scanner, and…

See that? There it is– and I bought into it even though I knew better. We dismissed the idea of using Ubuntu because we expected to have trouble with it– But we shouldn’t have!

Nonetheless, that was the decision, and so Linda took over and started to install XP again… but there was a problem. XP would not install because W7 was already on the box. (The OS version on the hard drive is newer.) So much for simple.

Back in the day we would simply wipe the partition and start again — these days that’s not so easy… But, it’s easy enough. I grabbed an Ubuntu disk and threw it into the box. The idea was to let the Ubuntu install repartition the drive and then let XP have at it — Surely the XP install would have no qualms about killing off a linux install right?!

In for a penny, in for a pound.

As the Ubuntu install progressed past the repartitioning I was about to kill it off and throw the XP disk in… but something stopped me. I couldn’t quite bring myself to do it… so I let it go a little longer, and then a little longer, and a bit more…

I thought to myself that if I’ve already wasted a good part of the day on this I might as well let the Ubuntu install complete and get a feel for how much trouble it will be. If I ran into any issues I would throw the XP disk in the machine and let it rip.

I didn’t tell Linda about this though — she would have insisted I get on with the XP install, most likely. After all there was work piled up and this non-event had already turned into quite a time waster.

I busied myself on the white-board working out some new projects… and after a short time the install was complete. It was time for the smoke test.

Of course, the laptop sprang to life with Ubuntu and was plenty snappy. We’ve come to expect that.

I connected the external monitor, tweaked the settings, and it just plain worked. I let out a maniacal laugh which attracted Linda from the other end of the MadLab. I was hooked at this point and so I had to press on and see if the printer and scanner would also work.

It was one of those moments where you have two brains about it. You’re nearly convinced you will run into trouble, but the maniacal part of your brain has decided to do it anyway and let the sparks fly— It conjured up images of lightning leaping from electrodes, maniacal laughter and a complete disregard for the risk of almost certain death in the face of such a dangerous experiment! We pressed on…

I attempted to add the printer… Ubuntu discovered the printer on the network without my help. We loaded up the drivers and printed a test page. More maniacal laughter!

Now, what to do about the scanner… surely we were doomed… but the maniacal part of me prevailed. I launched Simple Scan and it already knew about the HP2840. Could it be?! I threw the freshly printed test page into the scanner and hit the button.

BEAUTIFUL!

All of it simply worked! No fuss. No searching in obscure places for drivers and complicated workarounds. It simply worked as advertised right out of the box!

Linda was impressed, but skeptical. One more thing, she said. “We have to map to the SAN… remember how much trouble that was on the other W7 box?” She was right – that wasn’t easy or obvious on W7 because the setup isn’t exactly what W7 wants to see and so we had to trick it into finding and connecting to the network storage.

I knew better at this point though. I had overcome my negative expectations… With a bit of flair and confidence I opened up the network places on the freshly minted Ubuntu laptop and watched as everything popped right into place.

Ubuntu to the Rescue

In retrospect I should have known better from the start. It has been a long time since we’ve run into any trouble getting Ubuntu (or CentOS, or RedHat…) to do what we needed. I suppose that what happened was that my experience with this particular box primed me to expect the worst and made me uncharacteristically risk averse.

  • XP ate itself after an ordinary automatic update.
  • W7 wouldn’t handle the display drivers until it was fully patched.
  • W7 wouldn’t talk to the HP2840 scanner.
  • Rebooting the box made the display drivers wonky.
  • XP wouldn’t install with W7 present.
  • I’d spent hours trying to find solutions to these only to find more complaints.
  • Yikes! This was supposed to be a “two bolt job”!!!

Next time I will know better. It’s time to re-think the expectations of the past and let them go — even (perhaps especially) when they are suggested by circumstances and trusted peers.

Knowing what I know now, I wish I’d started with Ubuntu and skipped this “opportunity for enlightenment.” On the other hand, I learned something about myself and my expectations and that was valuable too, if a bit painful.

However we got here it’s working now and that’s what matters 🙂

Ubuntu to the rescue!

Jun 14, 2010

I was pondering the oil spill in the Gulf, my work in automata, my fascination with robotics, and my friends with boats in Pensacola. Then I had another one of my crazy ideas — Hopefully it’s crazy enough to attract some interest and maybe even get done — so I thought I’d share. (That’s what blogs are for right?!)

What if we (collectively) develop an open source project to build (or refit) a fleet of small autonomous boats to patrol the Gulf looking for oil to collect and separate from the water. Here are the key points:

  • The craft are small and slow moving so they are not dangerous. They should be just large enough to carry a useful amount of collected oil, and just fast enough to get out of their own way and survive in the ocean.
  • The control systems are a collection of relatively simple, dedicated, open-source components designed to fail safe. If one subsystem doesn’t get what it expects from another subsystem then the robot stops and waits (signals) for help. More sophisticated systems can interact with the simpler control subsystems for exotic behaviors– but the basics would be very close to “hard-wired” reflexes.
  • Broken parts can be easily swapped out. Upgrades are equally easy to deploy by replacing swappable components with better ones.
  • Each is equipped with a centrifuge and a scoop/skimmer. Its instincts are to seek out oil on the surface and turn on its skimmer while it slowly moves through that patch of ocean. The centrifuge separates the oil from the water. The water goes back in the ocean; the oil goes into the tank.
  • When a robot finds oil it tells its friends via radio, using GPS to identify its location. Along the way it can gather other data that it can get for free from its control system’s sensors, such as temperature, wind data, and any other data from attached sensors.
  • The instincts of the robots are based on a collection of simple behaviors and reflexes (more later).
  • Each has an open tank in back where the separated oil is deposited. When the robot detects that its tank is sufficiently full (or that it otherwise needs service/fuel) it will drive toward a barge where it will wait in line for its tank to be pumped out and its fuel tank to be topped off.
  • It might even be possible to make solar powered versions that do not require fuel — they would sleep at night. This kind of thing might also be a backup system to get the robot to safety in case of a main engine failure.
  • Endurance and autonomous operation are key design goals. These do not need to be (nor do we want them to be) big or fast or even particularly efficient. The benefit comes from their numbers, their small size, their ability to collaborate with each other, and their “always on” attitude. Since they work all the time and do not require human intervention they do not have to be powerful— just persistent. Their numbers and distribution are what gets the job done.
  • Since the robots are unmanned there is little exposure hazard for people (or animals). Robots don’t get sick — they may break down, but they don’t care how toxic their environment is during or after they do their job. These in particular are ultimately disposable if they need to be.
  • The subsystems should be designed so that they can be used in purpose built craft or deployed in existing craft that are re-purposed for the task.

Instincts (roughly in order of priority; a code sketch follows the list):

  • Robots prefer to keep their distance from anything else on the surface of the water. They can do this with simple visual systems (or expensive LIDAR, or whatever folks dream up to put on their bot). Basically, if it doesn’t look like water they don’t want to be near it — unless, perhaps, it’s oil on top of water.
  • Robots prefer to stay within a minimum depth of water. The more shallow the water gets, the more the robot wants to be in deeper water. The safety limits for this can be partially enforced by separate subsystems, but the primary goal is for the robot’s instincts and natural behaviors to automatically achieve the safety goals “as a matter of habit.”
  • Robots like to be closer to other robots that are successful — but not closer than the safe distance described earlier. If they get too close to something then the prior rule takes over. This allows the robots to flock on a patch of oil without running into each other. They will also naturally separate themselves in a pattern that optimizes their ability to collect oil from that patch. As a matter of safety they will also stay away from other vessels even (perhaps especially) if they don’t act like other robots.
  • Robots like to be in places they have not been before. This instinct causes them to search in new places for oil.
  • If a robot can’t get close enough to a patch of oil because other robots have already flocked there then the robot will eventually stop trying and will go search somewhere else.
  • Robots like to be closer to shore (but not too close -see above) rather than farther away. This gives the robots a tendency to concentrate on oil that is threatening the coast and also minimizes the possibility that the robot will be lost in the deeper ocean. Remember the other rule above about keeping their distance from everything— that will keep them from getting too close to shore too. “Close to something” includes being in water that is too shallow.
  • Robots shut down if anything gets too close to them. So, if they malfunction and get close to something else, or if someone or something else gets close to them, their instinct is to STOP. This behavior allows authorities to approach a robot safely at any time for whatever purpose.
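For anyone who wants to tinker, here is a minimal C++ sketch of how instincts like these might be arbitrated: check them in priority order and let the first one that triggers drive the boat. Every name, sensor, and threshold below is hypothetical; a real craft would wire these to its control subsystems.

    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical sensor snapshot fed in by the control subsystems.
    struct Sensors {
        double nearest_object_m;  // distance to the closest contact
        double depth_m;           // water depth under the hull
        bool   oil_detected;      // skimmer head sees surface oil
    };

    // An instinct wants control under some condition and then acts.
    struct Instinct {
        std::string name;
        std::function<bool(const Sensors&)> triggered;
        std::function<void()> act;
    };

    int main() {
        Sensors now{12.0, 30.0, true};

        // Ordered roughly like the list above: safety instincts first.
        std::vector<Instinct> instincts = {
            {"emergency-stop",
             [](const Sensors& s) { return s.nearest_object_m < 5.0; },
             [] { /* cut throttle, signal for help */ }},
            {"seek-deeper-water",
             [](const Sensors& s) { return s.depth_m < 10.0; },
             [] { /* steer toward deeper water */ }},
            {"skim-oil",
             [](const Sensors& s) { return s.oil_detected; },
             [] { /* lower the skimmer, creep forward */ }},
            {"explore",
             [](const Sensors&) { return true; },  // default behavior
             [] { /* head somewhere we haven't been */ }},
        };

        // The first triggered instinct wins; lower priorities are suppressed.
        for (const auto& i : instincts) {
            if (i.triggered(now)) { i.act(); break; }
        }
        return 0;
    }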

What I envision here is something that can be mass produced easily by anybody with the will and facilities to do it. All of the hardware and software components would be open-sourced so that they can be refined through experience and enhanced by everyone who is participating.

It seems to me that the problem with the oil that is already in the Gulf is that it is spread over a very wide area and it is broken up into lots of small patches that are too numerous to track and manage from a central location.

A fleet of robust, inexpensive, safe, autonomous skimmers would be able to collectively solve this problem through a distributed intelligence. Along the way the same fleet would be able to provide a tremendous amount of information about conditions that is currently not available.

The design is simple, and the craft are expendable. Since each is collecting oil that is in the water and shouldn’t be, a catastrophic failure that sinks a robot simply puts the oil it collected back in the water. Not great, but also not worse than it was before the oil was collected in the first place.

If this idea catches on then I believe we (collectively) could produce huge numbers of these in a very short time – and each one would contribute to solving a problem that is currently not solvable. Also, as the technology is refined, the same systems would be available for any similar events that occur later… After all, the world is not going to stop drilling for oil in the deep oceans (or elsewhere) until it is all but gone. That is an unfortunate fact, in my opinion, but a fact none the less.

I believe also that the technology that would be developed through the creation of this fleet and the subsystems that support it would be useful for many other purposes as well… ranging from automated search and rescue to border patrol and anti-terrorism efforts.

This is a rough draft taken from the back of the envelope.

Let me know what you think!

I would love to work on a project like this. 🙂

I would love even more to see LOTS of folks working on this.

PS. Just before pushing the button I had another idea… (as I often do). What if the robots also had behaviors that allowed them to bucket-brigade oil toward collection points? If a slow-moving robot could not possibly make it out to the barge from its station near the shore, it would instead make a trip toward the barge, and upon meeting up with one of its buddies it could hand its cargo off. Consider a kind of dance: the bot giving leads the bot accepting; it dumps its cargo into the water just ahead of its buddy, and its buddy scoops it up. At the very least the oil is farther from shore, and at best most of the transfer is completed safely without any single robot needing the range or speed required to make the entire trip to the collection point… In fact, this could be the primary mechanism: bots could dump their cargo in a collection area a safe distance from the barge, and other specialized equipment could safely collect it from there…

Apr 28, 2010

No, I’m not kidding…

Race Conditions are evil right?! When you have more than one thread racing to use a piece of shared data and that data is not protected by some kind of locking mechanism you can get intermittent nonsensical errors that cause hair loss, weight gain, and caffeine addiction.

The facts of life:

Consider a = a + b; Simple enough and very common. On the metal this works out to something like:

Step 1: Look at a and keep it in mind (put it in a register).
Step 2: Look at b and keep it in mind (put it in a different register).
Step 3: Add a and b together (put that in a register).
Step 4: Write down the new value of a (put the sum in memory).

Still pretty simple. Now suppose two threads are doing it without protection. There is no mutex or other locking mechanism protecting the value of a.

Most of the time one thread will get there first and finish first. The other thread comes later and nobody is surprised with the results. But suppose both threads get there at the same time:

Say the value of a starts off at 4 and the value of b is 2.

Thread 1 reads a (step 1).
Thread 2 reads a (step 1).
Thread 1 reads b (step 2).
Thread 2 reads b (step 2).
Thread 1 adds a and b (step 3).
Thread 2 adds a and b (step 3).
Thread 1 puts the result into a (step 4).
Thread 2 puts the result into a (step 4).
Now a has the value 6.

But a should be 8 because the process happened twice! As a result your program doesn’t work properly; your customer is frustrated; you pull out your hair trying to figure out why the computer can’t add sometimes; you become intimately familiar with the pizza delivery guy; and you’re up all night pumping caffeine.

This is why we are taught never to share data without protection. Most of the time there may be no consequences (one thread starts and finishes before the other). But occasionally the two threads will come together at the same time and change your life. It gets even stranger if you have 3 or more involved!
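Here is a minimal C++ sketch of that lost-update race (an illustration, not code from any of our engines). Run it a few times and the total will usually come up short:

    #include <cstdio>
    #include <thread>

    long a = 0;        // shared and deliberately unprotected
    const long b = 2;

    // Each thread performs "a = a + b" many times. The read, the add,
    // and the write are separate steps, so the threads can interleave
    // and one thread's write can clobber the other's -- a lost update.
    void adder() {
        for (int i = 0; i < 1000000; ++i)
            a = a + b;
    }

    int main() {
        std::thread t1(adder);
        std::thread t2(adder);
        t1.join();
        t2.join();
        std::printf("expected %ld, got %ld\n", 2 * 1000000 * b, a);
        return 0;
    }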

The trouble is that protection is complicated: It interrupts the flow of the program; it slows things down; and sometimes you just don’t think about it when you need to.

The story of RTSNF and MPPE:

All of this becomes critical when you’re building a database. I’m currently in the midst of adapting MicroNeil’s Multi-Path Pattern Engine (MPPE) technology for use in the Real-Time Message Sniffer engine (RTSNF).

RTSNF will allow us to scan messages even faster than the current engine which is based on MicroNeil’s folded token matrix technology. RTSNF will also have a smaller memory footprint (which will please OEMs and appliance developers). But the most interesting feature is that it will allow us to distribute new rules to all active SNF nodes within 90 seconds of their creation.

This means that most of the time we will be able to block new spam and virus outbreaks and their variants on all of our customers’ systems within 1 minute of when we see a new piece of spam or malware in our traps.

It also means that we have to be able to make real-time incremental changes to each rulebase without slowing down the message scanning process.

How do you do such a thing? You break the rules!

You’re saying race conditions aren’t evil?? You’re MAD!
(Yes, I am. It says so in my blog.)

Updating a database without causing corruption usually requires locking mechanisms that prevent partially updated data from being read by one thread while the data is being changed by another. If you don’t use a locking mechanism then race conditions virtually guarantee you will have unexpected (corrupted) results.

In the case of MPPE and RTSNF we get around this by carefully mapping out all of the possible states that can occur from race conditions at a very low level. Then we structure our data and our read and write processes so that they take advantage of the conditions we have mapped without producing errors.

This eliminates the “unintended” part of the consequences and breaks the apparent link between race conditions and certain disaster. The result is that these engines never need to slow down to make an update. Pattern scans can continue at full speed on multiple threads while new updates are in progress.

Here is a simplified example:

Consider a string of symbols: ABCDEFG

Now imagine that each symbol is a kind of pointer that stands in for other data — such as a record in a database or a field in a record. We call this symbolic decomposition. So, for example, the structure ABCDEFG might represent an address in a contact list. The symbol A might represent the Name, B the box number, C the street, D the city, etc… Somewhere else there is a symbol that represents the entire structure ABCDEFG, and so on.

We want to update the record that is represented by D without first locking the data and stopping any threads that might read that data.

Each of these symbols is just a number, and so it can be manipulated atomically. When we tell the processor to change D to Q there is no way that processor (or any other) will see something in between D and Q. Each will only see one or the other. With almost no exceptions you can count on this being the case when you are storing or retrieving a value that is equal in length to the processor’s word size or shorter. Some processors (and libraries) provide other atomic operations too, but for our purposes we want a mechanism that is virtually guaranteed to be ubiquitous and available right down to the machine code if we need it.
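In modern C++ terms, that word-sized guarantee is what std::atomic provides portably. A tiny sketch (my illustration, using a character code as the “symbol”):

    #include <atomic>

    // A "symbol" is just a word-sized number, so one store updates it.
    // Readers never see a torn, in-between value: every load returns
    // either the old symbol ('D') or the new one ('Q').
    std::atomic<unsigned long> symbol{'D'};

    void update() { symbol.store('Q', std::memory_order_release); }

    unsigned long readSymbol() { return symbol.load(std::memory_order_acquire); }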

The trick is that without protection we can’t be sure when one thread will read any particular symbol in the context of when that symbol might be changed. So we have two possible outcomes when we change D to Q for each thread that might be reading that symbol. Either the reading thread will see the original D or it will see the updated Q.

This lack of synchronization means that some of the reading threads may get old results for some period of time while others get new results. That’s generally a bad thing at higher levels of abstraction such as when we are working with serialized transactions. However, we are working at a very low level where our application doesn’t require serialization. Note also that if we did need to support serialization at a higher level we could do that by leveraging these techniques to build constructs that satisfy those requirements.

So we’ve talked about using symbolic decomposition to represent our data. Using symbolic decomposition we can make changes using ubiquitous atomic operations (like writing or reading a single word of memory) and we can predict the outcomes of the race conditions we allow. This means we can structure our application to account for these conditions without error and therefore we can skip conventional data protection mechanisms.

There is one more piece to this technique that is important and might not be obvious so I’ll mention it quickly.

In order to leverage this technique you must also be very careful how you structure your updates. The updates must remain invisible until they are complete. Only the thread making the update should know anything about the change until it’s complete and ready to be posted. So, for example, if we want to change the city in our address that operation must be done this way:

The symbols ABCDEFG represent an address record in our database.
D represents a specific city name (a string field) in that record.

In order to change the city we first create a new string in empty space and represent that with some new symbol.

Q => “New City”

When we have allocated the new string, loaded the data into it, and acquired the new symbol we can swap it into our address record.

ABCDEFG becomes ABCQEFG

The entire creation of Q, no matter how complex that operation may be, MUST be completed before we make the higher level change. That’s a key ingredient to this secret sauce!
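Putting the pieces together, here is a minimal sketch of that build-in-private, publish-atomically pattern (my illustration, not the actual MPPE/RTSNF code). The record’s city “symbol” is an atomic pointer; readers see either the old city or the new one, never a half-built string:

    #include <atomic>
    #include <cstdio>
    #include <string>

    // The city field of our address record, held as an atomic pointer.
    std::atomic<const std::string*> city{new std::string("Old City")};

    void reader() {
        // One atomic load: we get the old string or the new string,
        // always fully constructed.
        const std::string* c = city.load(std::memory_order_acquire);
        std::printf("city: %s\n", c->c_str());
    }

    void writer() {
        // Build Q completely in private space first...
        const std::string* q = new std::string("New City");
        // ...then publish it with a single atomic pointer write.
        city.store(q, std::memory_order_release);
        // A real engine must also decide when the old string can be
        // reclaimed (e.g. deferred cleanup) -- omitted for brevity.
    }

    int main() {
        reader();   // prints "city: Old City"
        writer();
        reader();   // prints "city: New City"
        return 0;
    }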

Now go enjoy breaking some rules! You know you want to 🙂

Mar 23, 2010

The Direct Sound EX-29 extreme isolation headphones absolutely live up to the hype. Bleed is non-existent; they are comfortable; they are clear; and they are very quiet. I’ve been using these in the studio for a few days now and I don’t know how I ever lived without them. Really, they are that good!

I try to spend a good deal of time behind the kit if I can swing it – just for fun, but also working out drum tracks for new songs, and of course, recording new material. These headphones shine in all of these applications.

Just Jammin’:

When I’m just jammin’ and keeping my chops up these cans help me keep everything at a sane volume which means I can work longer without fatigue and without damaging my hearing. In the past I have used ear plugs of various types and they have all had a few critical drawbacks that the EX-29s don’t. Two that spring to mind are comfort and clarity.

[ What do you mean “clarity”… ear protection isn’t supposed to be clear anyway! ] I MEAN- ear plugs aren’t clear – ever! At least not in my experience. Nor are most other practical solutions.

If you’ve spent any serious time (multi-hour sessions) behind the kit with ear plugs you know what I’m talking about — You can’t hear what you’re doing and it really takes a toll on your subtlety. Most likely you got frustrated at some point and flicked the ear plugs across the room so you could hear again. (You did have them in at first didn’t you??!)

The EX-29s surprisingly don’t have this problem. One of the first things I noticed was how flat the attenuation was. After a few minutes in the relative quiet of the EX-29s I adapted and was able to hear everything – just at a lower level. This means I don’t lose crush rolls, ghost strokes, and cymbal shading for the sake of my hearing. Don’t get me wrong — it’s not perfect 🙂 but it is worlds better than any ear plugs I’ve ever used and the translation of subtlety has a big pay-off in that I don’t suffer any fatigue from trying too hard to hear what I’m doing.

Then there’s comfort. Of course phones of any kind are going to be more comfortable than plugs… but the EX-29s do better than that. They are truly comfortable even after more than a couple of hours. They don’t squeeze your head, and they lack that pillows-on-the-ears feeling that typically comes with good protection.

Writing:

When I’m working out new drum tracks I often spend hours trying things out. That means playing back scratch tracks, samples, and loops and playing along to find the right grooves and fills. I used to use my Sony MDR-V600s for this. I would try to keep things at a low level, or I might use a bit of cotton (if I thought of it)… but invariably things would eventually get out of control or I would get tired from fighting with it and would have to come back later.

The EX-29s have solved this problem for me. I don’t miss any of the clarity I get from my V600s AND I don’t need any cotton for the ears :-).

The first thing I noticed when I used the EX-29s was that I had to turn my Furman monitor system way down! (ok, 2-3 notches) Everything was still clear, and I could hear my playing along with the playback without struggling to adjust to unnatural muffling. Even better – I didn’t get frustrated with it and discard my protection!

Recording:

Recording sessions are where the EX-29s really come through. Once the mics are on and every sound matters there are several things that shine about the EX-29s. In no particular order:

The isolation is absolutely fantastic! I frequently play pieces that demand a lot of dynamic range (I’m an art-rock guy at heart). It’s surprising how sensitive the mics need to be when you want to capture the subtlety of such a loud instrument. Any bleed-through from the playback can destroy the subtlety of a quiet passage by forcing re-takes or necessitating the use of gating, expansion, and other trickery. It’s no wonder drums are so frequently sequenced these days– it boils down to time and effort (which means money).

The EX-29s truly solve the isolation problem in two ways. The attenuation of the shells is quite substantial but in addition to that the quality of the drivers is also fantastic! This combination means that you can achieve comfort and clarity at substantially reduced playback levels. Not only is your playback not likely to get into your mics, but it is also at a much lower level to begin with.

Do the math (I did): you not only drop about 30 dB getting from the inside of the EX-29s to the outside; you also drop an additional 12-15 dB by using lower levels in the first place. That’s roughly 45 dB of effective isolation without struggling to adapt or building up fatigue trying to “hear it”. Compare that to what you’re doing now and chances are you’ll see a 20 dB advantage with the EX-29s, not to mention more comfortable and productive recording sessions.

I’ll admit it – When I first heard about the EX-29s I was more than a little skeptical. They just seemed too good to be true. When I finally broke down and ordered them it was with the attitude that I’d give them a shot and if (when) they didn’t quite cut it I would find some other use for them.

No longer – These EX-29s are the real deal. They have earned a permanent home in my studio. I’m glad I picked up the extra pair to hang on my book shelf so we won’t have to fight over who gets to use them 🙂

Mar 04, 2010

Those trixy blackhatzes are making a real mess of things these days. The last day or so in particular has been a festival of hacked servers and exploited free-hosting sites. Just look at this graph from our soon-to-be-launched Spam-Weather site:


While spammers have always enjoyed exploiting free services they have been particularly busy at it the last few days. The favorites this time around have been webstarts and doodlekits. What makes sites like these so attractive to the blackhats is that there is virtually no security on the sites. Anybody can sign up for a new account in minutes without any significant challenges. This means that the entire process can be scripted and automated by the blackhats.

After they’ve used one URL for a while (and it begins to get filtered) they simply light up another one, and so on, and so on.

Some email administrators are tempted to block all messages containing links to free hosting sites — and for some that might be an option — but for PROs like us it’s not. There are usually plenty of legitimate messages floating around with links to free-hosted web sites so blocking all such links would definitely lead to false positives (unacceptable).

At ARM we have a wide range of defenses against these messages so we’re able to block not only on specific links but also on message structures, obfuscation techniques, and other artifacts that are always part of these messages. In addition to that our tools also allow us to predict what the next round of messages might look like so that even when they do change things up we’re often ahead of them.

No mistake about it though… it’s hard work!

It would be _MUCH_ better for everyone if folks that offer free hosting and other commonly exploited services (like URL shortening, blog hosting, and free email accounts) would do a better job keeping things secure.

Feb 06, 2010

Just after we moved here a dozen or so years ago we had a snow storm that was pretty good. It was quite an adventure.

At one point I had to abandon our car in a grocery store parking lot and walk home. On the final stretch of that walk I tried to take a shortcut down the hill behind our house and had to abandon the attempt and go around; the snow was up to my waist and 5 minutes of effort would get you only a few meters of progress. I could see the house, and Linda could see me. We waved, and I turned around to walk the rest of the way on the roads, which were just a little better.

The lentil soup w/ ham was amazingly good after that long walk home to our cozy house. We still try to recreate that experience from time to time.

This storm is bigger than that, but we’re not going out in it except to shovel a bit and have some fun. This time we’re well prepared and perhaps a little less adventurous.  The boys are having a blast — I hope they’re building some happy memories along with their snow forts. I’m sure they are.

In the midst of all this I can’t help but think of the homeless though. The sleeping bags MicroNeil purchased for TOP arrived on Friday. The original plan was for them to go to DC this weekend. The weather had other plans — We’ll push to get them delivered as soon as possible after the storm. I know the folks at TOP are anxious too.

As the snow falls outside my office window my mind drifts back to home, to the boys playing outside, to the beauty of it, and the memories we’ll make of it.

This kind of snow is the stuff of legend… the kind of thing that only happens around here once or twice in your childhood and maybe a few times in your life. That keeps it special. For folks who live much north of here it’s probably just another snowy day.

For us here in the mid-Atlantic it happens just often enough; and when it does it’s an opportunity for everyone to pause and reflect – to change their lives for a few days, talk to their neighbors, have a few adventures, and make some memories – stories they can share.

To quote Ernest T. Bass: “I was right there in it!”

If you’re here in it with us, or otherwise in similar circumstances, we wish you well and hope all of your adventures ultimately turn into happy memories.

The rest is pictures…

Feb 03, 2010

Noise, Noise, Noise, Noise! grumbled the Grinch… and I feel his pain. One of the challenges of building a recording studio is noise. We live in a very noisy world.

One way we deal with noise is to put noisy things in a special room which can be isolated from the recording environment. Here at the Mad Lab we have a utility room where we keep our server farm, CD/DVD production robot, air-handler, and other noisy things. The trick is: How do we keep all that stuff quiet?

There are two things we want to do to this room: Reduce the noise inside the room as much as possible and then prevent whatever is left over from leaking out.

The first step to treating the room was to significantly increase the density of the walls. At the same time we wanted to increase the structural integrity of the paneling on the opposite side. What we did was to add a thick, dense layer of work-bench material to the outside of the wall directly behind the paneling (another story we’ll post later).

The next step was to add sound absorbing material to the inside of the room to absorb as much noise as possible (and convert it to heat). The thinking behind this is that the more sound we can absorb the less sound there is to bounce around the room and leak out.

In addition, we decided to put physics to work for us and install this material so that it is suspended from the studs flush with the inside of the wall, leaving an air gap between the insulation and the outer wall material. This accomplishes two things. The insulation on the inside surface is mechanically isolated from the outer wall structure, preventing most mechanical sound transmission. Also, the air gap represents an additional change in density, so any sound attempting to travel through the wall from the inside passes through at least three separate media (more on this in a moment).

We did some research and contacted our friends at Sweetwater to purchase some Auralex mineral fiber insulation. Then to make it easier to handle we had our friends at Silk Supply Company precision cut the material and manufacture fabric covered panels.

The custom made panels fit perfectly between the studs and leave a gap of about half an inch between them and the dense outside wall. When sound attempts to escape through the wall three things happen.

First, a lot of the energy is absorbed into the mineral fibers; the fabric covering is acoustically transparent. This significantly reduces any echoes inside the room and converts a good portion of the sound to heat. This effect is enhanced by the loose mechanical coupling of the installation. Since the panels are suspended from the front surface of the studs, any mechanical energy that might be transmitted through the studs is first significantly attenuated as it travels through the mineral fibers to the edges.

Second, any sound that makes it through the insulation escapes into the air gap where the change in density causes the sound to refract… well, sort of. The size of the gap is very small compared to the wavelength of most sounds, so most of the effect is really a mechanical decoupling of the mineral fiber and the hard surface of the outer wall material.

Third, much of the sound in the air gap is reflected back toward the mineral fiber by the smooth, hard surface of the outer wall material. In addition the density of the material further attenuates whatever is not reflected.

Since one of my goals was to attenuate the noise inside the room (and for a number of other reasons) I didn’t want to go the more conventional route of adding thick layers of drywall.

In line with this, the fabric covering has a few additional benefits. To start with, the panels are much easier to install, and if need be a panel can be temporarily removed by pulling the staples and tugging the insulation out of its slot. This might be useful if I need to run any additional cabling, for example. In addition to that, the fabric reinforces the mineral fiber and keeps it well contained so it doesn’t slough off into the room over time.

As usual I enlisted Ian and Leo to perform the installation. They had a lot of fun exploring the change in acoustic properties by alternately talking in front of sections where they had installed the panels and sections where the panels were not yet installed.

Jan 03, 2010

We’re doing a lot of cross-platform software development these days, and that means doing a lot of cross-platform testing too.

The best way to handle that these days is with virtual computing since it allows you to use one box to run dozens of platforms (operating system and software configurations) at once – even simultaneously if you wish (and we do).

Until recently we were outsourcing this part of our operation but that turned out to be very painful. To date nobody in the cloud-computing game quite has the interface we need for making this work. In particular we need the ability to keep pristine images of platforms that we can load on demand. We also need the ability to create new reusable snapshots as needed.

All of this exists very nicely in VMware, of course, but to access it you really need to have your own VMware setup in-house (at least that’s true at the moment). So I ordered a new Dell PowerEdge 2970 to run at the Mad Lab with ESXi 4.

Hey Leo - Install that for me

Around the Mad Lab we like to take every opportunity to teach, learn, and experiment so I enlisted Leo to get the server installed.

The first thing that occurred to me after it arrived is that it’s big and heavy. We have a rack in the lab from our old data center in Sterling, but it’s one of the lighter-duty units so some “adaptation” would be required. Hopefully not too much.

Mad Rack before the new server

Another concern that I had is that this server might be too loud. After all, boxes like this are used to living in loud concrete and steel buildings where people do not go. I need to run this box right next to the main tracking room in the recording studio. No matter though – it must be done, and I’ve gotten pretty good at treating noisy equipment so that it doesn’t cause problems. In fact, the rack lives in a special utility room next to the air handler so everything I do in there to isolate that room acoustically will help with this too.

Opening the box we quickly discovered I was right about the size. The rail kit that came with the device was clearly too large for the rack. We would have to find a different solution.

The server itself would stick out the back of the rack a bit, so I had Leo measure its depth and check that against the depth we had available in the rack.

As it turned out we needed to move the rack forward a bit in order to leave enough space behind it. The rack is currently installed in front of a structural column and some framing. Once Leo measured the available distance we moved the rack forward about 8 inches. That provided plenty of space for the new server and access to its wiring.

Gosh those rails look big

How long is it?

Must move the rack to make room

That solved one problem but we still had the issue of the rails being too long for the rack. Normally I might take a hack saw to them and modify them to fit but in this case that would not be possible – and besides: the rail kit from Dell is great and we might use it later if we ever move this server out of the Mad Lab and into one of the data centers.

Luckily I’d solved this problem before and it turned out we had the parts to do it this time as well. Each of these slim-line racks has a number of cross members installed for ventilation and stability. These are pretty tough pieces of kit though so they can be used in a pinch to act as supports for the front and back of a long server like this. Just our luck we had two installed – they just needed to be moved a bit.

I explained to Leo how the holes are drilled in a rack, the concept of “units” (1-U, 2-U, etc), and where I wanted the new server to live. Leo measured the height and Ian counted holes to find the new locations for the front and back braces.

Use these braces instead of rails

Teamwork

Then Leo held the cabling back while I loaded the new server into the rack. We keep power cables on the left side and signal cables on the right (from the front). The gap between the sides and the rails makes for nice channels to keep the cabling neat… well, ok, neat enough ;-). If this rack were living in a data center then it wouldn’t be modified very often and all of the cables would be tightly controlled. This rack lives at the Mad Lab where things are frequently moved around and so we allow for a little more chaos.

Once the server is over the first brace it’s easy to manage. In fact, it’s pretty light as servers go. This kind of thing can be done with one person but it’s always best to have a helper.

Power Left, Signals Right

Slides right in with a little help

Once the server was in place we tightened up the thumb screws on the front. If the braces weren’t in the right place this wouldn’t have worked because the screw holes wouldn’t have aligned. Leo and Ian had it nailed and the screws mated up perfectly.

Tighten the left thumb screw

Tighten the right thumb screw

With the physical installation out of the way it was time to wire up the beast. It’s a bit dark in the back of the rack so we needed some light. Luckily this year I got one of the best stocking stuffers ever – a HUGlight.

The LEDs are bright and the bendable arms are sturdy. You can bend the thing to hang it in your work area, snake it through holes to put light where you need it, stand it on the floor pointing up at your work… The possibilities are endless. Leo thought of a way to use it that I hadn’t yet – he made it into a hat!

HUGLight - Best stocking stuffer ever!

Leo wears HUGlight like a hat

Once the wiring was complete I threw the keyboard and monitor on top, plugged it in, and pushed the button (smoke test). Sure enough, as I feared, the server sounded like a jet engine when it started up. For a moment it was the loudest thing in the house and clearly could not live there next to the studio if it was going to be that loud… either that or I would have to turn it off from time to time, and I sure didn’t want to do that.

Then after a few seconds the fans throttled back and it became surprisingly quiet! In fact it turns out that with the door of the rack closed and the existing acoustic treatments I’ve made to the room this server will be fine right where it is. I will continue to treat the room to isolate it (that project is only just beginning) but for now what we have is sufficient. What a relief.

Within a minute or two I had the system configured and ready for ESXi.

It Is Alive!

The keyboard and monitor wouldn’t be needed for long. One of the best decisions I made was to order the server with DRAC installed. Once it was configured with an IP address and connected to the network I could access the console from anywhere on my control network with my web browser (and Java). Not only that but all of the health monitors (and then some) are also available. It was well worth the few extra dollars it cost. I doubt I’ll ever install another server without it.

Back in the day we needed to physically lay hands on servers to restart them; and we had to use special software and hardware gadgets to diagnose power or temperature problems – up hill, both ways, bare feet, in the snow!! But I digress…

Mad Rack After

After that I installed ESXi, pulled out the disk and closed the door. I was able to perform the rest of the setup from my desk:

  • Configured the ESXi password, control network parameters, etc.
  • Downloaded vSphere client and installed it.
  • Connected to the ESXi host, installed the license key.
  • Set up the first VM to run Ubuntu 9.10 with multiple CPUs.
  • … and so on

The server has now been alive and doing real work for a few days and continues to run smoothly. In fact I’ve not had to go back into that room since except to look at the blinking lights (a perk).

We’re doing a lot of cross-platform software development these days, and that means doing a lot of cross-platform testing too.

The best way to handle that these days is with virtual computing since it allows you to use one box to run dozens of platforms (operating system and software configurations) at once – even simultaneously if you wish (and we do).

Until recently we were outsourcing this part of our operation, but that turned out to be very painful. To date, nobody in the cloud-computing game quite has the interface we need to make this work. In particular, we need the ability to keep pristine images of platforms that we can load on demand, and to create new reusable snapshots as we go.

All of this exists very nicely in VMware, of course, but to access it you really need to have your own VMware setup in-house (at least that’s true at the moment). So I ordered a new Dell PowerEdge 2970 to run ESXi 4 at the Mad Lab.
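For a sense of what that snapshot workflow looks like when you script it, here is a minimal sketch using VMware’s pyVmomi Python bindings – one way to drive ESXi programmatically, not necessarily how you’d do it from the vSphere client. The host name, credentials, and VM name are placeholders:

    # Revert a test VM to its pristine snapshot, then record a new baseline.
    # Minimal sketch using pyVmomi; host, credentials, and VM name are
    # made up -- adjust for your own ESXi setup.
    import atexit
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # lab box, self-signed cert
    si = SmartConnect(host="esxi.madlab.example", user="root",
                      pwd="secret", sslContext=context)
    atexit.register(Disconnect, si)

    def find_vm(name):
        """Walk the inventory and return the first VM with the given name."""
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        try:
            return next(vm for vm in view.view if vm.name == name)
        finally:
            view.Destroy()

    vm = find_vm("ubuntu-9.10-pristine")

    # Roll the platform back to its clean baseline before a test run
    # (assumes the VM already has at least one snapshot)...
    vm.snapshot.currentSnapshot.RevertToSnapshot_Task()

    # ...or capture a new reusable snapshot after updating the image.
    vm.CreateSnapshot_Task(name="baseline", description="clean install",
                           memory=False, quiesce=False)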

(Leo, install that for me)

Around the Mad Lab we like to take every opportunity to teach, learn, and experiment, so I enlisted Leo to get the server installed.

The first thing that occurred to me after it arrived was that it’s big and heavy. We have a rack in the lab from our old data center in Sterling, but it’s one of the lighter-duty units, so some “adaptation” would be required. Hopefully not too much.

(Mad Rack Before)

Another concern I had was that this server might be too loud. After all, boxes like this are used to living in loud concrete-and-steel buildings where people do not go, and I need to run this one right next to the main tracking room in the recording studio. No matter though – it must be done, and I’ve gotten pretty good at treating noisy equipment so that it doesn’t cause problems. In fact, the rack lives in a special utility room next to the HVAC, so everything I do to isolate that room acoustically will help with this too.

Opening the box, we quickly discovered I was right about the size. The rail kit that came with the device was clearly too large for the rack. We would have to find a different solution.

(Gosh, those rails look big)

Clearly the server itself would stick out the back of the rack a bit, so I had Leo measure its depth and check that against the depth we had available in the rack.

(How long is it?)

As it turned out we needed to move the rack forward a bit in order to leave enough space behind it. The rack is installed in front of a structural column and some framing. Once Leo measured the available distance, we moved the rack forward about 8 inches. That provided plenty of space for the new server and access to its wiring.

(Must move the rack to make room)

That solved one problem, but we still had the issue of the rails being too long for the rack. Normally I might take a hacksaw to them and modify them to fit, but in this case that wasn’t an option – and besides, the rail kit from Dell is great and we might use it later if we ever move this server out of the Mad Lab and into one of the data centers.

Luckily I’d solved this problem before, and it turned out we had the parts to do it this time as well. Each of these slim-line racks has a number of cross members installed for ventilation and stability. These are pretty tough pieces of kit, so in a pinch they can act as supports for the front and back of a long server like this. Just our luck, we had two installed – they only needed to be moved a bit.

(We will use these braces instead of rails)

I explained to Leo how the holes are drilled in a rack, the concept of “units” (1U, 2U, etc.), and where I wanted the new server to live. Leo measured the height and Ian counted holes to find the new locations for the front and back braces.
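For the curious, the hole arithmetic is simple. In a standard EIA-310 rack one unit is 1.75 inches tall and carries three mounting holes, spaced 5/8", 5/8", and then 1/2" across the boundary to the next unit. A quick sanity check in Python (the dimensions are from the spec; the script itself is just for illustration):

    # Where should the mounting holes be for a given rack unit?
    # EIA-310: 1U = 1.75", with hole centers at 1/4", 7/8", and 1-1/2"
    # measured from the bottom of that unit.
    UNIT_HEIGHT = 1.75
    HOLE_OFFSETS = (0.25, 0.875, 1.5)

    def hole_centers(unit_number):
        """Hole centers in inches from the rack bottom (unit 1 is lowest)."""
        base = (unit_number - 1) * UNIT_HEIGHT
        return [base + offset for offset in HOLE_OFFSETS]

    for u in (1, 2, 3):
        print("U%d holes at %s inches" % (u, hole_centers(u)))
    # U1 holes at [0.25, 0.875, 1.5] inches
    # U2 holes at [2.0, 2.625, 3.25] inches
    # U3 holes at [3.75, 4.375, 5.0] inches

Count holes from a known reference point and the braces land exactly where the server’s thumb screws expect them.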

(Ian and Leo install the back support)

Then Leo held the cabling back while I loaded the new server into the rack. We keep power cables on the left side and signal cables on the right (as seen from the front). The gap between the sides and the rails makes nice channels to keep the cabling neat… well, ok, neat enough ;-). If this rack were living in a data center it wouldn’t be modified very often and all of the cables would be tightly controlled. This rack lives at the Mad Lab, where things are frequently moved around, so we allow for a little more chaos.

(Hold power on the left, signal on the right)

Once the server is over the first brace it’s easy to manage. In fact, it’s pretty light as servers go. This kind of thing can be done by one person, but it’s always best to have a helper.

(Slides right in)

Once the server was in place we tightened up the thumb screws on the front. If the braces hadn’t been in the right place this wouldn’t have worked because the screw holes wouldn’t have aligned. Leo and Ian had it nailed and the screws mated up perfectly.

(Tighten the left thumb screw) (Tighten the right thumb screw)

With the physical installation out of the way it was time to wire up the beast. It’s a bit dark in the back of the rack, so we needed some light. Luckily, this year I got one of the best stocking stuffers ever – a HUGlight.

(HUGlight – best stocking stuffer ever!)

The LEDs are bright and the bendable arms are sturdy. You can bend the thing to hang it in your work area, snake it through holes to put light where you need it, stand it on the floor pointing up at your work… The possibilities are endless. Leo thought of a way to use it that I hadn’t yet – he made it into a hat!

(Leo wears HUGlight like a hat)

Once the wiring was complete I threw the keyboard and monitor on top, plugged everything in, and pushed the button (smoke test). Sure enough, as I feared, the server sounded like a jet engine when it started up. For a moment it was the loudest thing in the house, and it clearly could not live next to the studio if it was going to stay that loud… either that or I would have to turn it off from time to time, and I sure didn’t want to do that.

Then after a few seconds the fans throttled back and it became surprisingly quiet! In fact it turns out that with the door of the rack closed and the existing acoustic treatments I’ve made to the room this server will be fine right where it is. I will continue to treat the room to isolate it (that project is only just beginning) but for now what we have is sufficient. What a relief.

Within a minute or two I had the system configured and ready for ESXi.

(It Is Alive!)

The keyboard and monitor wouldn’t be needed for long. One of the best decisions I made was to order the server with a DRAC (Dell Remote Access Controller) installed. Once it was configured with an IP address and connected to the network I could access the console from anywhere on my control network with my web browser (and Java). Not only that, but all of the health monitors (and then some) are also available. It was well worth the few extra dollars it cost. I doubt I’ll ever install another server without one.
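Since the DRAC also speaks IPMI over the LAN, routine health checks can be scripted instead of clicked through. Here is a rough sketch that shells out to the standard ipmitool utility from Python – the address and credentials are placeholders, and IPMI-over-LAN has to be enabled on the DRAC first:

    # Query chassis power state and temperature sensors through the
    # DRAC's IPMI-over-LAN interface. Host, user, and password are made up.
    import subprocess

    IPMI = ["ipmitool", "-I", "lanplus",
            "-H", "192.168.1.50", "-U", "root", "-P", "secret"]

    def ipmi(*args):
        """Run one ipmitool command against the DRAC and return its output."""
        result = subprocess.run(IPMI + list(args),
                                capture_output=True, text=True, check=True)
        return result.stdout

    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
    print(ipmi("sdr", "type", "Temperature"))   # temperature sensor readings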

Back in the day we needed to physically lay hands on servers to restart them, and we had to use special software and hardware gadgets to diagnose power or temperature problems – uphill, both ways, barefoot, in the snow!! But I digress…

(Mad Rack After)

After that I installed ESXi, pulled out the disk, and closed the door. I was able to perform the rest of the setup from my desk:

  • Configured the ESXi password, control network parameters, etc.
  • Downloaded and installed the vSphere client.
  • Connected to the ESXi host and installed the license key.
  • Set up the first VM to run Ubuntu 9.10 with multiple CPUs (a sketch of that step follows this list).
  • … and so on
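Here is roughly what that first-VM step looks like if you script it rather than click through the vSphere client – again a minimal pyVmomi sketch, reusing the connection ("si") from the earlier snippet; the datastore and inventory names are assumptions about your setup:

    # Create a bare two-vCPU Ubuntu VM on the ESXi host. Assumes "si" is
    # the connection from the earlier snippet; bracketed names are made up.
    from pyVmomi import vim

    content = si.RetrieveContent()
    datacenter = content.rootFolder.childEntity[0]            # first datacenter
    vm_folder = datacenter.vmFolder
    pool = datacenter.hostFolder.childEntity[0].resourcePool  # host's root pool

    spec = vim.vm.ConfigSpec(
        name="ubuntu-9.10-test",
        guestId="ubuntu64Guest",   # Ubuntu 64-bit guest type
        numCPUs=2,                 # multiple CPUs, as noted above
        memoryMB=2048,
        files=vim.vm.FileInfo(vmPathName="[datastore1] ubuntu-9.10-test"),
    )

    # Creates the VM shell; disks and NICs would be added as device specs.
    task = vm_folder.CreateVM_Task(config=spec, pool=pool)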

The server has now been alive and doing real work for a few days and continues to run smoothly. In fact I’ve not had to go back into that room since except to look at the blinking lights (a perk).

Dec 312009
 
Sniffy New Year 2010

A New day, A New year, A New decade, Another chance to make things better… To do something good in a sustainable way so that we can build on it and make a lasting difference.

One of the things I do is develop technology for filtering out bad email (spam, scams, viruses, “malware”). The goal is to protect people from the predators out there and to help make sure the Internet has a chance to achieve its potential for good.

Of course, doing that means that my team and I spend a lot of time wading through the worst stuff on the ‘Net. Honestly, sometimes I really hate that job – wallowing in humanity’s filth for hours on end can really bum you out.

What started as a nuisance has grown into something much more sinister. Today spam and other malware are produced largely by organized crime. Their “business” is well funded and sophisticated, and it ranges from presenting you with uninvited advertisements to hacking your computer, money laundering, identity theft and fraud, all the way to human trafficking, cyber warfare, and terrorism.

I invite you to view this TED talk on the intricate economics of terrorism:

http://www.ted.com/talks/loretta_napoleoni_the_intricate_economics_of_terrorism.html

As a result of this phenomenon everyone who provides services on the Internet must now spend a significant amount of money and effort to protect themselves and their customers. It has become a necessity.

It’s very depressing. I know I would like to spend that energy doing more positive work – not just holding back the darkness.

I don’t let that stuff keep me down, but thoughts like that float around in my brain with all of the others looking for ways to connect. Sometimes they connect in surprising ways and call me to start out in new directions.

The other day I was pondering all of this while shopping for a gift for my brother. He enjoys camping and reading, and this year in particular he’s become interested in outdoor survival books (Man vs. Wild kinds of stuff). I had picked up a book about surviving on K2 and was looking for something to add when I wandered into the camping aisle and came face to face with a sleeping bag…

This wasn’t what I was looking for but it struck a nerve. Just recently I had made a live recording for Evergreen Church where they were interviewing some folks from TOP (Teens Opposing Poverty). The stories these folks told about living (surviving) on the streets of DC had stuck with me. Evergreen Church teens regularly work with TOP and the church has been collecting sleeping bags to donate to TOP for their next trip into DC.

Teens Opposing Poverty

Just then it occurred to me that I had another opportunity to do something good. As Steve Jennings (Executive Director of TOP) puts it: “Sleeping bags are like gold to homeless people… The need for sleeping bags never goes away.”

For the month of January MicroNeil will donate a new sleeping bag to TOP for every new customer that subscribes to Message Sniffer.

This is a way we can convert some of the darkness generated by the blackhats into light (and warmth) and hopefully make a difference when it matters most. It’s very cold on the streets of DC in January – and this year we just had two feet of snow!

I’m also hopeful that this promotion will call more attention to TOP and efforts like it. TOP in particular is focused on engaging and connecting young people with homeless folks in a meaningful way – and on reconnecting the homeless with their community. These connections are in many ways more important than providing critical services and materials, because it’s the connections that translate into hope and opportunity.

http://www.teensopposingpoverty.org/