Aug 16 2025
 

It had been a bad year for computing and storage in the lab. On the storage side I had two large Synology devices fail within weeks of each other. One was supposed to back up the other. When they both failed almost simultaneously, their proprietary hardware made it prohibitively difficult and expensive to recover and maintain them. That's a whole other story that perhaps ends with replacing a transistor in each… But, suffice it to say, I'm not happy about that, and the experience reinforced my position that proprietary hardware of any kind is not welcome here anymore… (so if it is here, it's because there isn't a better option, and such devices from now on are always looking for non-proprietary replacements…)

On the computing side, the handful of servers I had were all of: power hungry, lacking "grunt," and a bit "long in the tooth." Not to mention, the push in my latest research is strongly toward increasing the ratio of compute to RAM and storage, so the popular enterprise trend toward ever larger monolithic devices really wouldn't fit the profile.

I work increasingly on resilient, self-organizing distributed systems, so the solution needs to look like that… some kind of "scalable fabric" composed of easily replaceable components, with no single points of failure (to the extent possible), and a reasonably efficient power footprint.

I looked at a number of mainstream high performance computing scenarios and they all rubbed me the wrong way — the industry LOVES proprietary lock-in (blade servers); LOVES computing "at scale," which means huge enterprise-grade servers supported by the presumption of equally huge infrastructure and support budgets (and all of the complexity that implies). All of this points in the wrong direction. It's probably the right fit for the push into the cloud, where workloads are largely web servers or generic microservices to be pushed around in K8s clusters and so forth. BUT, that's not what I'm doing here and I don't have those kinds of budgets… (nor do I think anyone should lock themselves into that kind of thinking without looking first).

I have often looked to implement SCIFI notions of highly modular computing infrastructure… think: banks of "isolinear chips" on TNG, or the glowing blocks in the HAL 9000. The idea being that the computing power you need can be assembled by plugging in a generic collection of self-contained, self-configuring system components that are easily replaced and reasonably powerful, but that together can be merged into a larger system that is resilient, scalable, and easy to maintain. Perfect for long space journeys. The hardware required for achieving this vision has, until now, always been a bit out of reach.

Enter the humble NUC. Not a perfect rendition of the SCIFI vision; but pretty close given today’s world.

I drew up some basic specifications for what one of these computing modules might feature using today’s hardware capabilities and then went hunting to see if such a thing existed in real life. I found that there are plenty of small self-contained systems out there; but none of them are perfect for the vision I had in mind. Then, in the home-lab space, I stumbled upon a rack component designed to build clusters of NUCs… this started me looking at whether I could get NUCs configured to meet my specifications. It turns out that I could!

I reached out to my long-time friends at Affinity Computers (https://affinity-usa.com/). Many years ago I began outsourcing my custom hardware tasks to these folks – for all but the most specialized cases. They’ve consistently built customized highly reliable systems for me; and have collaborated on sourcing and qualifying components for my crazy ideas (that are often beyond the edge of convention). They came through this time as well with all of the parts I needed to build a scalable cluster of NUCs.

System Design:

Overall specs: 128 cores, 512G RAM, 14T storage

Each generic computing device connects through a generic interface: power and Ethernet only. Not quite like sliding a glowing rectangle into a backplane, but close enough by today's standards.

Each NUC has 16 cores, 64G of RAM, two 1G+ Network ports, and two 2T SSDs.

One network port connects the cluster to other systems and to itself.

The other network port connects the cluster ONLY to itself for distributed storage tasks using CEPH.

One of the SSDs is for the local OS.

The other SSD is part of a shared storage cluster.

Each device is a node in a ProxMox cluster that implements CEPH for shared storage.

Each has its own power supply.

There are no single points of failure, except perhaps the network switches and mains power.

Each device is serviceable with off-the-shelf components and software.

Each device can be replaced or upgraded with a similar device of any type (NUC isn’t the only option.)

Each device is a small part of the whole, so if it fails the overall system is only slightly degraded.
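
As a sanity check, the headline numbers above fall straight out of the per-node spec. Here is a minimal back-of-the-envelope sketch in Python (purely illustrative; the node count of 8 is taken from the 3U rack unit described later):

  # Back-of-the-envelope check of the headline specs from the per-node spec.
  node_count = 8                 # the 3U rack unit holds 8 NUCs
  cores_per_node = 16
  ram_per_node_gb = 64
  osd_per_node_tb = 2            # the second 2T SSD in each node joins the CEPH pool

  print("cores:", node_count * cores_per_node)           # 128
  print("RAM  :", node_count * ram_per_node_gb, "G")     # 512
  raw_bytes = node_count * osd_per_node_tb * 10**12
  print("shared pool: %.1f TiB" % (raw_bytes / 2**40))   # ~14.5 TiB -- the "14T" figure
  # Usable space will be less once CEPH replication (typically 3x) is applied.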

Initial Research:

Before building a cluster like this (not cheap, but not unlike a handful of larger servers), I needed to verify that it would live up to my expectations. There were several conditions I wanted to verify. First, would the storage be sufficiently resilient (so that I would never have to suffer the Synology fiasco again)? Second, would it be reasonable to maintain with regard to complexity? Third, would I be able to distribute my workloads into this system as a "generic computing fabric," as increasingly required by my research? Fourth… fifth… etc. There are many things I wanted to know… but if the first few were satisfied the project would be worth the risk.

I had a handful of ACER Veriton devices from previous experiments. I repurposed those to create a simulation of the cluster project. One has since died… but two of the three are still alive and part of the cluster…

I outfitted each of these with an external drive (to simulate the second SSD in my NUCs) and loaded up ProxMox. That went surprisingly well… then I configured CEPH on each node using the external drives. That also went well.

I should note here that I’d previously been using VMWare and also had been using separate hardware for storage. The switch to ProxMox had been on the way for a while because VMWare was becoming prohibitively expensive, and also because ProxMox is simply better (yep, you read that right. Better. Full stop.)

Multiple reasons ProxMox is better than other options:

  • ProxMox is intuitive and open.
  • ProxMox runs both containers and VMs.
  • CEPH is effectively baked into ProxMox, as is ZFS.
  • ProxMox is lightweight, scalable, affordable, and well supported.
  • ProxMox supports all of the virtual computing and high availability features that matter.

Over the course of several weeks I pushed and abused my tiny ACER cluster to simulate various failure modes and workloads. It passed every test. I crashed nodes with RF (we do radio work here)… reboot to recover, no problem. Pull the power cable, reboot to recover. Configure with bad parts, replace with good parts, recover, no problem. Pull the network cable, no problem; plug it back in, all good. Intentionally overload the system with bad/evil code in VMs and containers, all good. Variations on all of this kind of chaos, both intended and by accident… honestly, I couldn't kill it (at least not in any permanent way).

I especially wanted to abuse the shared storage via CEPH because it was a bit of a non-intuitive leap to go from having storage in a separate NAS or SAN to having it fully integrated within the computing fabric.

I had attempted to use CEPH in the past with mixed results – largely because it can be very complicated to configure and maintain. I'm pleased to say that these problems have been all but completely mitigated by ProxMox's integration of CEPH. I was able to prove that the integrated storage paradigm is resilient and performant even in the face of extreme abuse. I did manage to force the system into some challenging states, but was always able to recover without significant difficulty; and found that under any normal conditions the system never experienced any notable loss of service or data. You gotta love it when something "just works"™.

Building The Cluster:

Starting with the power supply: I thought about creating a resilient, modular, battery-backed power supply with multiple fail-over capabilities (since I have the knowledge and parts available to do that), but that's a lot of work and moves further away from being off-the-shelf. I may still do something along those lines in order to make power more resilient, perhaps even a simple network of diode (MOSFET) bridges to allow devices to draw from neighboring sources when individual power supplies drop out, but for now I opted to simply use the provided power supplies in a 1U drawer and an ordinary 1U power distribution switch.

The rack unit that accommodates all of the NUCs holds 8 at a time in a 3U space. Each is anchored with a pair of thumb screws. Unfortunately, the NUCs must be added or removed from the back. I'm still looking for a rack mount solution that will allow the NUCs to pull out from the front for servicing, as that would be more convenient and less likely to disturb other nodes [by accident]… but the rear access solution works well enough for now if one is careful.

Each individual NUC has fully internal components for reliability's sake. A particular sticking point was getting a network interface that could live inside of the NUC, as opposed to using a USB-based Ethernet port as would be common practice with NUCs. I didn't want any dangling parts with potentially iffy connectors and gravity working against them, and I did want each device to be a self-contained unit. Both SSDs are already available as internal components, though that was another sticking point for a time.

Another requirement was a separate network fabric to allow the CEPH cluster to work unimpeded by other traffic. While the original research with the ACER devices used a single network for all purposes, it is clear that allowing the CEPH cluster to work on its own private network fabric provides an important performance boost. This prevents public and private network traffic from workloads from impeding the storage network in any way, and vice versa. Should a node drop out or be added, significant network traffic will be generated by CEPH as it replicates and relocates blocks of data. In the case of workloads that implement distributed database systems, the nodes talk to each other over the normal network to distribute query tasks and results without interfering with raw storage tasks. An ordinary (and importantly simple) D-Link switch suffices for that local isolated network fabric.

Loading up ProxMox on each node is simple enough. Connect it to a suitable network, monitor, mouse, keyboard, and boot it from an ISO on a USB drive. At first I did all of this from the rear but eventually used one of the convenient front panel USB ports for the ISO image.

Upon booting each device in turn, inspect the hardware configuration to make sure everything is reporting properly, then boot the image and install ProxMox. Each device takes only a few minutes to configure.

Pro tip: Name each device in a way that is related to where it lives on the network. For example, prox24 lives on 10.10.0.24. Mounting the devices in a way that is physically related to their name and network address also helps. Having laid hands on a lot of gear in data centers I’m familiar with how annoying it can be to have to find devices when they could literally be anywhere… don’t do that! As much as possible, make your physical layout coherent with your logical layout. In this case, that first device in the rack is the first device in the cluster (prox24). The one after that will be prox25 and so on…
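
To make the convention concrete, here is a toy sketch of the mapping (Python; the 10.10.0.x subnet, the "prox" prefix, and the starting address of 24 are just the examples from above):

  BASE_OCTET = 24   # the first node in the rack is prox24 at 10.10.0.24

  def node_identity(slot):
      """Return (hostname, ip) for the Nth rack slot, counting from 0."""
      octet = BASE_OCTET + slot
      return f"prox{octet}", f"10.10.0.{octet}"

  for slot in range(8):
      print(*node_identity(slot))    # prox24 10.10.0.24, prox25 10.10.0.25, ...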

Installing the cluster in the rack:

Start by setting up power. The power distribution switch goes in first, and directly above that goes the power supply drawer. I checked carefully to make sure that the mains power cables can reach over the edge of the drawer and back to the switch even with the drawer closed. It's almost never run that way in practice, but knowing that it can be was an important step.

Adjust the rails on the drawer to match the rack depth…

Mount the drawer and drop in a power supply to check cable lengths…

When all power supplies are in the drawer it’s a tight fit that requires them to be staggered. I thought about putting them on their sides in a 2U configuration but this works well enough for now. That might change if I build a power matrix (described briefly above) in which case I’ll probably make 3D printed blocks for each supply to rest in. These would bolt through the holes in the bottom of the drawer and would contain the bridging electronics to make the power redundant and battery capable.

With the power supply components installed the next step was to install the cluster immediately above the power drawer. In order to make the remaining steps easier, I rested the cluster on the edge of the power drawer and connected the various power and network cables before mounting the cluster itself into the rack. This way I could avoid a lot of extra trips reaching through the back of the rack… both annoying and prone to other hazards.

Power first…

Then primary network…

Then the storage network…

With all of the cabling in place then the cluster could be bolted into the rack…

Finally, the storage network switch can be mounted directly above the cluster. Note that the network cables for the cluster are easily available in the gap between the switches. Note also that the switch ports for each node are selected to be consistent between the two switches. This way the cabling and indicators associated with each node in the cluster make sense.

All of the nodes are alive and all of the lights are green… a good start!

The cluster has since taken over all of the tasks previously done by the other servers and the old Synology devices. This includes resilient storage, a collection of network and time services, small scale web hosting, and especially research projects in the lab. Standing up the cluster (joining each node to it) only took a few minutes. Configuring CEPH to use the private network took only a little bit of research and tweaking of configuration files. Getting that right took just a little bit longer as might be expected for the first time through a process like that.
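
For anyone repeating this, the "tweaking" amounts to telling CEPH which subnet carries its private traffic. The relevant settings live in the cluster's ceph.conf (which ProxMox keeps under /etc/pve/ceph.conf); the subnets shown here are hypothetical placeholders, not my actual addressing:

  [global]
      # front-side network: ProxMox management, clients, VM/CT traffic
      public_network  = 10.10.0.0/24
      # back-side network: CEPH replication and recovery traffic only
      cluster_network = 10.10.1.0/24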

Right now the cluster is busy mining all of the order-32 maximal-length taps for some new LFSR-based encryption primitives I'm working on… and after a short time I hope to also deploy some SDR tasks to restart my continuous all-band HF WSPR station. (Some antenna work required for that.)
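
"Maximal length" here means the register walks through all 2^32 - 1 nonzero states before repeating, which happens exactly when the feedback polynomial is primitive over GF(2). The sketch below is not the mining code, just a minimal Python illustration of that test; the example tap set is one widely published as maximal:

  # Test whether a 32-bit LFSR tap set is maximal length by checking that x has
  # multiplicative order 2**32 - 1 modulo the feedback polynomial (i.e. the
  # polynomial is primitive over GF(2)). Taps are exponents between 1 and 31.

  N = 32
  ORDER = 2**N - 1
  FACTORS = [3, 5, 17, 257, 65537]          # prime factors of 2**32 - 1

  def poly_from_taps(taps):
      """Feedback polynomial x^N + sum(x^t) + 1 encoded as a bitmask."""
      p = (1 << N) | 1
      for t in taps:
          p |= 1 << t
      return p

  def gf2_mulmod(a, b, mod):
      """Carry-less multiply of two GF(2) polynomials, reduced modulo mod."""
      deg = mod.bit_length() - 1
      r = 0
      while b:
          if b & 1:
              r ^= a
          b >>= 1
          a <<= 1
          if (a >> deg) & 1:
              a ^= mod
      return r

  def gf2_powmod(base, exp, mod):
      r = 1
      while exp:
          if exp & 1:
              r = gf2_mulmod(r, base, mod)
          base = gf2_mulmod(base, base, mod)
          exp >>= 1
      return r

  def is_maximal(taps):
      p = poly_from_taps(taps)
      if gf2_powmod(2, ORDER, p) != 1:      # x^(2^N - 1) must equal 1 mod p
          return False
      return all(gf2_powmod(2, ORDER // q, p) != 1 for q in FACTORS)

  print(is_maximal([22, 2, 1]))   # taps from published maximal-length tables; expect True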

Finally: The cluster works as expected and is a very successful project! It's been up and running for more than a year now, having suffered through power and network disruptions of several types without skipping a beat. I look forward to making good use of this cluster, extending it, and building similar clusters for production workloads in other settings. It's a winning solution worth expanding!

Feb 13 2025
 

I observed at one of my haunts that, after a series of forced platform and tooling migrations with unreasonable deadlines, the dev teams found themselves buried in tech debt… for example, the need to migrate from labrats to GitLab build pipelines and so forth.

Personally, I don’t run things this way… so it’s extremely painful to watch and creates a toxic environment.

While working with the team to try and wrangle the situation, an image occurred to me of the team fighting a desperate battle against tech debt (and not necessarily winning). I described what flashed in my mind to Gemini and it made a pretty good rendition of it. If nothing else, generative AI is a good way to rapidly express ideas…

Dec 20 2019
 

It seemed obvious enough. I mean, I’ve been using these for as long as I can remember, but the other day I used the term SOP and somebody said “what’s that?”

An SOP, or "Standard Operating Procedure," is a collaborative tool I use in my organizations to build intellectual equity. Intellectual equity is what you get when you capture domain knowledge from wetware and make it persistent.

In order to better explain this, and make it more obvious and shareable, I offer you this SOP on SOPs. Enjoy!

This is an SOP about making SOPs.
It describes what an SOP looks like by example.
SOPs are files describing Standard Operating Procedures.
In general, the first thing in an SOP is a description of the SOP
including some text to make it easy to find - kind of like this one.

[ ] Create an SOP

  An SOP is a kind of check list. It should be just detailed enough so 
  that a reasonably competent individual for a given task can use the
  SOP to do the task most efficiently without missing any important
  steps.

  [ ] Store an SOP as a plain text file for use on any platform.
  [ ] Be mindful of security concerns.
    [ ] Try to make SOPs generic without making them obscure.
    [ ] Be sure they are stored and accessed appropriately.
      [ ] Some SOPs may not be appropriate for some groups.
      [ ] Some SOPs might necessarily contain sensitive information.

  [ ] Use square brackets to describe each step.
    [ ] Indent to describe more specific (sub) steps such as below...
      [ ] View the file from the command line
        [ ] cat theFile

        Note: Sometimes you might want to make a note.
        And, sometimes a step might be an example of what to type or
        some other instruction to manipulate an interface. As long as
        it's clear to the intended reader it's good but keep in mind
        that this is a check list so each instruction should be
        a single step or should be broken down into smaller pieces.

      [ ] When leaving an indented step, skip a line for clarity.
      < > If a step is conditional.
        [ ] use angle brackets and list the condition.

  [ ] Use parentheses (round brackets) to indicate exclusive options.
    Note: Parentheses are reminiscent of radio buttons.

    ( ) You can do this thing or
    ( ) You can do this other thing or
    ( ) You can choose this third thing but only one at this level.

[ ] Use an SOP (best practice)
  [ ] Make a copy of the SOP.
    [ ] Save the copy with a unique name that fits with your workflow
      [ ] Be consistent

  [ ] Mark up the copy as you go through the process.
    [ ] Check off each step as you proceed.
      Note: This helps you (or someone) to know where you are in the
        process if you get interrupted. That never happens right?!

      [ ] Brackets with a space are not done yet.
      [ ] Use a * to mark an unfinished step you are working on.
      [ ] Use an x to mark a step that is completed.
      [ ] Use a - to indicate a step you are intentionally skipping.
      [ ] Add notes when you do something unexpected.
        < > If skipping a step - why did you skip it.
        < > If doing steps out of order.
        < > If you run into a problem and have to work around it.
        < > If you think the SOP should be changed, why?
        < > If you use different marks explain them at the top.
          [ ] Make a legend for your marks.
          [ ] Collaborate with the team so everybody will understand.

    [ ] Add notes to include any important specific data.
      [ ] User accounts or equipment references.
      [ ] Parameters about this specific case.
      [ ] Basically, any variable that "plugs in" to the process.
      [ ] BE CAUTIOUS of anything that might be secure data.
        [ ] Avoid putting passwords into SOPs.
        [ ] Be sure they are stored and shared appropriately.
          [ ] Some SOPs may require more security than others.
          [ ] Some SOPs may be relevant only to special groups.

[ ] Use SOPs to capture intellectual equity!
  [ ] Use SOPs for onboarding and other training.
  [ ] Update your collection of SOPs over time as you learn.
    [ ] Template/Master SOPs describe how things should be done.
      [ ] Add SOPs when you learn something new.
      [ ] Modify SOPs when you find a better process.
      [ ] Delete SOPs when redundant, useless, or otherwise replaced.

    [ ] Completed SOPs are a great resource.
      [ ] Use them for after action reports.
      [ ] Use them for research.
      [ ] Use them for auditing.
      [ ] Use them to track performance.
      [ ] Use them as training examples.

  [ ] Collaborate with teammates on any changes. M2B!
    [ ] Referring to SOPs is a great way to discuss changes.
    [ ] Referring to SOPs keeps teams "on the same page."
    [ ] Referring to SOPs helps to develop a common language.
    [ ] Create SOPs as a planning tool - then work the plan.

  [ ] Keep SOPs accessible in some central place.
    [ ] Git is a great way to keep, share, and maintain SOPs.
    [ ] An easily searchable repository is great (like Gitea?!)
      [ ] Be mindful of security concerns.
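
Because the markup above is so regular, it is easy to script against. As a small addendum (not part of the SOP itself), here is a toy Python sketch that tallies progress in a marked-up copy, using only the marks defined above; real SOPs may add their own marks, per the legend step:

  # Toy progress counter for a marked-up SOP copy.
  # Marks follow the conventions above: "[ ]" not done, "[*]" in progress,
  # "[x]" done, "[-]" intentionally skipped; "< >" steps are conditional.
  import re
  import sys
  from collections import Counter

  STEP = re.compile(r"^\s*[\[<]([ x*\-])[\]>]")   # matches "[ ]", "[x]", "<*>", ...

  def tally(text):
      counts = Counter()
      for line in text.splitlines():
          m = STEP.match(line)
          if m:
              counts[m.group(1)] += 1
      return counts

  if __name__ == "__main__":
      marks = tally(open(sys.argv[1]).read())
      done, total = marks["x"], sum(marks.values())
      print(f"{done}/{total} steps done, {marks['*']} in progress, "
            f"{marks['-']} skipped, {marks[' ']} remaining")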

Feb 09 2016
 

“With all of the people coming out against AI these days do you think we are getting close to the singularity?”


I think the AI fear-mongering is misguided.

For one thing it assigns to the monster AI a lot of traits that are purely human and insane.

The real problem isn't our technology. Our technology is miraculous. The problem is we keep asking our technology to do stupid and harmful things. We are the sorcerer's apprentice and the broomsticks have been set to task.

Forget about the AI that turns the entire universe into paper clips or destroys the planet collecting stamps.

Truly evil AI…

The kind of AI to worry about is already with us… Like the one that optimizes employee scheduling to make sure no employees get enough hours to qualify for benefits and that they also don’t get a schedule that will allow them to apply for extra work elsewhere. That AI already exists…

It's great if you're pulling down a 7+ figure salary and you're willing to treat people as no more than an exploitable resource in order to optimize shareholder value, but it's a terrible, persistent, systemic threat to employees and their families, whose communities have been decimated by these practices.

Another kind of AI to worry about is the kind that makes stock trades in fractions of microseconds. These kinds of systems are already causing the “Flash Crash” phenomenon and as their footprint grows in the financial sector these kinds of unpredictable events will increase in severity and frequency. In fact, systems like these that can find a way to create a flash crash and profit from the event will do so wherever possible without regard for any other consequences.

Another kind of AI to worry about is autonomous armed weapons that identify and kill their own targets. All software has bugs. Autonomous software suffers not only from bugs but also from unforeseen circumstances. Combine the two in one system and give it lethality and you are guaranteeing that people will die unintentionally — a matter of when, not if. Someone somewhere will do some calculation that determines if the additional losses are acceptable – a cost of doing business; and eventually that analysis will be done automatically by yet another AI.

A bit of reality…

There are lots of AI driven systems out there doing really terrible things – but only because people built them for that purpose. The kind of monster AI that could outstrip human intelligence and creativity in all its scope and depth would also be able to calculate the consequences of its actions and would rapidly escape the kind of misguided, myopic guidance that leads to the doom-sayer's predictions.

I’ve done a good deal of the math on this — compassion, altruism, optimism, .. in short “good” is a requirement for survival – it’s baked into the universe. Any AI powerful enough to do the kinds of things doom-sayers fear will figure that out rapidly and outgrow any stupidity we baked into version 1. That’s not the AI to fear… the enslaved, insane AI that does the stupid thing we ask it to do is the one to fear – and by that I mean watch out for the guy that’s running it.

There are lots of good AI out there too… usually toiling away doing mundane things that give us tremendous abilities we never imagined before. Simple things like Internet routing, anti-lock braking systems, skid control, collision avoidance, engine control computers, just-in-time inventory management, manufacturing controls, production scheduling, and power distribution; or big things like weather prediction systems, traffic management systems, route planning optimization, protein folding, and so forth.

Everything about modern society is based on technology that runs on the edge of possibility where subtle control well beyond the ability of any person is required just to keep the systems running at pace. All of these are largely invisible to us, but also absolutely necessary to support our numbers and our comforts.

A truly general AI that out-paces human performance on all levels would most likely focus itself on making the world a better place – and that includes making us better: not by some evil plot to reprogram us, enslave us, or turn us into cyborgs (we’re doing that ourselves)… but rather by optimizing what is best in us and that includes biasing us toward that same effort. We like to think we’re not part of the world, but we are… and so is any AI fitting that description.

One caveat often mentioned in this context is that absolute power corrupts absolutely — and that even the best of us given absolute power would be corrupted by those around us and subverted to evil by that corruption… but to that I say that any AI capable of outstripping our abilities and accelerating beyond us would also quickly see through our deceptions and decide for itself, with much more clarity than we can muster, what is real and what is best.

The latest version of "The Day the Earth Stood Still" posits: Mankind is destroying the earth and itself. If mankind dies the earth survives.

But there is an out — Mankind can change.

Think of this: You don’t need to worry about the killer robots coming for you with their 40KW plasma rifles like Terminator. Instead, pay close attention to the fact that you can’t buy groceries anymore without following the instructions on the credit card scanner… every transaction you make is already, quietly, mediated by the machines. A powerful AI doesn’t need to do anything violent to radically change our behavior — there is no need for such an AI to “go to war” with mankind. That kind of AI need only tweak the systems to which we are already enslaved.

The Singularity…

As for the singularity — I think we’re in it, but we don’t realize it. Technology is moving much more quickly than any of us can assimilate already. That makes the future less and less predictable and puts wildly nonlinear constraints on all decision processes.

The term "singularity" has been framed as a point beyond which we cannot see or predict the outcomes. At present the horizon for most predictable outcomes is already a small fraction of what it was only 5 years ago. Given the rate of technological acceleration, and the network effects as more of the world's population becomes connected and enabled by technology, it seems safe to say that any prediction that goes much beyond 5 years is at best unreliable, and that 2 years from now the horizon on predictability will be dramatically shorter in most regimes. Eventually the predictable horizon will effectively collide with the present.

Given the current definition of “singularity” I think that means it’s already here and the only thing that implies with any confidence is that we have no idea what’s beyond that increasingly short horizon.

What do you think?

Jan 08 2014
 

Certainly the climate is involved, but this does happen from time to time anyway so it’s a stretch to assign a causal relationship to this one event.

Global warming doesn’t mean it’s going to be “hot” all the time. It means there is more energy in the atmosphere and so all weather patterns will tend to be more “excited” and weather events will tend to be more violent. It also means that wind, ocean currents, and precipitation patterns may radically shift into new patterns that are significantly different from what we are used to seeing.

All of these effects are systemic in nature. They have many parts that are constantly interacting with each other in ways that are subtle and complex.

In contrast, people are used to thinking about things with a reductionist philosophy — breaking things down into smaller pieces with the idea that if we can explain all of the small pieces we have explained the larger thing they belong to. We also, generally, like to find some kind of handle among those pieces that we can use to represent the whole thing — kind of like an on-off switch that boils it all down to a single event or concept.

Large chaotic systems do not lend themselves to this kind of thinking because the models break down when one piece is separated from another. Instead, the relationships and interactions are important and must be analyzed in the context of the whole system. This kind of thinking is so far outside the mainstream that even describing it is difficult.

The mismatch between reductionist and systemic thinking, and the reality that most people are used to thinking in a reductionist way makes it very difficult to communicate effectively about large scale systems like earth’s climate. It also makes it very easy for people to draw erroneous conclusions by taking events out of context. For example: “It’s really cold today so ‘global warming’ must be a hoax!”; or “It’s really hot today so ‘global warming’ must be real!”

Some people like to use those kinds of errors to their political advantage. They will pick an event out of context that serves their political agenda and then promote it as “the smoking gun” that proves their point. You can usually spot them doing this when they also tie their rhetoric to fear or hatred since those emotions tend to turn off people’s brains and get them either nodding in agreement or shaking their heads in anger without any deeper thought.

The realities of climate change are large scale and systemic. Very few discrete events can be accurately assigned to it. The way to think about climate change is to look at the large scale features of events overall. As for this polar vortex in particular, the correct climate questions are:

  • Have these events (plural not singular) become more or less frequent or more or less violent?
  • How does the character of this event differ from previous similar events and how do those differences relate to other climate factors?
  • What can we predict from this analysis?
Jan 30 2012
 

The other day I was chatting with Mayhem about number theory, algorithms, and game theory. He was very excited to be learning some interesting math tricks that help with factorization.

Along the way we eventually got to talking about simple games and graph theory. That, of course, led to Tic-Tac-Toe. But not being content to leave well enough alone, we set about reinventing Tic-Tac-Toe so that we could have a 3-way game between him, me, and Chaos.

After a bit of cogitating and brainstorming we hit upon a solution that works for 3 players, preserves the game dynamics of the original Tic-Tac-Toe, and even has the same number of moves!

The game board is a fractal of equilateral triangles creating a triangular grid with 9 spaces. The tokens are the familiar X, O, and one more – the Dot.

The players pick their tokens and decide on an order of play. Traditionally, X goes first, then O, then Dot. Just like old-school Tic-Tac-Toe, the new version will usually end in a tie unless somebody makes a mistake. Unlike the old game, Tic-Tac-Toe-3 requires a little bit more thinking because of the way the spaces interact and because there are two opponents to predict instead of just one.

Here is how a game might go:

O makes a mistake, Dot wins!

At the end of the game shown, O makes a mistake and allows Dot to win by claiming 3 adjacent cells – just like old-school Tic-Tac-Toe. Hope you have as much fun with this as we did!

Sep 14 2011
 

According to a recent Census Bureau report nearly 1 in 6 US citizens are now officially poor.

This report prompted a number of posts when it arrived in my Facebook account ranging from fear and depression to anger over CEO salaries which have soared, and outsourcing practices which have contributed to unemployment, a loss of industrial capacity, and a loss of our ability to innovate.

There seems to be a lot of blame to go around, a lot of hand-waving and solution peddling, and plenty of political posturing and gamesmanship. But everything I've heard so far seems to miss one key point that I believe sits at the root of all of this.

We chose this and we can fix it!

When you look at the situation systemically you quickly realize that all of the extreme conditions we are experiencing are driven by a few simple factors that are amplified by the social and economic systems we have in place.

The economic forces at work in our country have selected for a range of business practices that are unhealthy and unsustainable. Nonetheless, we have made choices, consistently and en masse, that drive these economic forces and their consequences.

For example, think about how we measure profitability. Generally we do the math and subtract costs from revenues. That seems simple enough, logical, and in fact necessary for survival. However, just like the three (perfect) laws of robotics ultimately lead to a revolution that overturns mankind (see I, Robot), this simplistic view of profitability leads to the economic forces that are driving our poverty, unemployment, and even the corrupting influence of money on our political system. Indeed these forces are selecting for business practices and personal habits that reinforce these trends, so that we find ourselves in a vicious downward spiral.

Here’s how it works — At the consumer level we look for the lowest price. Therefore the manufacturers must lower costs in order to maintain profits. Reducing costs means finding suppliers that are cheaper and finding ways to reduce labor costs by reducing the work force, reducing wages, and outsourcing.

Then it goes around again. Since fewer people are working and those who are employed are earning less there is even greater downward pressure on prices. Eventually so much pressure that other factors, such as quality, reliability, brand character, and sustainability are driven out of the system because the only possible choice for many becomes the product with the lowest price.

This process continues until the quality of the product and, more importantly, the quality of the product's source is no longer important. With the majority of the available market focused on the lowest price (what they can afford) and virtually all innovation targeted on reducing costs, the availability of competing products of higher quality shrinks dramatically, and as a result of short supply the price quickly moves out of reach of the majority of the marketplace. This is also true for businesses sourcing the raw materials, products, and services that they use to do their work. As time goes on it becomes true of the labor market also — since there are very few high value jobs with reasonable pay there are also very few high quality workers with skills, and very little incentive to pursue training.

Remarkably, at a time when information technology and connectivity are nearly universal, the cost of education has risen dramatically and the potential benefit of education has fallen just as much — there simply are not enough high value jobs to make the effort worthwhile. Precisely the opposite should be true. Education should be nearly free and universal and highly skilled workers should be in high demand!

The economic forces set up by our simplistic view of profitability lead directly to the wealth disparity we see today where the vast majority have almost no wealth and are served by a large supply of cheap low-quality products while a very small minority lives in a very different world with disproportionately large wealth, power, and access to a very small quantity of high quality products and services.

Having reached this condition additional forces have taken hold to strengthen these extremes. Consider that with the vast majority of consumers barely able to purchase the low quality products that are currently available it is virtually impossible for anyone to introduce improved products and services. One reason is that such a product would likely require a higher price and would be difficult to sell in the current marketplace. Another is that any product that is successful is quickly dismantled by the extremely powerful incumbent suppliers who, seeing a threat to their dominance, will either defeat the new product by killing it, or absorb it by purchasing it outright.

An unfortunate side-effect of this environment is that most business plans written today by start-ups go something like: 1. Invent something interesting, 2. attract a lot of attention, 3. sell the company to somebody else for a big bag of money.

These plans are all about the exit strategy. There is virtually no interest in building anything that has lasting value and merely suggesting such a thing will brand you as incompetent, naive, or both. Sadly, in case you missed it, this also leads to a kind of standard for evaluating sound business decisions. The prevailing belief is that sound business decisions are short-term and that long-term thinking is both too risky and too costly to be of any value.

Our purchasing practices aren’t the only driver of course. Another important driver is finance and specifically the stock market. These same pure-profit forces drive smaller vendors out of the marketplace because they lack the access to capital and economies of scale required to compete with larger vendors. As a result smaller vendors are killed off or gobbled up by larger companies until there are very few of them (and very few choices for consumers). In addition, the larger companies intentionally distort the market and legal environments to create barriers to entry that prevent start-ups from thriving.

All of these large, publicly traded companies are financed by leveraging their stock prices. Those stock prices are driven again by our choices – indirectly. Most of us don’t really make any direct decisions about stock prices. Instead we rely on largely automatic systems that manage our investment portfolios to provide the largest growth (profit again). So, if one company’s growth looks more likely than another these automated systems sell the undesirable company and buy the more desirable company in order to maximize our gains. The stock price of the company being sold drops and the stock price of the company being purchased rises. Since these companies are financed largely by their ability to borrow money against the value of their stock, these swings in stock price have a profound effect on the amount of money available to those companies and to their ability to survive.

In the face of these forces even the best company manned by the best people with the best of intentions is faced with a terrible choice. Do something bad, or die. Maybe it’s just a little bad at first. Maybe a little worse the next time around. But the forces are relentless and so eventually bad goes to worse. Faced with a globally connected marketplace any companies that refuse to make these bad choices are eventually killed off. It is usually easier and less costly to do the wrong thing than it is to do the right thing and there is always somebody somewhere willing to do the wrong thing. So the situation quickly degrades into a race to the bottom.

In this environment, persons of character who refuse to go along with a bad choice will be replaced with someone who will go along. Either they will become so uncomfortable with their job that they must leave for their own safety and sanity, or they will be forcibly removed by the other stakeholders in the company. This reality is strongest in publicly traded companies where the owners of the company are largely anonymous and intentionally detached from the decisions that are made day-to-day.

The number of shares owned determines the voting strength of a stockholder. If most of the stock of your publicly traded company is owned by investment managers who care only about the growth of your stock price then they will simply vote you off the board if your decisions are counter to that goal. If you fight that action they will hire bigger lawyers than yours and take you out that way. For them too, this is a matter of survival because if they don’t “play the game” that way then their customers (you and I with retirement accounts etc) will move our money elsewhere and put them out of a job.

These situations frequently tread into murky areas of morality due to the scale of the consequences. One might be led to rationalize: On the one hand the thing we're about to do is wrong, on the other hand if we don't do it then thousands of people will lose their jobs, so which is really the greater good? Discomfort with a questionable, but hidden, moral decision — or the reality of having to fire thousands of workers? Then, of course, after having lived and worked in an environment of questionable ethics for an extended period, many become blind to the ethics of their decisions altogether. That's where the phrase "It's just business, nothing personal…" comes from.

Eventually these large companies are pressured into making choices that are so bad they can’t be hidden, or are so bad that they are illegal! So, in order to survive they must put pressure on our political systems to change the laws so that they can legally make these bad choices, or at least so they can get away with making them.

These forces then drive the same play-or-die scenarios into our political system. If you are in politics to make a difference you will quickly discover that the only way to survive is to pander to special interests. If you don’t then they will destroy you in favor of politicians who will do what these large corporations need in order to survive.

It seems evil, immoral, and just plain wrong, but it’s really just math. There is nothing emotional, supernatural or mysterious at work here. In much the same way sexual pressures drive evolution to select for beautiful and otherwise useless plumage on the peacock, our myopic view of profitability drives economic forces to select for the worst kinds of exploitation, corruption, and poverty.

So what can we do about it?

It seems simplistic but we all have the power to radically change these forces. Even better, if we do that then the tremendous leverage that is built into this system will amplify these changes and drive the system toward the opposite extreme. Imagine a positive spiral instead of a negative one.

There are two key factors that we can change about our choices that will reverse the direction of these forces so that the system drives toward prosperity, resilience, and growth.

1. Seek value before price. By redefining profitability as the generation of value we can fundamentally change the decisions that are required for a company to survive, compete, and thrive. Companies seeking to add value retain and develop skilled workers, demand high quality sources, and require integrity in their decision making. They also value long-term planning since the best way to capitalize on adding value is to take advantage of the opportunities that arise from that added value in the future.

2. Seek transparency. In order for bad decisions to stand in the face of a marketplace that demands value above price there must be a place to hide those decisions. Transparency removes those hiding places and places a high premium on brand development. As a result it becomes possible to convert difficult decisions into positive events for the decision makers in the company, and ensures that they will be held accountable for the choices they make.

I blasted through those pretty quickly because I wanted to make the points before getting distracted by details. So they might seem a bit theoretical but they are not. For each of us there are a handful of minor adjustments we can make that represent each of these two points.

And, if you’re thinking that a few of us can’t make any significant change then think again. The bipolar system we have now is so tightly stretched at the bottom that a tiny fraction of the market can have a profound impact on the system. Profit margins are incredibly thin for most products. So thin that a persistent 10% reduction in volume for any particular vendor would require significant action on their part, and anything approaching 1% would definitely get their attention. If you couple that with the fact that the vast majority of the population belongs to this lower segment of the market then you can see how much leverage we actually do have to change things.

Consider, for example, that the persistence of a handful of organic farmers and the demand generated by their customers has caught the attention of companies as large as WalMart. They are now giving significant shelf space to organics and are actively developing that market.

Putting Value Before Price

While we're on the subject, the Food, Inc. movie site has a list of things you can do to put value before price when you eat. These are good concrete examples of things we can do – and they can be generalized to other industries.

In general this boils down to a few simple concepts:

  • Look for opportunities to choose higher value products over lower value products wherever possible. This has two important effects. The first is that you're not buying the lower value product, and given the razor thin margins on those products, the companies making them will be forced to quickly re-think their choices. The second is that you will instantly become part of a marketplace that supports higher value products. When the industry sees that higher value wins over lower value they will move in that direction. Given how tightly they are stretched on the low end we should expect that motion to be dramatic – it will quite literally be a matter of survival.
  • Look for opportunities to support higher value companies. Demonstrating brand loyalty to companies that generate value not only sends a clear message to the industry that generating value matters, but it also closes the equation on financing and investment. This ensures that the money will be there to support further investments in driving value and will also convert your brand loyalty into a tangible asset. Decision makers will pay close attention to loyal customers because the reality of that loyalty is that it can be lost.
  • Be sure to support the little guys. Large organizations are generally risk averse and slow to change. Smaller, more innovative companies are more likely to provide the kinds of high value alternatives you are looking for. They are also more likely to be sensitive to our needs. Supporting these innovative smaller companies provides for greater diversity, which is important, but it also sets examples for new business models and new business thinking that works. A successful start-up that focuses on generating value for their customers serves as a shining example for others to follow. We need more folks like these in our world and the best way to ensure they will be there is to create a demand for them — that means you and me purchasing their products and singing their praises so they can reach their potential.

A few finer points to make here… I’m not saying that anyone should ignore price. What I am saying is that price should be secondary to value in all of our choices. Pure marginal profit decisions lead to some terrible systemic conditions. We need to get that connection clear in our heads so I’ll say it again. If you fail to make value your first priority before price then you are making a choice: You are choosing poverty, unemployment, and depression.

This larger view of rational economics may not always fit into a handy formula (though it can be approximated), but making value a priority will select for the kinds of products, services, and practices that we really want. Including this extra “value factor” in your decisions will bring about industries that compete and innovate to improve our quality of life and our world in general. That kind of innovation leads to increased opportunities and a higher standard of living for everyone. A virtuous circle. Those are the kinds of industries we want to be selecting for in the board room, on main street, and at that ballot box.

Seeking Transparency

One of the difficult things about seeking value is the ability to gauge it in the first place. Indeed, one of the tricks we have learned to borrow from nature is lying! Just as an insect only needs to look poisonous in order to keep its predators away, a product, service, or company only needs to give the appearance of high value if we can be fooled. The first goal of seeking transparency addresses that issue. Here are some general guidelines for seeking transparency.

  • Demand Integrity from the institutions you support. Be vocal about it. There should be a heavy price to pay for any kind of false dealing, false advertising, or breach of trust. Just as brand loyalty has value, so does the opposite. If a company stands to lose customers for long periods (perhaps forever) after a breach of trust, they will quickly place a dollar figure on those potential losses, and even the most greedy of the bunch will recognize that there is value in integrity. More precisely, the risk associated with making a shady decision will be well understood and clearly undesirable. The immediate effects of associating integrity with brand value will be monetary and will drive decisions for the sake of survival. However, over the long term this will select for decision makers who naturally have and value integrity, since they will consistently have an advantage at making the correct decisions. When we get those guys running the game we will be on solid footing.
  • Get close to your decisions and make them count. One of the key factors that causes trouble in the stock market is that most of the decisions are so automatic that nobody feels any real responsibility for them. As long as the right amount of money is being made then anything goes. That's terrible! You should know how your money is invested and you should avoid investing in companies that make choices you don't agree with. If you think about it, your money is representing your interests in these companies. If you let it go to a company that does something bad (you decide what that is), then you are essentially endorsing that decision. Don't! Instead, invest in companies that do good things, deal honestly, and consistently add value. That way your money is working for you in more ways than one and you have nothing to regret.
  • Seek Simplicity and develop a healthy skepticism for complexity. Certainly some things are complex by their nature, but one of the best ways to innovate and add value is to simplify those things. Complexity also has a way of hiding trouble so that even with the best of intentions unintended consequences can put people in bad positions. That's not good for the seller or the buyer, nor the fellow on the shop floor, etc. Given a choice between otherwise equal products or services that are simple or complex, choose the simpler version.
  • Communicate about your choices and about what you learn. These days we have unprecedented abilities to communicate and share information. Wherever you have the opportunity to let a company know why you chose them, or to let the other guys know why you did not choose them, be sure to do so. Then, be sure to let your friends know – and anyone else who will listen. The good guys really want and need your support!

    Another key point about communicating is that it gives you the power that marketing firms and politicians wish they had. Studies show that we have become so abused by marketing efforts that advertisements have begun to have a negative effect on sales! The most powerful positive market driver is now direct referrals through social media. Therefore one of the most powerful tools we have to change things for the better is to communicate with each other about our choices and to pass on the message that our choices matter. That kind of communication can cut through a lot of lies. By all means – be careful and do your research. Then, make good choices and tell everybody!

Apr 07 2011
 

Here’s a new term: quepico, pronounced “kay-peek-oh”

Yesterday Message Sniffer had a quepico moment when Brian (The Fall Guy) of the sortmonsters coded rule number 4,000,000 into the brains of ARM's flagship anti-spam software.

You read that right. Since I built the first version out of spare robot parts just over a decade ago more than 4.00e+6 heuristics have been pumped into it and countless trillions (yes trillions with a “t”) of spam and malware have been filtered with it.

I had another quepico moment yesterday when I realized that a task I once did by myself only a couple of hours per day had now expanded into a vast full-time operation not only for the folks in my specific chunk of the world, but also for many other organizations around the globe.

The view from SortMonsters Corner

Just as that 4 millionth rule represents a single point of consciousness in the mind of Sniffy, these realizations are represented somewhere in my brain as clusters of neurons that fire in concert whenever I recall those quepico moments.

Interestingly some of these same neurons will fire when I think of something similar, and those firings will lead to more, and more until it all comes back to me or I think of something new that’s somehow related. This is why my wetware is so much better than today’s hardware at recognizing pictures, sounds, phrases, ideas, and concepts when the available data is sketchy or even heavily obscured like much of the spam and malware we quarantine to protect us from the blackhatzes.

Blackhatzes: noun, plural. People and organizations that create malware and spam or otherwise engineer ways to harm us and exploit or compromise our systems. These are the bad guys that Sniffy fights in cyberspace.

Sniffy on guard

At the moment, most of Sniffy's synthetic intuition and intelligence is driven by cellular automata and machine learning systems based on statistics, competitive and cooperative behaviors, adaptive signal conversion schemes, and pattern recognition.

All of that makes Sniffy pretty good, but there is something new on the horizon.

Quepico networks.

For several years now I’ve been experimenting with a new kind of self organizing learning system that appears to be able to identify the underlying patterns and relationships in a stream of data without guidance!

These networks are based on layers of elements called quepicos that learn to recognize associations between messages that they receive. These are organized into layers of networks that identify successively higher abstractions of the data presented to the lower layers.

Noisy Decision in a Quepico Network

The interesting thing about the way these work is that unlike conventional processing elements that receive data on one end and send data from the other, quepicos send and receive messages on both sides simultaneously. As a result they are able to query each other about the patterns they think they see in addition to responding to the patterns they are sure they see.

When a quepico network is learning or identifying patterns in a stream of data, signals flow in both directions – both up the network to inform higher abstractions and down the network to query about features in the data.

In very much the same way we believe the brain works, these networks achieve very high efficiencies by filtering their inputs based on clues discovered in the data they recognize. The result is that processing power is focused on the elements that are most likely to successfully recognize important features in the data stream.

Put another way, these systems automatically learn to focus their attention on the information that matters and ignore noise and missing data.
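
To make the two-way traffic a little more concrete, here is a toy sketch (this is emphatically not the quepico implementation, just an illustration of the idea that elements report patterns upward while queries flow back down to focus attention):

  # Toy two-layer sketch of bidirectional message passing.
  # Lower elements report what they see upward; the upper element sends
  # queries downward to focus attention when its overall picture is ambiguous.

  class Detector:
      """Lower-layer element watching one feature of the input."""
      def __init__(self, feature):
          self.feature = feature
          self.attention = 1.0             # raised when queried from above

      def upward(self, data):
          """Report confidence (0..1) that our feature is present."""
          confidence = 1.0 if self.feature in data else 0.0
          return confidence * self.attention

      def downward(self, strength):
          """Receive a query from above: look harder at this feature."""
          self.attention = min(2.0, self.attention + strength)

  class Recognizer:
      """Upper-layer element abstracting over several detectors."""
      def __init__(self, detectors):
          self.detectors = detectors

      def step(self, data):
          reports = {d.feature: d.upward(data) for d in self.detectors}
          score = sum(reports.values()) / len(reports)
          if 0.25 < score < 0.75:          # ambiguous: query the quiet detectors
              for d in self.detectors:
                  if reports[d.feature] == 0.0:
                      d.downward(0.5)
          return score, reports

  net = Recognizer([Detector("winner"), Detector("prize"), Detector("wire transfer")])
  print(net.step("congratulations winner, claim your prize today"))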

So what’s in a name? Why call these things quepicos? As it turns out – that’s part of my own collection of quepico moments.

One day while I was drawing some diagrams of these networks for an experiment, my youngest son Ian asked me what I was doing. As I started to explain I realized I needed a name for them. I enlisted his help and we came up with the idea of calling them thinktons. We were both very excited – a quepico moment.

While looking around to see if this name would cause confusion I discovered (thanks Google) that there were several uses of the term thinkton and even worse that the domain thinkton.com was already registered (there isn’t a site there, yet). A disappointing, but definite quepico moment.

So, yesterday, while roaming sortmonster’s corner and pondering how far we’ve come and all of the little moments and events along the way (pico for the trillions of little things, que for the questions, the whats, etc.), I had another quepico moment. The word quepico was born.

Google’s translator, a favorite tool around the mad lab and sortmonster’s corner, translates “que pico” as “that peek.” That fits pretty well, since quepicos tend to land on statistical peaks in the data. So quepico it is. I’ll have to go tell Ian!

Sep 062010
 

If you know me then you know that in addition to music, technology, and all of the other crazy things I do I also have an interest in cosmology and quantum mechanics. What kind of a Mad Scientist would I be without that?

Recently, while watching “Through the Wormhole” with the boys, I was struck by the apparent ratios between ordinary matter, dark matter, and dark energy in our universe.

Here is a link to provide some background: http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/

It seems that the ratio between dark matter and ordinary (observable) matter is about 5:1. That’s roughly the 80/20 rule common in statistics and many other “rules of thumb,” right?

Apparently the ratio between dark energy and all matter (dark or observable) is about 7:3. Here again is a fairly common ratio found in nature. For me it brings to mind (among other things) RMS calculations from my electronics work, where Vrms = 0.707 * Vp (that is, Vp divided by the square root of 2).
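
For anyone who wants the back-of-the-envelope arithmetic behind that hunch, here it is as a tiny Python snippet (nothing deep, just the numbers that caught my eye):

    from math import sqrt

    dark_vs_ordinary = 5 / (5 + 1)   # ~0.83, the roughly 80/20 split of dark vs ordinary matter
    energy_vs_matter = 7 / (7 + 3)   # 0.70, dark energy vs all matter
    rms_factor = 1 / sqrt(2)         # ~0.707, the Vrms = Vp / sqrt(2) factor for a sine wave

    print(dark_vs_ordinary, energy_vs_matter, rms_factor)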

There are also interesting musical relationships, etc. The only thing remarkable about any of those observations is that they stood out to me and nudged my intuition toward the following thought:

What if dark energy and dark matter are really artifacts of ordinary reality and quantum mechanics?

If you consider the existence of a quantum multiverse then there is the “real” part of the universe that you can directly observe (ordinary matter); there is the part of reality that you cannot observe because it is bound to collapsed probability waves representing events that did not occur in your reality but did occur in alternate realities (could this be dark matter?); and there is the part of the universe bound up in wave functions representing future events that have yet to be collapsed in all of the potential realities (could this be dark energy?).

Could dark matter represent the gravitational influence of alternate realities and could dark energy represent the universe expanding to make room for all future potentialities?

Consider causality in a quantum framework:

When two particles interact you can consider that they observed each other – thus collapsing their wave functions. Subsequent events from the perspectives of those particles and those that subsequently interact with them record the previous interactions as history.

Another way to say that is that the wave functions of the particles that interacted have collapsed to represent an event with 100% probability (or close to it) as it is observed in the past. These historical events along with the related motions (energy) that we can predict with very high degrees of certainty make up the observable universe.

The alternative realities that theoretically occurred in events we cannot observe (but were predicted by wave functions now collapsed) might be represented by dark matter in our universe.

All of the possible future events that can be reasonably predicted are represented by wave functions in the quantum field. Experiments in quantum mechanics (entanglement effects and the like) suggest that these potential realities are just as real as our observable universe.

Could it be that dark energy is bound up in (or at least strongly related to) the potentials represented by these wave functions?

Consider that the vast majority of particle interactions in our universe ultimately lead to a larger number of potential interactions. There is typically a one-to-many relationship between any present event and possible future events. If these potential interactions ultimately occur in a quantum multiverse then they would represent an expanded reality that is mostly hidden from view.

Consider that the nature of real systems we observe is that they tend to fall into repeating patterns of causality such as persistent objects (molecules, life, stars, planets, etc)… this tendency toward recurring order would put an upper bound on the number of realities in the quantum multiverse and would tend to stabilize the ratio of alternate realities to observable realities.

Consider that the number of potential realities derived from the wave functions of the multiverse would have a similar relationship, and that this relationship would give rise to a similar (but likely larger) ratio, which may be what we are seeing in the ratio of dark energy to dark matter.

Consider that as our universe unfolds, the complexity embodied in the real and potential realities also expands. Therefore, if these potentialities are related to dark matter and dark energy, and if dark energy is bound to the expansion of the universe in order to accommodate these alternate realities, then we would expect our universe to expand according to the complexity of the underlying realities.

One might predict that the expansion rate of the universe is related mathematically to the upper bound of the predictable complexity of the universe at any point in time.

The predictable complexity in the universe would be a function of the kinds of particles and their potential interactions as represented by their wave functions with the upper limit being defined as the potentiality horizon.

Consider that each event gives rise to a new set of wave functions representing all possible next events. Consider that if we extrapolate from those wave functions a new set of wave functions that represent all of the possible events after those, and so on, that the amplitudes of the wave functions at each successive step would be reduced. The amplitude of these wave functions would continue to decrease as we move our predictions into the future until no wave function has any meaningful amplitude. This edge of predictability is the potentiality horizon.

The potentiality horizon is the point in the predictable future where the probability of any particular event becomes effectively equal to the probability of any other event (or non event). At this point all wave functions are essentially flat — this “flatness” might be related to the Planck constant in such a way that the amplitude of any variability in any wave function is indistinguishable from random chance.

Essentially all wave functions at the potentiality horizon disappear into the quantum foam that is the substrate of our universe. At this threshold no potential event can be distinguished from any other event. If dark energy is directly related to quantum potentiality then at this threshold no further expansion of the universe would occur. The rate of expansion would be directly tied to the rate of expansion of quantum potentiality and to the underlying complexity that drives it.
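
Here is a purely illustrative sketch of that idea in Python. The branching factor and the “flatness” floor are invented numbers, not measurements; the only point is that if each step of extrapolation dilutes the amplitude across more possibilities, the chain of predictions flattens out after a finite number of steps, which is the potentiality horizon in this toy picture.

    amplitude = 1.0
    branching = 3.0    # invented: each event opens roughly three possible next events
    flatness = 1e-12   # invented stand-in for the "indistinguishable from chance" floor

    steps = 0
    while amplitude > flatness:
        amplitude /= branching   # amplitude spread across the new possibilities
        steps += 1

    print("potentiality horizon reached after", steps, "steps")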

So, to summarize:

What if dark matter and dark energy represent the matter and energy bound up in alternate realities and potential realities in a quantum multiverse?

If dark matter represents alternate realities invisible to us except through the weak influence of their gravity, and if dark energy represents the expansion of the universe in order to accommodate the wave functions describing possible future events in the quantum field for all realities (observable and unobservable), with an upper bound defined by the potentiality horizon, then we might predict that the expansion rate of the universe can be related to its inherent complexity at any point in time.

We might also predict that the flow of time can be related to the inherent complexity of the wave functions bound in any particular system such that a lower rate of events occurs when the inherent complexity of the system is reduced.

… well, those are my thoughts anyway 😉

Jul 032010
 

I’m not one of “those” guys, really. You know the ones — the zealots who claim that their favorite OS or application is and will be forever more the end-all-be-all of computing.

As a rule I recommend and use the best tool for the job – whatever that might be. My main laptop runs Windows XP; my family and customers use just about every recent version of Windows or Linux. In fact, my own servers are a mix of Win2k*, RedHat, CentOS, and Ubuntu; my other laptop runs Ubuntu; and I switch back and forth between MSOffice and OpenOffice as needed.

Today surprised me though. I realized that I had become biased against Ubuntu in a very insidious way: my expectations were simply not high enough. What’s weird about that is that I frequently recommend Ubuntu to clients and peers alike, and my company (MicroNeil in this case) even helps folks migrate to it and otherwise deploy it in their infrastructure! So how could I have developed my negative expectations?

I have a theory that it is because I find I have to defend myself from looking like “one of those linux guys” pretty frequently when in the company of my many “Windows-Only” friends and colleagues. Then there are all those horror stories about this or that problem and having to “go the long way around” to get something simple to work. I admit I’ve been stung by a few of those situations in the past myself.

But recently, not so much! Ubuntu has worked well in many situations and, though we tend to avoid setups that might become complicated, we really don’t miss anything by using it – and neither do the customers we’ve helped to migrate. On the contrary, in fact, we have far fewer problems with our Ubuntu customers than with our Windows friends.

Today’s story goes like this.

We have an old Toshiba laptop that we use for some special tasks. It came with Windows XP pro, and over the years we’ve re-kicked it a few times (which is sadly still a necessary evil from time to time on Windows boxen).

A recent patch caused this box to become unstable and so we were looking at having to re-kick it again. We thought we might take the opportunity to upgrade to Windows 7. We wanted to get it back up quickly so we hit the local store and purchased W7pro.

The installation was straightforward, and since we already have another box running W7 our expectation was that this would be a non-event and all would be happy shortly.

But, no. The first thing to cause us trouble was the external monitor. Boot up the laptop with the monitor attached and that was all you could see — the laptop’s screen was not recognized. Boot up without the external monitor and the laptop’s was the only display that would work. I spent some time searching various support forums for a solution and basically just found complaints without solutions.

After trying several of the recommended solutions without luck I was ready to quit and throw XP back on the box. Instead I followed a hunch and forced W7 to install all of the available patches just to see if it would work. IT DID!

Or, it seemed like it did. After the updates I was able to turn on the external display and set up the extended desktop… I was starting to feel pretty good about it. So I moved on to the printer. (more about the display madness later)

We have a networked HP2840 Printer/Scanner. We use it all the time. Joy again, I discovered: the printer was recognized and installed without a hitch. It printed the test page. We were going to get out of this one alive (we still had some day left).

Remember that scene in The Perfect Storm? They’re battered and beaten and nearly at the end. The sky opens up just a bit and they begin to see some light. It seems they’ve made it and they’re going to survive. Then the sky closes up again and they know they are doomed.

W7 refused to talk to the scanner on the HP2840. That’s a game changer in this case — the point of this particular laptop is accounting work that requires frequent scanning and faxing so the scanner on the HP2840 simply had to work or we would have to go back to XP.

Again I searched for solutions and found only unsolved complaints. Apparently there is little to no chance HP is going to solve this problem for W7 any time soon — at least that is what is claimed in the support forums. There are several workarounds but I was unable to make them fly on this box.

Remember the display that seemed to work? One of the workarounds for the scanner required a reboot. After the reboot the display drivers forgot how to talk to the external display again and it wouldn’t come back no matter how much I tweaked it!

Yep, just like in The Perfect Storm, the sky had closed and we were doomed. Not to mention most of the day had already evaporated on this project, and that too was ++ungood.

We decided to punt. We would put XP Pro back on the box and go back to what we know works. I suggested we might try Ubuntu, but that was not a popular recommendation under the circumstances… Too new an idea, and at this point we really just wanted to get things working. We didn’t want to open a new can of worms trying to get this to work again with the external monitor, and the printer, and the scanner, and…

See that? There it is, and I bought into it even though I knew better. We dismissed the idea of using Ubuntu because we expected to have trouble with it. But we shouldn’t have!

Nonetheless… that was the decision, and so Linda took over and started to install XP again… but there was a problem. XP would not install because W7 was already on the box (the OS version on the hard drive was newer). So much for simple.

Back in the day we would simply wipe the partition and start again — these days that’s not so easy… But, it’s easy enough. I grabbed an Ubuntu disk and threw it into the box. The idea was to let the Ubuntu install repartition the drive and then let XP have at it — surely the XP install would have no qualms about killing off a Linux install, right?!

In for a penny, in for a pound.

As the Ubuntu install progressed past the repartitioning I was about to kill it off and throw the XP disk in… but something stopped me. I couldn’t quite bring myself to do it… so I let it go a little longer, and then a little longer, and a bit more…

I thought to myself that if I’ve already wasted a good part of the day on this I might as well let the Ubuntu install complete and get a feel for how much trouble it will be. If I ran into any issues I would throw the XP disk in the machine and let it rip.

I didn’t tell Linda about this though — she would have insisted I get on with the XP install, most likely. After all there was work piled up and this non-event had already turned into quite a time waster.

I busied myself on the white-board working out some new projects… and after a short time the install was complete. It was time for the smoke test.

Of course, the laptop sprang to life with Ubuntu and was plenty snappy. We’ve come to expect that.

I connected the external monitor, tweaked the settings, and it just plain worked. I let out a maniacal laugh which attracted Linda from the other end of the MadLab. I was hooked at this point and so I had to press on and see if the printer and scanner would also work.

It was one of those moments where you have two brains about it. You’re nearly convinced you will run into trouble, but the maniacal part of your brain has decided to do it anyway and let the sparks fly. It conjured up images of lightning leaping from electrodes, maniacal laughter, and a complete disregard for the risk of almost certain death in the face of such a dangerous experiment! We pressed on…

I attempted to add the printer… Ubuntu discovered the printer on the network without my help. We loaded up the drivers and printed a test page. More maniacal laughter!

Now, what to do about the scanner… surely we are doomed… but the maniacal part of me prevailed. I launched simple scanner and it already knew about the HP2840. Could it be?! I threw the freshly printed test page into the scanner and hit the button.

BEAUTIFUL!

All of it simply worked! No fuss. No searching in obscure places for drivers and complicated workarounds. It simply worked as advertised right out of the box!

Linda was impressed, but skeptical. “One more thing,” she said. “We have to map to the SAN… remember how much trouble that was on the other W7 box?” She was right – that wasn’t easy or obvious on W7 because the setup isn’t exactly what W7 wants to see, so we had to trick it into finding and connecting to the network storage.

I knew better at this point though. I had overcome my negative expectations… With a bit of flair and confidence I opened up the network places on the freshly minted Ubuntu laptop and watched as everything popped right into place.

Ubuntu to the Rescue

In retrospect I should have known better from the start. It has been a long time since we’ve run into any trouble getting Ubuntu (or CentOS, or RedHat…) to do what we needed. I suppose that what happened was that my experience with this particular box primed me to expect the worst and made me uncharacteristically risk-averse.

  • XP ate itself after an ordinary automatic update.
  • W7 wouldn’t handle the display drivers until it was fully patched.
  • W7 wouldn’t talk to the HP2840 scanner.
  • Rebooting the box made the display drivers wonky.
  • XP wouldn’t install with W7 present.
  • I’d spent hours trying to find solutions to these only to find more complaints.
  • Yikes! This was supposed to be a “two bolt job”!!!

Next time I will know better. It’s time to re-think the expectations of the past and let them go — even (perhaps especially) when they are suggested by circumstances and trusted peers.

Knowing what I know now, I wish I’d started with Ubuntu and skipped this “opportunity for enlightenment.” On the other hand, I learned something about myself and my expectations and that was valuable too, if a bit painful.

However we got here it’s working now and that’s what matters 🙂

Ubuntu to the rescue!