Mar 20 2012

Artist: Julia Kasdorf and Pete McNeil
Album: Impromptu
Reviewed by Matthew Warnock
Rating: 4 Stars (out of 5)

Collaboration is the spark that has ignited some of the brightest musical fires in songwriting history.  When a core duo or group comes together with a number of guest artists, something special can happen, especially when the stars align and everything winds up in the right place at the right time.  Songwriters and performers Julia Kasdorf and Pete McNeil have recently come together on just such a record, which features the duo on every track alongside a rotating cast of accomplished guests.  The end result, Impromptu, is an engaging and enjoyable record that draws its cohesiveness from the duo’s contribution while moving in different and exciting directions as the guest musicians come and go throughout the album.

Though most of the album is a collaborative effort between McNeil, Kasdorf and guest artists, a couple of tracks feature just the duo, including “The Minute I’m Gone,” though one might not realize this without consulting the liner notes.  Kasdorf, a multi-instrumentalist, contributes the lyrics and music and performs lead and background vocals, guitar and bass, while McNeil handles the drum work on the track.  Not only is the song a sultry blues-rock number that grooves and grinds its way to being one of the most interesting songs on the album, but the duo do a seamless job of overdubbing each part so that it sounds like a band playing live in the studio rather than two musicians playing all the parts.  The same is true of the other duo track, “Motel,” though with a more laid-back and stripped-down approach.  Here, McNeil’s brushes set up Kasdorf’s vocals, bass and guitar in a subtle yet effective way, allowing the vocals to float over the accompaniment while interacting with it at the same time.  Recording this way is not easy, especially when trying to create the atmosphere of an ensemble in the studio, but Kasdorf and McNeil pull it off in a way that is both creative and engaging, and it is one of the reasons the album is so successful.

McNeil also steps to the forefront of several songs to take over the role of lead vocalist, including the Cream-inspired blues-rocker “Doldrums.”  Here, the drummer lays down a hard-driving groove, supported by Kasdorf on rhythm guitar and bass, while he digs deep into the bluesy vocal lines that define the track.  Guest lead guitarist Eric Nanz contributes a memorable solo and plenty of bluesy fills, bringing a wah-based tone that recalls the classic sounds of late-‘60s blues rockers such as Eric Clapton, Jeff Beck and Jimmy Page.  McNeil also takes the reins on “Kitties,” where he sings as well as plays drums and synth, with bassist John Wyatt filling in the bottom end.  With its psychedelic vibe, the song stands out against the rest of the album in a good way, adding variety to the overall programming while featuring the talented drummer-vocalist-pianist at the forefront of the track.

Overall, Impromptu is not only a cool concept, but an album that stands on its own musicality and songwriting regardless of the writing and recording process used to bring the project together.  All of the artists featured on the album, the core duo and guest artists alike, gel together in a way that serves the larger musical goals of the record, providing an enjoyable listening experience along the way.

The Impromptu CD is available at CDBaby, Amazon, iTunes, and everywhere you find great music!

 

Mar 16 2012

When I added the cube to the studio I was originally thinking that it would be just a handy practice amp for Chaos. He was starting to take his electric guitar work seriously and needed an amp he could occasionally drag to class.

Then the day came that one of our guitar friends showed up to record a session for ALT-230 and had forgotten their amp. So, instead of letting the time slot go to waste we decided to give the little cube a try. We figured that if what we got wasn’t usable we would re-amp the work later or run it through Guitar Rig on the DAW.

We were very pleasantly surprised! The tracks were so good they survived all the way through post production. Ever since then we’ve been hooked. We’ve been using the Cube regularly now any time we want to do some relatively clean electric guitar work and we’ve been getting consistently good results.

The normal setup for this involves a RØDE NT2-A paired with a Shure SM57, both set about 30 degrees off axis and about half a meter away, close together and in phase (diaphragms abeam). Sometimes we give them a little separation from each other if we want more “space” in the stereo image. Anything from 5 to 20 cm usually works ok.

Then for good measure we’ll run the guitar through a direct box on its way in, just in case we want to re-amp it later. This too has become a fairly standard procedure no matter what amp we’re using.

Usually when I’m setting up like this I will put the two mics on separate channels through the Furman monitor rig so I can hear them in the headphones separately and summed on demand. That makes it easy to move things around to fine tune mic positioning and any other tweaking that might be needed.

Today we had yet another successful session with the rugged, versatile little cube; and right after that Chaos plugged in to practice his guitar lessons. I couldn’t help but grin at being reminded how far this little practice amp had come. If you don’t have one yet you probably need it and don’t know it!

Jan 30 2012

The other day I was chatting with Mayhem about number theory, algorithms, and game theory. He was very excited to be learning some interesting math tricks that help with factorization.

Along the way we eventually got to talking about simple games and graph theory. That, of course, led to Tic-Tac-Toe. But, not being content to leave well enough alone, we set about reinventing Tic-Tac-Toe so that we could have a three-way game between Mayhem, Chaos, and me.

After a bit of cogitating and brainstorming we hit upon a solution that works for 3 players, preserves the game dynamics of the original Tic-Tac-Toe, and even has the same number of moves!

The game board is a fractal of equilateral triangles creating a triangular grid with 9 spaces. The tokens are the familiar X, O, and one more – the Dot.

The players pick their tokens and decide on an order of play. Traditionally, X goes first, then O, then Dot. Just like old-school Tic-Tac-Toe, the new version will usually end in a tie unless somebody makes a mistake. Unlike the old game, Tic-Tac-Toe-3 requires a little bit more thinking because of the way the spaces interact and because there are two opponents to predict instead of just one.

Here is how a game might go:

O makes a mistake, Dot wins!

At the end of the game shown, O makes a mistake and allows Dot to win by claiming 3 adjacent cells – just like old-school Tic-Tac-Toe. Hope you have as much fun with this as we did!
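For anyone who wants to tinker with the idea, here is a minimal sketch (in C++) of how the three-player rotation and the win check might be modeled. The turn order (X, then O, then Dot) and the “three adjacent cells” rule come straight from the description above; the actual winning lines on the triangular grid are left as a parameter, because the board’s adjacencies aren’t enumerated here.

    // A minimal sketch of the Tic-Tac-Toe-3 mechanics described above.
    // The set of winning lines is supplied by the caller because the exact
    // adjacencies of the triangular 9-cell board are an assumption here.
    #include <array>
    #include <vector>

    enum class Token { Empty, X, O, Dot };

    struct TicTacToe3 {
        std::array<Token, 9> board{};                                    // 9 cells, all start empty.
        std::array<Token, 3> order{Token::X, Token::O, Token::Dot};      // X goes first, then O, then Dot.
        int turn = 0;

        bool play(int cell) {                                            // Current player claims a cell.
            if (cell < 0 || cell > 8 || board[cell] != Token::Empty) return false;
            board[cell] = order[turn % 3];
            ++turn;
            return true;
        }

        Token winner(const std::vector<std::array<int, 3>>& lines) const {
            for (const auto& line : lines) {                             // Any 3 adjacent cells, one owner.
                Token t = board[line[0]];
                if (t != Token::Empty && t == board[line[1]] && t == board[line[2]]) return t;
            }
            return Token::Empty;                                         // No winner yet (or a tie).
        }
    };

With a winning-lines list for the real board, a game loop is just repeated calls to play() followed by winner().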

Sep 14 2011

According to a recent Census Bureau report, nearly 1 in 6 US citizens is now officially poor.

This report prompted a number of posts when it arrived in my Facebook account, ranging from fear and depression to anger over soaring CEO salaries and outsourcing practices that have contributed to unemployment, a loss of industrial capacity, and a loss of our ability to innovate.

There seems to be a lot of blame to go around, a lot of hand-waving and solution peddling, and plenty of political posturing and gamesmanship. But everything I’ve heard so far seems to miss one key point that I believe sits at the root of all of this.

We chose this and we can fix it!

When you look at the situation systemically you quickly realize that all of the extreme conditions we are experiencing are driven by a few simple factors that are amplified by the social and economic systems we have in place.

The economic forces at work in our country have selected for a range of business practices that are unhealthy and unsustainable. Nonetheless, we have consistently made choices, en masse, that drive these economic forces and their consequences.

For example, think about how we measure profitability. Generally we do the math and subtract costs from revenues. That seems simple enough, logical, and in fact necessary for survival. However, just as the three (perfect) laws of robotics ultimately lead to a revolution that overturns mankind (see I, Robot), this simplistic view of profitability leads to the economic forces that are driving our poverty, unemployment, and even the corrupting influence of money on our political system. Indeed, these forces are selecting for business practices and personal habits that reinforce these trends, so that we find ourselves in a vicious downward spiral.

Here’s how it works — At the consumer level we look for the lowest price. Therefore the manufacturers must lower costs in order to maintain profits. Reducing costs means finding suppliers that are cheaper and finding ways to reduce labor costs by reducing the work force, reducing wages, and outsourcing.

Then it goes around again. Since fewer people are working and those who are employed are earning less, there is even greater downward pressure on prices. Eventually the pressure is so great that other factors, such as quality, reliability, brand character, and sustainability, are driven out of the system, because the only possible choice for many becomes the product with the lowest price.

This process continues until the quality of the product and, more importantly, the quality of the product’s source are no longer important. With the majority of the available market focused on the lowest price (what they can afford) and virtually all innovation targeted at reducing costs, the availability of competing products of higher quality shrinks dramatically, and as a result of short supply their price quickly moves out of reach of the majority of the marketplace. This is also true for businesses sourcing the raw materials, products, and services that they use to do their work. As time goes on it becomes true of the labor market as well — since there are very few high-value jobs with reasonable pay, there are also very few high-quality workers with skills, and very little incentive to pursue training.

Remarkably, at a time when information technology and connectivity are nearly universal, the cost of education has risen dramatically and the potential benefit of education has fallen just as much — there simply are not enough high-value jobs to make the effort worthwhile. Precisely the opposite should be true. Education should be nearly free and universal, and highly skilled workers should be in high demand!

The economic forces set up by our simplistic view of profitability lead directly to the wealth disparity we see today where the vast majority have almost no wealth and are served by a large supply of cheap low-quality products while a very small minority lives in a very different world with disproportionately large wealth, power, and access to a very small quantity of high quality products and services.

Having reached this condition additional forces have taken hold to strengthen these extremes. Consider that with the vast majority of consumers barely able to purchase the low quality products that are currently available it is virtually impossible for anyone to introduce improved products and services. One reason is that such a product would likely require a higher price and would be difficult to sell in the current marketplace. Another is that any product that is successful is quickly dismantled by the extremely powerful incumbent suppliers who, seeing a threat to their dominance, will either defeat the new product by killing it, or absorb it by purchasing it outright.

An unfortunate side-effect of this environment is that most business plans written today by start-ups go something like: 1. Invent something interesting, 2. attract a lot of attention, 3. sell the company to somebody else for a big bag of money.

These plans are all about the exit strategy. There is virtually no interest in building anything that has lasting value and merely suggesting such a thing will brand you as incompetent, naive, or both. Sadly, in case you missed it, this also leads to a kind of standard for evaluating sound business decisions. The prevailing belief is that sound business decisions are short-term and that long-term thinking is both too risky and too costly to be of any value.

Our purchasing practices aren’t the only driver of course. Another important driver is finance and specifically the stock market. These same pure-profit forces drive smaller vendors out of the marketplace because they lack the access to capital and economies of scale required to compete with larger vendors. As a result smaller vendors are killed off or gobbled up by larger companies until there are very few of them (and very few choices for consumers). In addition, the larger companies intentionally distort the market and legal environments to create barriers to entry that prevent start-ups from thriving.

All of these large, publicly traded companies are financed by leveraging their stock prices. Those stock prices are driven again by our choices – indirectly. Most of us don’t really make any direct decisions about stock prices. Instead we rely on largely automatic systems that manage our investment portfolios to provide the largest growth (profit again). So, if one company’s growth looks more likely than another these automated systems sell the undesirable company and buy the more desirable company in order to maximize our gains. The stock price of the company being sold drops and the stock price of the company being purchased rises. Since these companies are financed largely by their ability to borrow money against the value of their stock, these swings in stock price have a profound effect on the amount of money available to those companies and to their ability to survive.

In the face of these forces even the best company manned by the best people with the best of intentions is faced with a terrible choice. Do something bad, or die. Maybe it’s just a little bad at first. Maybe a little worse the next time around. But the forces are relentless and so eventually bad goes to worse. Faced with a globally connected marketplace any companies that refuse to make these bad choices are eventually killed off. It is usually easier and less costly to do the wrong thing than it is to do the right thing and there is always somebody somewhere willing to do the wrong thing. So the situation quickly degrades into a race to the bottom.

In this environment, persons of character who refuse to go along with a bad choice will be replaced with someone who will go along. Either they will become so uncomfortable with their job that they must leave for their own safety and sanity, or they will be forcibly removed by the other stakeholders in the company. This reality is strongest in publicly traded companies where the owners of the company are largely anonymous and intentionally detached from the decisions that are made day-to-day.

The number of shares owned determines the voting strength of a stockholder. If most of the stock of your publicly traded company is owned by investment managers who care only about the growth of your stock price then they will simply vote you off the board if your decisions are counter to that goal. If you fight that action they will hire bigger lawyers than yours and take you out that way. For them too, this is a matter of survival because if they don’t “play the game” that way then their customers (you and I with retirement accounts etc) will move our money elsewhere and put them out of a job.

These situations frequently tread into murky areas of morality due to the scale of the consequences. One might be led to rationalize: on the one hand, the thing we’re about to do is wrong; on the other hand, if we don’t do it then thousands of people will lose their jobs, so which is really the greater good? Discomfort with a questionable, but hidden, moral decision — or the reality of having to fire thousands of workers? Then, of course, after having lived and worked in an environment of questionable ethics for an extended period, many become blind to the ethics of their decisions altogether. That’s where the phrase “It’s just business, nothing personal…” comes from.

Eventually these large companies are pressured into making choices that are so bad they can’t be hidden, or are so bad that they are illegal! So, in order to survive they must put pressure on our political systems to change the laws so that they can legally make these bad choices, or at least so they can get away with making them.

These forces then drive the same play-or-die scenarios into our political system. If you are in politics to make a difference you will quickly discover that the only way to survive is to pander to special interests. If you don’t then they will destroy you in favor of politicians who will do what these large corporations need in order to survive.

It seems evil, immoral, and just plain wrong, but it’s really just math. There is nothing emotional, supernatural or mysterious at work here. In much the same way sexual pressures drive evolution to select for beautiful and otherwise useless plumage on the peacock, our myopic view of profitability drives economic forces to select for the worst kinds of exploitation, corruption, and poverty.

So what can we do about it?

It seems simplistic but we all have the power to radically change these forces. Even better, if we do that then the tremendous leverage that is built into this system will amplify these changes and drive the system toward the opposite extreme. Imagine a positive spiral instead of a negative one.

There are two key factors that we can change about our choices that will reverse the direction of these forces so that the system drives toward prosperity, resilience, and growth.

1. Seek value before price. By redefining profitability as the generation of value we can fundamentally change the decisions that are required for a company to survive, compete, and thrive. Companies seeking to add value retain and develop skilled workers, demand high quality sources, and require integrity in their decision making. They also value long-term planning since the best way to capitalize on adding value is to take advantage of the opportunities that arise from that added value in the future.

2. Seek transparency. In order for bad decisions to stand in the face of a marketplace that demands value above price there must be a place to hide those decisions. Transparency removes those hiding places and places a high premium on brand development. As a result it becomes possible to convert difficult decisions into positive events for the decision makers in the company, and ensures that they will be held accountable for the choices they make.

I blasted through those pretty quickly because I wanted to make the points before getting distracted by details. So they might seem a bit theoretical but they are not. For each of us there are a handful of minor adjustments we can make that represent each of these two points.

And, if you’re thinking that a few of us can’t make any significant change then think again. The bipolar system we have now is so tightly stretched at the bottom that a tiny fraction of the market can have a profound impact on the system. Profit margins are incredibly thin for most products. So thin that a persistent 10% reduction in volume for any particular vendor would require significant action on their part, and anything approaching 1% would definitely get their attention. If you couple that with the fact that the vast majority of the population belongs to this lower segment of the market then you can see how much leverage we actually do have to change things.

Consider, for example, that the persistence of a handful of organic farmers and the demand generated by their customers has caught the attention of companies as large as WalMart. They are now giving significant shelf space to organics and are actively developing that market.

Putting Value Before Price

While we’re on the subject, the Food, Inc. movie site has a list of things you can do to put value before price when you eat. These are good, concrete examples of things we can do – and they can be generalized to other industries.

In general this boils down to a few simple concepts:

  • Look for opportunities to choose higher value products over lower value products wherever possible. This has two important effects. The first is that you’re not buying the lower value product, and given the razor-thin margins on those products the companies making them will be forced to quickly re-think their choices. The second is that you will instantly become part of a marketplace that supports higher value products. When the industry sees that higher value wins over lower value they will move in that direction. Given how tightly they are stretched on the low end we should expect that motion to be dramatic – it will quite literally be a matter of survival.
  • Look for opportunities to support higher value companies. Demonstrating brand loyalty to companies that generate value not only sends a clear message to the industry that generating value matters, but it also closes the equation on financing and investment. This ensures that the money will be there to support further investments in driving value and will also convert your brand loyalty into a tangible asset. Decision makers will pay close attention to loyal customers because the reality of that loyalty is that it can be lost.
  • Be sure to support the little guys. Large organizations are generally risk averse and slow to change. Smaller, more innovative companies are more likely to provide the kinds of high value alternatives you are looking for. They are also more likely to be sensitive to our needs. Supporting these innovative smaller companies provides for greater diversity, which is important, but it also sets examples for new business models and new business thinking that works. A successful start-up that focuses on generating value for their customers serves as a shining example for others to follow. We need more folks like these in our world and the best way to ensure they will be there is to create a demand for them — that means you and me purchasing their products and singing their praises so they can reach their potential.

A few finer points to make here… I’m not saying that anyone should ignore price. What I am saying is that price should be secondary to value in all of our choices. Pure marginal profit decisions lead to some terrible systemic conditions. We need to get that connection clear in our heads so I’ll say it again. If you fail to make value your first priority before price then you are making a choice: You are choosing poverty, unemployment, and depression.

This larger view of rational economics may not always fit into a handy formula (though it can be approximated), but making value a priority will select for the kinds of products, services, and practices that we really want. Including this extra “value factor” in your decisions will bring about industries that compete and innovate to improve our quality of life and our world in general. That kind of innovation leads to increased opportunities and a higher standard of living for everyone. A virtuous circle. Those are the kinds of industries we want to be selecting for in the board room, on main street, and at that ballot box.

Seeking Transparency

One of the difficult things about seeking value is the ability to gauge it in the first place. Indeed, one of the tricks we have learned to borrow from nature is lying! Just as an insect only needs to look poisonous in order to keep its predators away, a product, service, or company only needs to give the appearance of high value if we can be fooled. The first goal of seeking transparency addresses that issue. Here are some general guidelines for seeking transparency.

  • Demand Integrity from the institutions you support. Be vocal about it. There should be a heavy price to pay for any kind of false dealing, false advertising, or breach of trust. Just as brand loyalty has value, so does the opposite. If a company stands to lose customers for long periods (perhaps forever) after a breach of trust they will quickly place a dollar figure on those potential losses, and even the most greedy of the bunch will recognize that there is value in integrity. More precisely, the risk associated with making a shady decision will be well understood and clearly undesirable. The immediate effects of associating integrity with brand value will be monetary and will drive decisions for the sake of survival. However, over the long term this will select for decision makers who naturally have and value integrity, since they will consistently have an advantage at making the correct decisions. When we get those guys running the game we will be on solid footing.
  • Get close to your decisions and make them count. One of the key factors that causes trouble in the stock market is that most of the decisions are so automatic that nobody feels any real responsibility for them. As long as the right amount of money is being made then anything goes. That’s terrible! You should know how your money is invested and you should avoid investing in companies that make choices you don’t agree with. If you think about it, your money is representing your interests in these companies. If you let it go to a company that does something bad (you decide what that is), then you are essentially endorsing that decision. Don’t! Instead, invest in companies that do good things, deal honestly, and consistently add value. That way your money is working for you in more ways than one and you have nothing to regret.
  • Seek Simplicity and develop a healthy skepticism for complexity. Certainly some things are complex by their nature, but one of the best ways to innovate and add value is to simplify those things. Complexity also has a way of hiding trouble, so that even with the best of intentions unintended consequences can put people in bad positions. That’s not good for the seller or the buyer, nor the fellow on the shop floor, etc. Given a choice between otherwise equal products or services that are simple or complex, choose the simpler version.
  • Communicate about your choices and about what you learn. These days we have unprecedented abilities to communicate and share information. Wherever you have the opportunity to let a company know why you chose them, or to let the other guys know why you did not choose them, be sure to speak up. Then, be sure to let your friends know – and anyone else who will listen. The good guys really want and need your support!

    Another key point about communicating is that it gives you the power that marketing firms and politicians wish they had. Studies show that we have become so abused by marketing efforts that advertisements have begun to have a negative effect on sales! The most powerful positive market driver is now direct referrals through social media. Therefore one of the most powerful tools we have to change things for the better is to communicate with each other about our choices and to pass on the message that our choices matter. That kind of communication can cut through a lot of lies. By all means – be careful and do your research. Then, make good choices and tell everybody!

Apr 20 2011

This week has seen some truly amazing spring weather around the MadLab including everything from tornado threats and sustained high winds to flash flooding and dense fog.

April showers, as the saying goes, will bring May flowers – so we don’t mind too much as long as the power stays on and the trees don’t fall on the roof!

In cyberspace things are also picking up it seems. For about the last three weeks we’ve seen declining severity and frequency of spam storms. However this week has been different.

Beginning about 3 days ago we’ve seen a surge in new spam storms and in particular a dramatic increase in the use of hacked web sites and URL shortener abuse.

Previous 30 days of spam storms.

After 3 weeks of declining spam storms, a new surge starts this week...

There is also another notable change in the data. For several years now there has been a pretty solid 24 hour cyclical pattern to spam storms. This week we’re seeing a much more chaotic pattern. This and other anecdotal evidence seems to suggest that the new spams are being generated more automatically and at lower levels across wider bot nets.

We used to see distinct waves of modifications and responses to new filtering patterns. Now we are seeing a much more chaotic and continuous flow of new spam storms as current campaigns are continuously modified to defeat filtering systems.

Chaotic spam storm arrival rates over the past 48 hours

There’s no telling if these trends will continue, nor for how long, but they do seem to suggest that new strategies and technologies are coming into use in the blackhatzes’ camps. No doubt this is part of the response to the recent events in the anti-spam world.

Microsoft takes down Rustock spam botnet

DOJ gets court permission to attack botnet

In response to the blackhatzes changes my anti-spam team and I have developed several new protocols and modified several of our automated friends (rule-bots) to take advantage of new artifacts in the data. The result has been a dramatic increase in the creation rate of new heuristics, reduced response times, and improved preemptive captures.

Rule Activity Display shows higher rule rates and smoother hit densities

With these changes, changes in blackhatz tactics, and new sniffer engine updates coming along I’m going to be very busy watching the blinking lights to keep track of the weather outside the MadLab and in cyberspace.

Apr 7 2011

Here’s a new term: quepico, pronounced “kay-peek-oh”

Yesterday Message Sniffer had a quepico moment when Brian (The Fall Guy) of the sortmonsters coded rule number 4,000,000 into the brains of ARM’s flagship anti-spam software.

You read that right. Since I built the first version out of spare robot parts just over a decade ago, more than 4.00e+6 heuristics have been pumped into it and countless trillions (yes, trillions with a “t”) of spam and malware messages have been filtered with it.

I had another quepico moment yesterday when I realized that a task I once did by myself only a couple of hours per day had now expanded into a vast full-time operation not only for the folks in my specific chunk of the world, but also for many other organizations around the globe.

The view from SortMonsters Corner

Just as that 4 millionth rule represents a single point of consciousness in the mind of Sniffy, these realizations are represented somewhere in my brain as clusters of neurons that fire in concert whenever I recall those quepico moments.

Interestingly some of these same neurons will fire when I think of something similar, and those firings will lead to more, and more until it all comes back to me or I think of something new that’s somehow related. This is why my wetware is so much better than today’s hardware at recognizing pictures, sounds, phrases, ideas, and concepts when the available data is sketchy or even heavily obscured like much of the spam and malware we quarantine to protect us from the blackhatzes.

Blackhatzes: noun, plural. People and organizations that create malware and spam or otherwise engineer ways to harm us and exploit or compromise our systems. These are the bad guys that Sniffy fights in cyberspace.

Sniffy on guard

At the moment, most of Sniffy’s synthetic intuition and intelligence is driven by cellular automata and machine learning systems based on statistics, competitive and cooperative behaviors, adaptive signal conversion schemes, and pattern recognition.

All of that makes sniffy pretty good but there is something new on the horizon.

Quepico networks.

For several years now I’ve been experimenting with a new kind of self organizing learning system that appears to be able to identify the underlying patterns and relationships in a stream of data without guidance!

These networks are based on layers of elements called quepicos that learn to recognize associations between messages that they receive. These are organized into layers of networks that identify successively higher abstractions of the data presented to the lower layers.

Noisy Decision in a Quepico Network

The interesting thing about the way these work is that unlike conventional processing elements that receive data on one end and send data from the other, quepicos send and receive messages on both sides simultaneously. As a result they are able to query each other about the patterns they think they see in addition to responding to the patterns they are sure they see.

When a quepico network is learning or identifying patterns in a stream of data, signals flow in both directions – both up the network to inform higher abstractions and down the network to query about features in the data.

In very much the same way we believe the brain works, these networks achieve very high efficiencies by filtering their inputs based on clues discovered in the data they recognize. The result is that processing power is focused on the elements that are most likely to successfully recognize important features in the data stream.

Put another way, these systems automatically learn to focus their attention on the information that matters and ignore noise and missing data.
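To make that two-way flow a little more concrete, here is a tiny illustrative sketch in C++ of how a single element might behave. This is only my reading of the description above, not the actual quepico implementation: patterns the element is sure about are reported upward, while patterns it merely suspects are sent downward as queries.

    // Illustrative only (not the real quepico code). A single element reports
    // patterns it is confident about to the layer above and sends questions
    // about uncertain patterns to the layer below.
    #include <functional>
    #include <set>
    #include <string>

    struct Quepico {
        std::function<void(const std::string&)> sendUp;     // informs the layer above
        std::function<void(const std::string&)> sendDown;   // queries the layer below
        std::set<std::string> learned;                      // associations this element has learned

        void onSignalFromBelow(const std::string& feature) {
            if (learned.count(feature)) {
                if (sendUp) sendUp("recognized:" + feature);   // sure: pass the recognition upward
            } else {
                if (sendDown) sendDown("query:" + feature);    // unsure: ask below for supporting detail
            }
        }

        void onQueryFromAbove(const std::string& question) {
            if (sendDown) sendDown("focus:" + question);       // higher layers can direct attention downward
        }
    };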

So what’s in a name? Why call these things quepicos? As it turns out – that’s part of my own collection of quepico moments.

One day while I was drawing some diagrams of these networks for an experiment, my youngest son Ian asked me what I was doing. As I started to explain I realized I needed a name for them. I enlisted his help and we came up with the idea of calling them thinktons. We were both very excited – a quepico moment.

While looking around to see if this name would cause confusion I discovered (thanks Google) that there were several uses of the term thinkton and even worse that the domain thinkton.com was already registered (there isn’t a site there, yet). A disappointing, but definite quepico moment.

So, yesterday, while roaming sortmonster’s corner and pondering how far we’ve come and all of the little moments and events along the way (trillions of little = pico, que = questions, whats, etc) I had another quepico moment. The word quepico was born.

Google’s translator, a favorite tool around the mad lab and sortmonster’s corner, translates “que pico” as “that peek.” That fits pretty well, since quepicos tend to land on statistical peaks in the data. So quepico it is — I’ll have to go tell Ian!

Feb 21 2011

In Robert C. Martin’s book Clean Code he writes:

“Comments are not like Schindler’s List. They are not “pure good.” Indeed, comments are, at best, a necessary evil. If our programming languages were expressive enough, or if we had the talent to subtly wield those languages to express our intent, we would not need comments very much — perhaps not at all.”

When I first read that, and the text that followed, I was not happy. I had been teaching for a long time that prolific comments were essential and generally a good thing. The old paradigm held that describing the complete functionality of your code in the right margin was a powerful tool for code quality – and the paradigm worked! I could fill a book with stories where that style saved the day. The idea of writing code with as few comments as possible seemed like pure madness!

However, for some time now I have been converted and have been teaching a new attitude toward comments and a newer coding style in general. This past weekend I had an opportunity to revisit this and compare what I used to know with what I know now.

While repairing a subtle bug in Message Sniffer (our anti-spam software) I re-factored a function that helps identify where message header directives should be activated based on the actual headers of a message.

https://svn.microneil.com/websvn/diff.php?repname=SNFMulti&path=%2Ftrunk%2Fsnf_HeaderFinder.cpp&rev=34

One of the most obvious differences between the two versions of this code is that the new one has almost no comments compared to the previous version! As it turns out (and as suggested by Robert Martin) those comments are not necessary once the code is improved. Here are some of the key things that were done:

  • Logical expressions were broken into pieces and assigned to well named boolean variables.
  • The if/else ladder was replaced with a switch/case.
  • A block of code designed to extract an IP address from a message header was encapsulated into a function of its own.

The Logical Expressions:

Re-factoring the logical expressions was helpful in many ways. Consider the old code:

        if(                                                                     // Are we forcing the message source?
          HeaderDirectiveSource == P.Directive &&                               // If we matched a source directive and
          false == ScanData->FoundSourceIP() &&                                 // the source is not already set and
          ActivatedContexts.end() != ActivatedContexts.find(P.Context)          // and the source context is active then
          ) {                                                                   // we set the source from this header.

There are three decision points involved in this code. Each is described in the comments. Not too bad. However it can be better. Consider now the new code:

            case HeaderDirectiveSource: {

                bool HeaderDirectiveSourceIPNotSet = (
                  0UL == ScanData->HeaderDirectiveSourceIP()
                );

                bool SourceContextActive = (
                  ActivatedContexts.end() != ActivatedContexts.find(P.Context)
                );

                if(HeaderDirectiveSourceIPNotSet && SourceContextActive) {

The first piece of this logic is resolved by using a switch/case instead of an if/else tree. In the previous version there was a comment that said the code was too complicated for a switch/case. That comment was lying! It may have been true at one time, but once the comment had outlived its usefulness it stuck around spreading misinformation.

This is important. Part of the reason this comment outlived its usefulness is that with the old paradigm there are so many comments that we learn to ignore them most of the time. With the old paradigm we treated comments as a running narrative, with each line of comment attached to a line of code as if the two were “one piece”. As a result we tended to ignore any comments that weren’t part of the code we were modifying or writing. Comments can be much more powerful than that, and I’ll talk about that a little later.

The next two pieces of logic involve testing conditions that are not otherwise obvious from the code. By encapsulating these in well named boolean variables we are able to achieve a number of positive effects:

  • The intention of each test is made plain.
  • During a debugging session the value of that test becomes easily visible.
  • It becomes easier to spot errors in the “arithmetic” performed for each test.
  • The matching comment is no longer required.

So you don’t miss it: a second bug was fixed during this task because of the way the re-factoring clarified this code. The original test to see whether the header directive source had already been set was actually looking at the wrong piece of data!

Finally, the if() that triggers the needed response is now perfectly clear because it says exactly what it means without any special knowledge.

At 0-dark-hundred, starting your second case of Jolt Cola (or RedBull, or your other favorite poison) we’re all a little less than our best. So, it helps if what we’re looking at is as clear as possible.

Even if you’re not pulling an all-nighter, it’s much easier if you don’t have to remember that (0UL == ScanData->HeaderDirectiveSourceIP()) really means the header IP source has not been set. Much easier if that bit of knowledge has already been spelled out – and quite a bonus that the local variable HeaderDirectiveSourceIPNotSet shows up automatically in your debugger!

Encapsulating Code:

In the previous version the code that did the heavy lifting used to live inside the test that triggered it. Consider the old code:

        if(                                                                     // Are we forcing the message source?
          HeaderDirectiveSource == P.Directive &&                               // If we matched a source directive and
          false == ScanData->FoundSourceIP() &&                                 // the source is not already set and
          ActivatedContexts.end() != ActivatedContexts.find(P.Context)          // and the source context is active then
          ) {                                                                   // we set the source from this header.
            // Extract the IP from the header.

            const string digits = "0123456789";                                 // These are valid digits.
            unsigned int IPStart =
              Header.find_first_of(digits, P.Header.length());                  // Find the first digit in the header.
            if(string::npos == IPStart) return;                                 // If we don't find it we're done.
            const string ipchars = ".0123456789";                               // These are valid IP characters.
            unsigned int IPEnd = Header.find_first_not_of(ipchars, IPStart);    // Find the end of the IP.
            if(string::npos == IPEnd) IPEnd = Header.length();                  // Correct for end of string cases.
            ScanData->HeaderDirectiveSourceIP(                                  // Extract the IP from the header and
              Header.substr(IPStart, (IPEnd - IPStart))                         // expose it to the calling scanner.
            );
            Directives |= P.Directive;                                          // Add the flags to our output.
        }

Again, not too bad. Everything is commented well and there isn’t a lot of code there. However it is much clearer the new way:

                if(HeaderDirectiveSourceIPNotSet && SourceContextActive) {
                    ScanData->HeaderDirectiveSourceIP(
                      extractIPFromSourceHeader(Header)
                    );
                    Directives |= P.Directive;                                  // Add the flags to our output.
                }

All of the heavy lifting has now been reduced to two lines of code (arguably one). By moving the meat of this operation off to extractIPFromSourceHeader() this block of code becomes very clear and very simple. If(this is going on) then { do this }. The mechanics of { do this } are part of a different and more focused discussion.

This is helpful not only because it clarifies the code, but also because if you are going to refine and test that part of the code it now lives in its own world, where it can be wrapped with a test function and debugged separately. Not so when it lived deep inside the code of another function.
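For reference, here is a hypothetical reconstruction of what the extracted helper might look like, pieced together from the old inline code shown above. The real extractIPFromSourceHeader() in the repository may differ in its signature and details (for example, how the search offset past the header name is supplied).

    // Hypothetical reconstruction based on the old inline logic above; the
    // actual function in snf_HeaderFinder.cpp may differ.
    #include <string>
    using std::string;

    string extractIPFromSourceHeader(const string& Header, size_t searchFrom = 0) {
        const string digits = "0123456789";                            // These are valid digits.
        size_t IPStart = Header.find_first_of(digits, searchFrom);     // Find the first digit in the header.
        if(string::npos == IPStart) return "";                         // If we don't find one we're done.
        const string ipchars = ".0123456789";                          // These are valid IP characters.
        size_t IPEnd = Header.find_first_not_of(ipchars, IPStart);     // Find the end of the IP.
        if(string::npos == IPEnd) IPEnd = Header.length();             // Correct for end of string cases.
        return Header.substr(IPStart, (IPEnd - IPStart));              // Just the IP text.
    }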

Powerful Comments:

In the old paradigm comments were a good thing, but they were weakened by overuse! I hate to admit being wrong in the past, but I am proud to admit that I am constantly learning and improving.

When comments are treated like a narrative describing the operation of the code there are many benefits, but there are also problems. The two biggest problems with narrating source code like this are that we learn to ignore comments that aren’t attached to code we’re working on and as a result of ignoring comments we tend to leave some behind to lie to us at a later date.

The new paradigm has most of the benefits of the narrative method implemented in better encapsulation and naming practices. This tight binding of intent and code virtually eliminates the biggest problems associated with comments. In addition the new method gives comments a tremendous boost in their power.

Since there are fewer comments we tend to pay attention to them when we see them. They are bright markers for bits of code that could probably be improved (if they are the narrative type); or they are important messages about stuff we need to know. With so few of them and with each of them conveying something valuable we dare not ignore them, and that’s a good thing.

My strong initial reaction to Robert’s treatment of comments was purely emotional — “Don’t take my comments away! I like comments! They are good things!”

I now see that although the sound bite seems to read “Eliminate All Comments!”, the reality is more subtle and powerful, and even friendly to my beloved comments. Using them sparingly and in just the right places makes them more powerful and more essential. I feel good about that. I know that for the past couple of years my new coding style has produced some of the best code I’ve ever written. More reliable, efficient, supportable, and more powerful code.

Summary:

If I really wanted to take this to an extreme I could push more encapsulation into this code and remove some redundancy. For example, the multiple instances of “Directives |= P.Directive;” stand out as redundant, and why not completely encapsulate things like ScanData->drillPastOrdinal(P.Ordinal) and so forth into well named explicit functions? Why not convert some of the tests into object methods?

Well, I may do some of those things on a different day. For today the code is much better than it was, it works, it’s clear, and it’s efficient. Since my task was to hunt down and kill a bug I’ll save any additional style improvements for another time. Perhaps I’ll turn that into a teaching exercise for some up-and-coming code dweller in the future!

Here are a few good lessons learned from this experience:

  • It is a good idea to re-factor old code when resolving bugs as long as you don’t overdo it. Applying what you have learned since the last revision is likely to help you find bugs you don’t know exist. Also, incremental improvements like this tend to cascade into improvements on a larger scale ultimately improving code quality on many vectors.
  • If you write code you should read Clean Code – even if you’re not writing Java! There are lots of good ideas in there. In general we should always be looking for new ways to improve what we do. Try things out and keep the parts that work.
  • Don’t cast out crazy ideas without first looking them over and trying them out. Often the best ideas are crazy ones. “You’re entirely bonkers. But I’ll tell you a secret. All the best people are.” – Alice
  • Good coding style does matter, if you do it right.
Nov 21 2010

Often church sound folk are looking for the cheapest possible solution for recording their services. In this case, they want to use a low-end voice recorder and record directly from the mixing board.

There are a number of challenges with this. For one, the voice recorder has no Line input – it only has a Mic input. Another challenge is the AGC on the recorder, which has a tendency to crank the gain way up when nobody is speaking and then crank it way down when they do speak.

On the first day they presented this “challenge” they simply walked up (at the last minute) and said: “Hey, plug this into the board. The guys at Radio Shack said this is the right cable for it…”

The “right cable” in this case was a typical VCR A/V cable with RCA connectors on both ends. On one end there was a dongle to go from the RCA to the 1/8th inch stereo plug. The video part of the cable was not used. The idea was to connect the audio RCA connectors to the tape-out on the mixer and plug the 1/8th inch end of things into the Mic input on the voice recorder.

This by itself was not going to work because the line level output from the mixer would completely overwhelm the voice recorder’s mic input – but being unwilling to just give up, I found a pair of RCA-to-1/4-inch adapters and plugged the RCA end of the cable into a pair of SUB channels on the mixer (in this case 3 & 4). Then I used the sub channel faders to drop the line signal down to something that wouldn’t overwhelm the voice recorder. After a minute or two of experimenting (all the time I had, really) we settled on a setting of about -50 dB. That’s just about all the way off.

This worked, sort of, but there were a couple of problems with it.

For one, the signal-to-noise ratio was just plain awful! When the AGC (Automatic Gain Control) in the voice recorder cranks up during quiet passages, it records all of the noise from the board plus anything else it can get its hands on from the room (even past the gates and expanders!).

The second problem was that the fader control down at -50 was very touchy. Just a tiny nudge was enough to send the signal over the top and completely overload the little voice recorder again. A nudge the other way and all you could get was noise from the board!

(Side note: I want to point out that this is a relatively new Mackie board and that it does not have a noise problem! In fact the noise floor on the board is very good. However, the voice recorder thinks it’s trying to pick up whispers from a quiet room, so it maxes out its gain in the process. During silent passages there is no signal to record, so all we can give to the little voice recorder is noise floor — it takes that and adds about 30 dB to it (I’m guessing) and that’s what goes onto its recording.)

While this was reportedly a _HUGE_ improvement over what they had been doing, I wasn’t happy with it at all. So, true to form, I set about fixing it.

The problem boils down to matching the pro line level output from the mixer to the consumer mic input of the voice recorder.

The line out of the mixer is expecting to see a high input impedance while providing a fairly high voltage signal. The output stage of the mixer itself has a fairly low impedance. This is common with today’s equipment — matching a low impedance (relatively high power) output to one (or more) high impedance (low power, or “bridging”) input(s). This methodology provides the ability to “plug anything into anything” without really worrying too much about it. The Hi-Z inputs are almost completely unnoticed by the Low-Z outputs, so everything stays pretty well isolated and the noise floor stays nice and low… but I digress…

On the other end we have the consumer grade mic input. Most likely it’s biased a bit to provide some power for a condenser mic, and it’s probably expecting something like a 500-2500 ohm impedance. It’s also expecting a very low level signal – that’s why connecting the line level Tape-Out from the mixer directly into the Mic-Input completely overwhelmed the little voice recorder.

So, we need a high impedance on one end to take a high level line signal, and a low impedance on the other end to provide a low level (looks like a mic) signal.

We need an L-Pad !

As it turns out, this is a simple thing to make. Essentially an L-Pad is a simple voltage divider network made of a couple of resistors. The input goes to the top of the network where it sees both resistors in series and a very high impedance. The output is taken from the second resistor which is relatively small and so it represents a low impedance. Along the way, the voltage drops significantly so that the output is much lower than the input.

Another nifty thing we get from this setup is that any low-level noise generated at the mixer is also attenuated in the L-Pad… so much so that whatever is left of it is essentially “shorted out” by the low impedance end of the L-Pad. That will leave the little voice recorder with a clean signal to process. Any noise that shows up when it cranks up its AGC will be noise it makes itself.

(Side note: Consider that the noise floor on the mixer output is probably at least 60 dB down from a nominal signal (at 0 dB). Subtract another 52 dB from that and the noise floor from that source should be -112 dB! If the voice recorder manages to scrape noise out of that then most of it will come from its own preamp etc…)

We made a quick trip to Radio Shack to see what we could get.

To start with we picked up an RCA to 1/8th inch cable. The idea was to cut the cable in the middle and add the L-Pad in line. This allows us to be clear about the direction of signal flow – the mixer goes on the RCA end and the voice recorder goes on the 1/8th inch end. An L-Pad is directional! We must have the input on one side and the output on the other side. Reverse it and things get worse, not better.

After that we picked up a few resistors. A good way to make a 50 dB L-Pad is with a 33K Ω resistor for the input and a 100 Ω resistor for the output. These parts are readily available, but I opted to go a slightly different route and use a 220K Ω resistor for the input and a 560 Ω resistor for the output.

There are a couple of reasons for this:

Firstly, a 33K Ω impedance is ok, but not great as far as a “bridging” input goes so to optimize isolation I wanted something higher.

Secondly, the voice recorder is battery powered and tiny. If it’s trying to bias a 100 Ω load to provide power it’s going to use up its battery much faster than it will if the input impedance is 560 Ω. Also, 560 Ω is very likely right at the low end of the impedance range of the voice recorder’s input, so it should be a good match. It’s also still low enough to “short out” most of the noise that might show up on that end of things, for all intents and purposes.

Ultimately I had to pick from the parts they had in the bin so my choices were limited.
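As a sanity check on those resistor choices, the attenuation of an unloaded divider like this is 20 * log10((Rseries + Rshunt) / Rshunt). The quick sketch below (which ignores the loading from the recorder’s input, so the real numbers will shift a little) lands right around the 50 dB and 52 dB figures mentioned above.

    // Quick arithmetic check of the L-Pad attenuation, unloaded case.
    #include <cmath>
    #include <cstdio>

    double padAttenuationDb(double seriesR, double shuntR) {
        return 20.0 * std::log10((seriesR + shuntR) / shuntR);   // standard voltage-divider ratio in dB
    }

    int main() {
        std::printf("33K / 100 ohm : %.1f dB\n", padAttenuationDb(33000.0, 100.0));    // about 50 dB
        std::printf("220K / 560 ohm: %.1f dB\n", padAttenuationDb(220000.0, 560.0));   // about 52 dB
        return 0;
    }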

Finally I picked up some heat-shrink tubing so that I could build all of this in-line and avoid any chunky boxes or other craziness.

Here’s how we put it all together:

1. Heat up the old soldering iron and wet the sponge. I mean old too! I’ve had this soldering iron (and sponge) for close to 30 years now! Amazing how long these things last if you take care of them. The trick seems to be – keep your tip clean. A tiny sponge & a saucer of water are all it takes.

2. Cut the cable near the RCA end after pulling it apart a bit to provide room to work. Set the RCA ends aside for now and work with the 1/8th in ends. Add some short lengths of appropriately colored heat-shrink tubing and strip a few cm of outer insulation off of each cable. These cables are CHEAP, so very carefully use a razor knife to nick the insulation. Then bend it open and work your way through it so that you don’t nick the shield braid inside. This takes a bit of finesse so don’t be upset if you have to start over once or twice to get the hang of it. (Be sure to start with enough cable length!)

3. Twist the shield braid into a stranded wire and strip about 1 cm of insulation away from the inner conductor.

4. Place a 560 Ω resistor alongside the inner conductor. Twist the inner conductor around one lead of the resistor, then twist the shield braid around the other end of the resistor. Then solder these connections in place. Use caution — the insulation in these cables is very sensitive to heat. Apply the tip of your soldering iron to the joint as far away from the cable as possible and then sweat the solder toward the cable from there. This allows you to get a good joint without melting the insulation. Do this for both leads.

5. The 560 Ω resistors are now across the output side of our L-Pad cable. Now we will add the 220K Ω series resistors. In order to do this in-line and make a strong joint we’re going to use an old “western-union” technique. This is the way they used to join telegraph cables back in the day – but we’re going to adapt it to the small scale for this project. To start, cross the two resistor’s leads so that they touch about 4mm from the body of each resistor.

6. Holding the crossing point, 220K Ω resistor, and 560 Ω lead in your right hand, wind the 220K Ω lead tightly around the 560 Ω lead toward the body of the resistor and over top of the soldered connection.

7. Holding the 560 Ω resistor and cable, wind the 560 Ω resistor’s lead tightly around the 220K Ω resistor’s lead toward the body of the resistor.

8. Solder the joint being careful to avoid melting the insulation of the cable. Apply the tip of your soldering iron to the part of the joint that is farthest from the inner conductor and sweat the solder through the joint.

9. Clip off the excess resistor leads, then slide the heat-shrink tubing over the assembly toward the end.

10. Slide the inner tubing back over the assembly until the entire assembly is covered. The tubing should just cover 1-2 mm of the outer jacket of the cable and should just about cover the resistors. The resistor lead that is connected to the shield braid is a ground lead. Bend it at a right angle from the cable so that it makes a physical stop for the heat-shrink tubing to rest against. This will hold it in place while you shrink the tubing.

11. Grab your hair dryer (or heat gun if you have one) and shrink the tubing. You should end up with a nice tight fit.

12. Grab the RCA end of the cable and lay it against the finished assembly. Red for red, and white for white. You will be stripping away the outer jacket approximately 1 cm out from the end of the heat-shrink tubing. This will give you a good amount of clean wire to work with without making the assembly too long.

13. After stripping away the outer jacket from the RCA side and prepping the shield braid as we did before, strip away all but about 5mm of the insulation from the inner conductor. Then slide a length of appropriately colored heat shrink tubing over each. Get a larger diameter piece of heat-shrink tubing and slide it over the 1/8 in plug end of the cable. Be sure to pick a piece with a large enough diameter to eventually fit over both resistor assemblies and seal the entire cable. (Leave a little more room than you think you need.)

14. Cross the inner conductor of the RCA side with the resistor lead of the 1/8th in side as close to the resistor and inner conductor insulation as possible. Then wind the inner conductor around the resistor lead tightly. Finally, solder the joint in the usual way by applying the tip of your soldering iron as far from the cable as possible to avoid melting the insulation.

15. Bend the new solder joints down flat against the resistor assemblies and clip off any excess resistor lead.

16. Slide the colored heat-shrink tubing down over the new joints so that it covers part of the resistor assembly and part of the outer jacket of the RCA cable ends. Bend the shield braid leads out at right angles as we did before to hold the heat-shrink tubing in place. Then go heat them up.

17. Now we’re going to connect the shield braids and build a shield for the entire assembly. This is important because these are unbalanced cables. Normally the shield braids provide a continuous electrical shield against interference. Since we’ve stripped that away and added components we need to replace it. We’ll start by making a good connection between the existing shield braids and then we’ll build a new shield to cover the whole assembly. Strip about 20 cm of insulation away from some stranded hookup wire and connect one end of it to the shield braid on one end of the L-Pad assembly. Lay the rest along the assembly for later.

18. Connect the remaining shield braids to the bare hookup wire by winding them tightly. Keep the connections as neat as possible and laid flat across the resistor assembly.

19. Solder the shield connections in place taking care not to melt the insulation as before.

20. Cut a strip of ordinary aluminum foil about half a meter long and about 4 cm wide. This will become our new shield. It will be connected to the shields in the cable by the bare hookup wire we’ve used to connect them together.

21. Starting at the end of the assembly away from the shield lead, wind a layer of foil around the assembly toward the shield lead. On each end of the assembly you want to cover about 5-10 mm of the existing cable so that the new shield overlaps the shield in the cable. When you reach that point on the end with the shield lead, fold the shield lead back over the assembly and the first layer of foil. Then, continue winding the foil around the assembly so that you make a second layer back toward where you started.

22. Continue winding the shield in this way back and forth until you run out of foil. Do this as neatly and tightly as possible so that the final assembly is compact and relatively smooth. You should end up with about 3-5 layers of foil with the shield lead between each layer. Finally, solder the shield lead to itself on each end of the shield and to the foil itself if possible.

23. Clip off any excess shield lead. Then push (DO NOT PULL) the large heat-shrink tubing over the assembly. This may take a little time and effort, especially if the heat-shrink tubing is a little narrow. It took me a few minutes of pushing and massaging, but I was able to get the final piece of heat-shrink tubing over the shield assembly. It should cover about an additional 1 cm of cable on each end. Heat it up with your hair dryer (or heat gun if you have it) and you’re done!

24. If you really want to you can do a final check with an ohmmeter to see that you haven’t shorted anything or pulled a connection apart. If your assembly process looked like my pictures then you should be in good shape. The expected readings are listed below, and a short sketch of where the numbers come from follows the list.

RCA tip to RCA tip should measure about 441K Ω (I got 436K).

RCA sleeve to RCA sleeve should measure 0 Ω (the shields are common).

RCA tip to RCA sleeve (same cable) should measure about 220.5K Ω (I got 218.2K).

RCA sleeve to 1/8th in sleeve should measure 0 Ω.

RCA Red tip to 1/8th in tip should be about 220K Ω.

RCA Red tip to 1/8th in ring should be about 1.1K Ω more than that (the path picks up both 560 Ω resistors).
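
If you’re curious where those expected readings come from, here’s a small sketch of the resistance arithmetic, assuming the finished cable matches the circuit described above (one 220K Ω series resistor per channel, one 560 Ω shunt per channel, and all of the shields tied together):

    R_SERIES = 220_000  # 220K ohm series resistor, one per channel
    R_SHUNT = 560       # 560 ohm shunt resistor, one per channel

    # RCA tip to RCA tip: down one channel's series and shunt resistors,
    # then back up the other channel's shunt and series (shields are common).
    print(2 * (R_SERIES + R_SHUNT))       # 441,120 ohms (about 441K)

    # RCA tip to its own sleeve: one series resistor, then the shunt to ground.
    print(R_SERIES + R_SHUNT)             # 220,560 ohms (about 220.5K)

    # RCA tip to the 1/8th in contact it feeds: just the series resistor.
    print(R_SERIES)                       # 220,000 ohms

    # RCA tip to the other 1/8th in contact: add the two shunts in series.
    print(R_SERIES + 2 * R_SHUNT)         # 221,120 ohms (about 1.1K more)

Small differences from these numbers (like my 436K and 218.2K readings) are just the resistors’ tolerance.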

Sep 062010
 

If you know me then you know that in addition to music, technology, and all of the other crazy things I do I also have an interest in cosmology and quantum mechanics. What kind of a Mad Scientist would I be without that?

Recently, while watching “Through the Wormhole” with the boys, I was struck by the apparent ratios between ordinary matter, dark matter, and dark energy in our universe.

Here is a link to provide some background: http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/

It seems that the ratio between dark matter and ordinary (observable) matter is about 5:1. That’s roughly the 80/20 rule common in statistics and many other rules of thumb, right?

Apparently the ratio between dark energy and all matter (dark or observable) is about 7:3. Here again is a fairly common ratio found in nature. For me it brings to mind (among other things) RMS calculations from my electronics work, where Vrms = 0.707 * Vp.
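
Just to pin those ratios down, here’s the quick arithmetic using the approximate composition figures commonly cited (and roughly what the NASA page above reports): about 68% dark energy, 27% dark matter, and 5% ordinary matter. Treat the exact percentages as assumptions; only the rough ratios matter here.

    dark_energy = 0.68
    dark_matter = 0.27
    ordinary_matter = 0.05

    # Dark matter vs. ordinary matter: roughly the 5:1 split noted above.
    print(dark_matter / ordinary_matter)                  # about 5.4

    # Dark energy vs. all matter (dark plus ordinary): roughly 7:3.
    print(dark_energy / (dark_matter + ordinary_matter))  # about 2.1, i.e. 68:32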

There are also interesting musical relationships, etc. The only thing interesting about any of those observations is that they stood out to me and nudged my intuition toward the following thought:

What if dark energy and dark matter are really artifacts of ordinary reality and quantum mechanics?

If you consider the existence of a quantum multiverse then there is the “real” part of the universe that you can directly observe (ordinary matter); there is the part of reality that you cannot observe because it is bound to collapsed probability waves representing events that did not occur in your reality but did occur in alternate realities (could this be dark matter?); and there is the part of the universe bound up in wave functions representing future events that have yet to be collapsed in all of the potential realities (could this be dark energy?).

Could dark matter represent the gravitational influence of alternate realities and could dark energy represent the universe expanding to make room for all future potentialities?

Consider causality in a quantum framework:

When two particles interact you can consider that they observed each other, thus collapsing their wave functions. Subsequent events, from the perspective of those particles and of the particles that later interact with them, record the previous interactions as history.

Another way to say that is that the wave functions of the particles that interacted have collapsed to represent an event with 100% probability (or close to it) as it is observed in the past. These historical events along with the related motions (energy) that we can predict with very high degrees of certainty make up the observable universe.

The alternative realities that theoretically occurred in events we cannot observe (but were predicted by wave functions now collapsed) might be represented by dark matter in our universe.

All of the possible future events that can be reasonably predicted are represented by wave functions in the quantum field. Experiments in quantum mechanics show that these potentials have real, measurable consequences in our observable universe, generally through effects such as quantum entanglement and interference.

Could it be that dark energy is bound up in (or at least strongly related to) the potentials represented by these wave functions?

Consider that the vast majority of particle interactions in our universe ultimately lead to a larger number of potential interactions. There is typically a one-to-many relationship between any present event and possible future events. If these potential interactions ultimately occur in a quantum multiverse then they would represent an expanded reality that is mostly hidden from view.

Consider that the nature of real systems we observe is that they tend to fall into repeating patterns of causality such as persistent objects (molecules, life, stars, planets, etc)… this tendency toward recurring order would put an upper bound on the number of realities in the quantum multiverse and would tend to stabilize the ratio of alternate realities to observable realities.

Consider that the number of potential realities derived from the wave functions of the multiverse would have a similar relationship and that this relationship would give rise to a similar (but likely larger) ratio as we might be seeing in the ratio of dark energy to dark matter.

Consider that as our universe unfolds, the complexity embodied in the real and potential realities also expands. Therefore, if these potentialities are related to dark matter and dark energy, and if dark energy is bound to the expansion of the universe in order to accommodate these alternate realities, then we would expect to see our universe expand according to the complexity of the underlying realities.

One might predict that the expansion rate of the universe might be related mathematically to the upper bound of the predictable complexity of the universe at any point in time.

The predictable complexity in the universe would be a function of the kinds of particles and their potential interactions as represented by their wave functions with the upper limit being defined as the potentiality horizon.

Consider that each event gives rise to a new set of wave functions representing all possible next events. Consider that if we extrapolate from those wave functions a new set representing all of the possible events after those, and so on, the amplitudes of the wave functions at each successive step would be reduced. The amplitudes of these wave functions would continue to decrease as we move our predictions into the future until no wave function has any meaningful amplitude. This edge of predictability is the potentiality horizon.

The potentiality horizon is the point in the predictable future where the probability of any particular event becomes effectively equal to the probability of any other event (or non event). At this point all wave functions are essentially flat — this “flatness” might be related to the Planck constant in such a way that the amplitude of any variability in any wave function is indistinguishable from random chance.

Essentially all wave functions at the potentiality horizon disappear into the quantum foam that is the substrate of our universe. At this threshold no potential event can be distinguished from any other event. If dark energy is directly related to quantum potentiality then at this threshold no further expansion of the universe would occur. The rate of expansion would be directly tied to the rate of expansion of quantum potentiality and to the underlying complexity that drives it.

So, to summarize:

What if dark matter and dark energy represent the matter and energy bound up in alternate realities and potential realities in a quantum multiverse?

If dark matter represents alternate realities invisible to us except through the weak influence of their gravity, and if dark energy represents the expansion of the universe in order to accommodate the wave functions describing possible future events in the quantum field for all realities (observable and unobservable), with an upper bound defined by the potentiality horizon, then we might predict that the expansion rate of the universe can be related to its inherent complexity at any point in time.

We might also predict that the flow of time can be related to the inherent complexity of the wave functions bound in any particular system such that a lower rate of events occurs when the inherent complexity of the system is reduced.

… well, those are my thoughts anyway 😉

Jul 032010
 

I’m not one of “those” guys, really. You know the ones — the zealots who claim that their favorite OS or application is and will be forever more the end-all-be-all of computing.

As a rule I recommend and use the best tool for the job, whatever that might be. My main laptop runs Windows XP, and my family and customers use just about every recent version of Windows or Linux. In fact, my own servers are a mix of Win2k*, RedHat, CentOS, and Ubuntu; my other laptop runs Ubuntu; and I switch back and forth between MSOffice and OpenOffice as needed.

Today surprised me though. I realized that I had become biased against Ubuntu in a very insidious way: my expectations were simply not high enough. What’s weird about that is that I frequently recommend Ubuntu to clients and peers alike, and my company (MicroNeil in this case) even helps folks migrate to it and otherwise deploy it in their infrastructure! So how could I have developed my negative expectations?

I have a theory that it is because I find I have to defend myself from looking like “one of those linux guys” pretty frequently when in the company of my many “Windows-Only” friends and colleagues. Then there are all those horror stories about this or that problem and having to “go the long way around” to get something simple to work. I admit I’ve been stung by a few of those situations in the past myself.

But recently, not so much! Ubuntu has worked well in many situations and, though we tend to avoid setups that might become complicated, we really don’t miss anything by using it – and neither do the customers we’ve helped to migrate. On the contrary, in fact, we have far fewer problems with our Ubuntu customers than with our Windows friends.

Today’s story goes like this.

We have an old Toshiba laptop that we use for some special tasks. It came with Windows XP pro, and over the years we’ve re-kicked it a few times (which is sadly still a necessary evil from time to time on Windows boxen).

A recent patch caused this box to become unstable and so we were looking at having to re-kick it again. We thought we might take the opportunity to upgrade to Windows 7. We wanted to get it back up quickly so we hit the local store and purchased W7pro.

The installation was straightforward, and since we already have another box running W7 our expectations were that this would be a non-event and all would be happy shortly.

But, no. The first thing to cause us trouble was the external monitor. Boot up the laptop with the external monitor attached and that was all you could see; the laptop’s screen was not recognized. Boot up without the external monitor and the laptop’s was the only display that would work. I spent some time searching various support forums for a solution and basically just found complaints without solutions.

After trying several of the recommended solutions without luck I was ready to quit and throw XP back on the box. Instead I followed a hunch and forced W7 to install all of the available patches just to see if it would work. IT DID!

Or, it seemed like it did. After the updates I was able to turn on the external display and set up the extended desktop… I was starting to feel pretty good about it. So I moved on to the printer. (more about the display madness later)

We have a networked HP2840 Printer/Scanner. We use it all the time. Joy again, I discovered: the printer was recognized and installed without a hitch. Printed the test page. We were going to get out of this one alive (still had some day left).

Remember that scene in The Perfect Storm? They’re battered and beaten and nearly at the end. The sky opens up just a bit and they begin to see some light. It seems they’ve made it and they’re going to survive. Then the sky closes up again and they know they are doomed.

W7 refused to talk to the scanner on the HP2840. That’s a game changer in this case — the point of this particular laptop is accounting work that requires frequent scanning and faxing so the scanner on the HP2840 simply had to work or we would have to go back to XP.

Again I searched for solutions and found only unsolved complaints. Apparently there is little to no chance HP is going to solve this problem for W7 any time soon — at least that is what is claimed in the support forums. There are several workarounds but I was unable to make them fly on this box.

Remember the display that seemed to work? One of the workarounds for the scanner required a reboot. After the reboot the display drivers forgot how to talk to the external display again and it wouldn’t come back no matter how much I tweaked it!

Yep, like in The Perfect Storm, the sky had closed and we were doomed. Not to mention most of the day had already evaporated on this project, and that too was ++ungood.

We decided to punt. We would put XP Pro back on the box and go back to what we knew worked. I suggested we might try Ubuntu, but that was not a popular recommendation under the circumstances… Too new an idea, and at this point we really just wanted to get things working. We didn’t want to open a new can of worms trying to get this to work again with the external monitor, and the printer, and the scanner, and…

See that? There it is, and I bought into it even though I knew better. We dismissed the idea of using Ubuntu because we expected to have trouble with it, but we shouldn’t have!

Nonetheless… that was the decision, and so Linda took over and started to install XP again… but there was a problem. XP would not install because W7 was already on the box (the OS version on the hard drive was newer). So much for simple.

Back in the day we would simply wipe the partition and start again; these days that’s not so easy… But, it’s easy enough. I grabbed an Ubuntu disk and threw it into the box. The idea was to let the Ubuntu install repartition the drive and then let XP have at it. Surely the XP install would have no qualms about killing off a Linux install, right?!

In for a penny, in for a pound.

As the Ubuntu install progressed past the repartitioning I was about to kill it off and throw the XP disk in… but something stopped me. I couldn’t quite bring myself to do it… so I let it go a little longer, and then a little longer, and a bit more…

I thought to myself that if I’ve already wasted a good part of the day on this I might as well let the Ubuntu install complete and get a feel for how much trouble it will be. If I ran into any issues I would throw the XP disk in the machine and let it rip.

I didn’t tell Linda about this though — she would have insisted I get on with the XP install, most likely. After all there was work piled up and this non-event had already turned into quite a time waster.

I busied myself on the white-board working out some new projects… and after a short time the install was complete. It was time for the smoke test.

Of course, the laptop sprang to life with Ubuntu and was plenty snappy. We’ve come to expect that.

I connected the external monitor, tweaked the settings, and it just plain worked. I let out a maniacal laugh which attracted Linda from the other end of the MadLab. I was hooked at this point and so I had to press on and see if the printer and scanner would also work.

It was one of those moments where you have two brains about it. You’re nearly convinced you will run into trouble, but the maniacal part of your brain has decided to do it anyway and let the sparks fly. It conjured up images of lightning leaping from electrodes, maniacal laughter, and a complete disregard for the risk of almost certain death in the face of such a dangerous experiment! We pressed on…

I attempted to add the printer… Ubuntu discovered the printer on the network without my help. We loaded up the drivers and printed a test page. More maniacal laughter!

Now, what to do about the scanner… surely we were doomed… but the maniacal part of me prevailed. I launched Simple Scan and it already knew about the HP2840. Could it be?! I threw the freshly printed test page into the scanner and hit the button.

BEAUTIFUL!

All of it simply worked! No fuss. No searching in obscure places for drivers and complicated workarounds. It simply worked as advertised right out of the box!

Linda was impressed, but skeptical. “One more thing,” she said. “We have to map to the SAN… remember how much trouble that was on the other W7 box?” She was right; that wasn’t easy or obvious on W7 because the setup isn’t exactly what W7 wants to see, and so we had to trick it into finding and connecting to the network storage.

I knew better at this point though. I had overcome my negative expectations… With a bit of flare and confidence I opened up the network places on the freshly minted Ubuntu laptop and watched as everything popped right into place.

Ubuntu to the Rescue

In retrospect I should have known better from the start. It has been a long time since we’ve run into any trouble getting Ubuntu (or CentOS, or RedHat…) to do what we needed. I suppose that what happened was that my experience with this particular box primed me to expect the worst and made me uncharacteristically risk averse.

  • XP ate itself after an ordinary automatic update.
  • W7 wouldn’t handle the display drivers until it was fully patched.
  • W7 wouldn’t talk to the HP2840 scanner.
  • Rebooting the box made the display drivers wonky.
  • XP wouldn’t install with W7 present.
  • I’d spent hours trying to find solutions to these only to find more complaints.
  • Yikes! This was supposed to be a “two bolt job”!!!

Next time I will know better. It’s time to re-think the expectations of the past and let them go — even (perhaps especially) when they are suggested by circumstances and trusted peers.

Knowing what I know now, I wish I’d started with Ubuntu and skipped this “opportunity for enlightenment.” On the other hand, I learned something about myself and my expectations and that was valuable too, if a bit painful.

However we got here it’s working now and that’s what matters 🙂

Ubuntu to the rescue!