Madsci

Husband, Father, Musician, Engineer, Teacher, Thinker, Pilot, Mad, Scientist, Writer, Philosopher, Poet, Entrepreneur, Busy, Leader, Looking for ways to do something good in a sustainable way,... to be his best,... and to help others to do the same. The universe is a question pondering itself... we are all a part of the answer.

Dec 04 2025
 

I had a few thoughts about interfaces between dimensionalities as they relate to consciousness. I’m using the term “dimensionalities” to avoid “universes” because if the concept of a “universe” includes all things then dimensionalities would exist within the universe even if each dimensionality might be sufficiently rich and complex to seem as a universe unto itself… and even though some may be completely cut off from others or otherwise so weakly connected and graphically distant that they might as well be separate.

Networks of small worlds

In order to traverse from one dimensionality to another there must be some kind of interface that presents the path to one dimensionality from another. It is also possible that this is a one-way interface (like a black hole is in that it only allows things to enter it as seen from our particular perspective).

It is also possible that these interfaces are only visible under some circumstances… again, using black holes, it is surprisingly difficult to navigate any particle (or collection thereof) so that it actually enters a black hole. Space and time are so distorted in the vicinity of a black hole that objects headed in that direction are far more likely to be deflected or fall into orbit than to actually cross the event horizon… but I digress…

In sci-fi parlance analogs of such interfaces between dimensionalities may be the monolith from 2001, the T.A.R.D.I.S from Dr Who, various dimensional rifts and portals from various stories (Time Bandits, etc…), and so on.

In the concept of interplexing that I described before as a way to access subspace, this might be any portal that opens a singularity with some imposed dimensionality such that the imposed dimensionality acts like a cryptographic key whereby only interplexing beacons and portals that define compatible dimensionalities can access each other through subspace.

This is analogous to how encrypted spread spectrum radio signals can appear as diffuse background noise indistinguishable from the ambient noise floor unless processed with the correct cryptographic key… and how applying this key has the effect of bringing the signal of interest into focus while spreading all other signals (including any that might interfere) until those are indistinguishable from background noise. (I note that in some ways this may be a model for how “attention” is achieved in the brain: “tuning” parts of the brain to be sensitive to some firing patterns simultaneously makes them insensitive to other patterns.)

Picture a mirror so thin that standing beside it you cannot see it at all; but standing in front of it you can see a whole world into which you might walk if you could step through that “looking glass.”

That is all background however… The thought that I just had is that there is a similar real-world analog that may be a more accessible description, using the concept of radio (which I chose in this case because it is familiar, not because the electromagnetic spectrum is particularly special).

In this construct, consider that some entity wishes to observe or in other ways interact with another within the dimensionality of the EM field. In the simplest case, the entities would have to have very similar wavelengths.

For example, transmitters and receivers tuned to the same frequency with antennas also tuned to the same frequency are separate systems (networks of components) configured to be sensitive to each other. Parts of them resonate with each other.

Resonance as access

At their resonant frequency, the peaks and troughs of waves in the same space can match (once the phases are aligned like the meshing of gears). These waves are then coherent and the two entities, represented here abstractly as waves, will resonate with and thereby influence each other. (I want to convey this in a way that brings to mind a kind of dance where each entity adjusts its shape to follow the curves of the other and to relate that dynamic interaction to the simpler model of radio waves and resonant circuits.)

However, if the entities are tuned to sufficiently different frequencies (having sufficiently different wavelengths) then they cannot influence each other- they cannot communicate. One may not even notice the other exists no matter how close they may be to each other in other ways. (Think of this also like the difference between laser light which forms a tight, coherent beam of a single color and broad spectrum ambient light which has no particular color and spreads out in all directions. You could stand right next to a beam of laser light and it might pass right by you without you ever noticing it.)

Bees and deep space telescopes can see colors you can’t; but they are there. Your dog can sense smells that you can’t; but they are there. Your cat can hear tiny sounds you would never notice. Your car can run much faster than you; and your airplane can fly; and your boat can sail the seas. We make all of these things extensions of ourselves in order to gain access to the world in ways that otherwise are beyond us.

Shapes and dancing

Imagine the bumps of a high frequency entity up against those of a low frequency entity. To the high frequency entity the low frequency entity is stretched out so much that there don’t appear to be any bumps at all– it’s just a flat, featureless surface. (I am reminded of the description of the monolith which had a surface described as “totally smooth.”)

To the low frequency entity, the high frequency entity likewise appears totally smooth. Think of how smooth a glass feels under your fingers. There are uncountable numbers of individual atoms there at the surface, but you sense no features. This is because the features of your fingers (finger prints etc) are not small enough (high frequency enough) to fit into the ridges that show up between the individual atoms. Crack the glass, however, and you may be able to feel the crack because the gap caused by that crack becomes large enough for the features of your finger to “catch” on it.

This is a usable model for grasping the basics, but we need another model in order to address the complexity of the real world.

Graph all the things

It has been said that the electrons in Feynman diagrams are carriers of causality. So, at a quantum level, the universe functions as a causal graph of interactions. This has some strong alignment with Wolfram’s multi-way computational universe model too.

Turning to graphs, especially the causal computational graphs that underpin all things, it is true that each entity in any given dimensionality can be (and likely is) a persistent cluster of smaller entities with a specific arrangement. Essentially, all entities are subgraphs of larger entities up to the scale of the universe; and each is composed of entities that are subgraphs of itself (and possibly of other networks, as this is not a strictly hierarchical system).

Such larger entities (systems) are then resonant at multiple wavelengths. Not only that but they are also sensitive to particular alignments (polarizations) of and phase relationships between multiple wavelengths; and these sensitivities change as they are affected by their interactions. This is how, essentially, everything non-trivial in the universe can and does perform computations of one kind or another.

This creates an interesting dynamic between complex entities. They can only interact to the extent that various components of them can align in timing, phase, frequency, and polarization; and each interaction might change some aspects of this alignment. (Effectively, all of the degrees of freedom occupied by the entity might be at play.)

Each entity that has influence communicates. Each entity that communicates has the capacity to configure parts of itself in ways that can be aligned to parts of the other entity; or to break that alignment. These alignments and the ways in which they change become an interface between these entities.

Each such interface has unique characteristics that place limits on the kind (amount and character) of influence each entity can have on the other. These characteristics are also influenced by the ways in which the interface connects to the remainder of each entity. Since the remainder of each entity is unique, the connectivity between the interface and the remainder of each entity is yet another interface.

So far, I’ve described simple direct (first order) interfaces (although already that description is surprisingly complex.) However, communication as we generally experience it is still much more complex and usually requires networks of mediating devices.

If we were talking together in a coffee shop, for example, we would not generally be able to place thoughts directly into each other’s minds. Instead, each of us would make sounds by interfacing with the air thus changing our shared environment. Those sounds would travel through that environment and eventually reach our ears influencing that part of our interface with the shared environment.

The shared environment itself is a network of entities interacting with each other. For example, high pressure and low pressure clusters of air molecules interacting with each other to propagate sound waves; walls and other objects reflecting and absorbing sound waves of various wavelengths; and other objects and entities adding their own sounds to the mix.

Internally we would both have structures (complex entities themselves) which would interpret, encode, and decode these sounds using the context embedded in that “machinery” in order to moderate any changes to the state of our minds. Some of that “context” would be fine tuning of those structures to be sensitive to the sounds of each other’s voices while being insensitive to distortions and to other added sounds that don’t hold our interest.

An emergent view of panpsychism?

You can see here within each of us a network of highly integrated entities (subsystems) that ultimately stretch out into the environment allowing our individual consciousness (a particular perspective) to colonize that extended information space.

You can also see that part of that extended information space includes the interactions of other consciousnesses that have colonized that space with us. This necessarily includes consciousness that we might not even recognize as such; but the influence of these entities is present nonetheless and the rich composite of these interactions informs our expectations and otherwise impacts the state of every entity in that network.

All of these networks are usually abstracted away into obscurity such that most folks generally ignore them; but at some levels of abstraction you can re-imagine these complex mechanisms as portals, interfaces, doorways, blue police boxes, monoliths, that allow us to transport parts of ourselves so that they may roam around in other worlds… worlds that are shared environments with other entities of various kinds… worlds that at any moment we might discover were right next to us all along; or may disappear at any moment never to be seen again.

This forms an infinite fractal landscape where various entities live and interact at all scales. If you’ve ever watched a video that zooms into the Mandelbrot set you have seen this never ending complexity and self similarity play out; and you have an inkling of how complexity allows for the infinite expansion of these networks of interaction at all scales in all degrees of freedom. Smaller entities (structures) living within others; and entities of all sizes composing self-similar networks at larger scales and smaller ones.

Each of these networks, computationally, form pockets of consciousness that colonize the accessible information space. All of these may be aware of each other to greater or lesser degrees; and may influence each other similarly, with or without intention, to the extent that they have any suitable sensitivity or interfaces to each other.

Consciousness arises from communication through graphs of interfaces within these shared worlds; passing between and within these entities and the other entities around them; and through the complex restructuring of these interfaces that encodes the experiences each construct has – imprinting those experiences for some span and propagating those impressions through the network within and beyond.

So consciousness is inherently diffuse, non-local and entangled even while it appears localized to each of us. We may “possess” a focal point of view that we may call “our own” consciousness; but that locality, while functional for our personal and collective narratives, is largely a convenient illusion. We are each the tip of a very large iceberg; and the part we don’t see, also being water in some form, stretches out to the infinity of the ocean in which we float.

The universe is a question pondering itself and we are each a part of the answer.

Sep 10 2025
 

I’ve been developing a template for building cross-platform, hybrid cloud ready applications. The key concepts are:

  • Use a browser for the GUI because browsers are robust, flexible, and ubiquitous.
  • Use a programming language that is cross platform so there is only one code-base for all targets.
  • Include in “cross-platform” the idea that the application can easily scale from a single desktop to a local LAN (like a logging program on Field Day, or a federated SDR control application) all the way up to “the cloud” be that home-lab, private, hybrid, or public “cloud.”
  • Include an open API so that the application can be extended by other interfaces, and automated via other applications and scripts.

The keys to this solution are: Chrome (or Chrome based browsers) and the Go programming language. Chrome based browsers are essentially “the standard” on the ‘web; and Go is fast, flexible, cross-platform, and contains in its standard libraries all of the machinery you need for creating efficient micro-services, APIs, and web based applications. (You can create a basic web service in just a few lines of code!)
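For example, a complete (if trivial) web service using only the Go standard library might look something like this. This is a minimal sketch to show the point, not part of GoUI itself:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// A single handler, listening only on the local machine.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from a tiny Go web service!")
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}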

Working on the GoUI framework, adding a “Protect” middleware for basic rate limiting and dynamic allow/block listing.

It’s good practice to presume that any application that will live “online” will also be abused and so should have some defenses against that. A good starting point is rate limiting and source limiting… For example a desktop application that runs its GUI in the browser should only listen to the local machine; and even if that’s the only device that ever connects to the application some sanity checks should be in place about those connections. To be sure, if you also want such an application to be shareable on a LAN or scalable up to the cloud it must be even more robust…

While researching this I discovered that all of the rate limiting strategies I’ve come upon are quite literal and often complicated ways of actually counting requests/tokens in various buckets and then adding and removing counts from these buckets.

I think these miss the point… which is “rate” limiting. That is — how fast (at what rate) and from where are requests coming in. Token counting mechanisms also struggle with the bursty nature of normal requests, where loading a page might immediately trigger half a dozen or more requests associated with the UI elements on that page.

I’m going to simplify all of that by inverting the problem. Rather than count events and track some kind of sliding window or bucket scheme I’m going to measure the time between requests and store that in a sliding weighted average. Then, for any particular user I only need two numbers — the time of the last request (to compute how long since the previous request) and the running average.

Then the tuning parameters are fairly simple… The weights for the sliding average, the threshold for limiting, and a TTL so that users that go away for a while are forgotten.

Say the rate you want is at most 10 requests per second. That’s a simple enough fraction 10/1 … which, when inverted, means that your target is 1/10th of a second on average between each request.

Average_Weight = 10
Request_Weight = 1
Limit = 100ms (1/10th of a second is 100ms)

Suppose that when a new user arrives (or returns after being forgotten) we give them the benefit of the doubt on the first request:

Average = 1000ms
TimeSinceLastRequest = 1000ms
Average = ((Average * 10) + (TimeSinceLastRequest * 1)) / 11
1000 = (10000 + 1000) / 11 so no change

Then the page they loaded triggers 5 more requests immediately… I’ll show fractions to spare you the weird upshift into integer space.

909.091 = (10000 + 0) / 11
826.456 = (9090.91 + 0) / 11
751.315 = (8264.56 + 0) / 11
683.013 = (7513.15 + 0) / 11
620.921 = (6830.13 + 0) / 11

620 is still bigger than (slower than) 100ms so they’re still good and don’t get limited… Even better, when they wait a second (or so) before making a new request (like an ordinary user might) they get credit for that.

655.383 = (6209.21 + 1000) / 11 (see, the average went up…)
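As an aside, here is a minimal Go sketch of that update rule. It’s my own illustration of the idea rather than the actual GoUI “Protect” code, and names like visitor, averageWeight, and limit are assumptions:

package protect

import "time"

// visitor holds the only two numbers we need to keep per requester.
// New visitors are seeded with last = now and avg = one second
// (the "benefit of the doubt" described above).
type visitor struct {
	last time.Time     // time of the previous request
	avg  time.Duration // weighted running average of the interval between requests
}

const (
	averageWeight = 10                     // weight given to the existing average
	requestWeight = 1                      // weight given to the newest interval
	limit         = 100 * time.Millisecond // 10 requests per second target
)

// update folds the newest inter-request interval into the running average
// and reports whether this requester is now, on average, faster than the limit.
func (v *visitor) update(now time.Time) (limited bool) {
	interval := now.Sub(v.last)
	v.last = now
	v.avg = (v.avg*averageWeight + interval*requestWeight) / (averageWeight + requestWeight)
	return v.avg < limit
}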

Now see what happens when an attacker launches a bot to abuse the service — maybe hitting it once every 10ms or so.

1000 = (10000 + 1000) / 11 (benefit of the doubt on the first hit)
910.000 = (10000 + 10) / 11
828.181 = (9100.00 + 10) / 11
753.802 = (8281.81 + 10) / 11
686.183 = (7538.02 + 10) / 11
624.712 = (6861.83 + 10) / 11
568.829 = (6247.12 + 10) / 11
518.027 = (5688.29 + 10) / 11
471.842 = (5180.27 + 10) / 11
429.857 = (4718.42 + 10) / 11
391.688 = (4298.57 + 10) / 11
356.989 = (3916.88 + 10) / 11
325.445 = (3569.89 + 10) / 11
296.768 = (3254.45 + 10) / 11
270.698 = (2967.68 + 10) / 11
246.998 = (2706.98 + 10) / 11
225.453 = (2469.98 + 10) / 11
205.866 = (2254.53 + 10) / 11
188.060 = (2058.66 + 10) / 11
171.873 = (1880.60 + 10) / 11
157.757 = (1718.73 + 10) / 11
143.779 = (1577.57 + 10) / 11
131.618 = (1437.79 + 10) / 11
120.561 = (1316.18 + 10) / 11
110.510 = (1205.61 + 10) / 11
101.373 = (1105.10 + 10) / 11
93.066 = (1013.73 + 10) / 11 (rate limited at faster than 100ms !!)

After 27 requests (270 ms) the bot gets a 429 from the server (Too Many Requests). If it’s silly enough to continue, it gets worse. Let’s say that we have a ban threshold at 50ms (by default, anything twice as fast as the rate limit):

85.515 = (930.66 + 10) / 11
78.649 = (855.15 + 10) / 11
72.409 = (786.49 + 10) / 11
66.735 = (724.09 + 10) / 11
61.578 = (667.35 + 10) / 11
56.889 = (615.78 + 10) / 11
52.626 = (568.89 + 10) / 11
48.751 = (526.26 + 10) / 11 (gets 418 response (I am a teapot))

At this point the IP/session/tracker is added to the block-list, probably with an extended expiration (maybe 10 minutes, maybe the next day, maybe forever until administratively removed) depending upon policy, etc.

Any further requests while block-listed will receive:

503 – I am a combined coffee/tea pot that is temporarily out of coffee (Hyper Text Coffee Pot Control Protocol)

Here is a screen shot of this up and running… In this case Seinfeld fans will recognize the 503 response “No soup for you!”; and the automated block-list entry lasts for 10 minutes…

In the code above, if you look closely, you’ll see that I use a cycle stealing technique to do maintenance on the block/allow list and request rate tracking. This avoids the complexity of having a background process running in another thread. The thinking is: Since I already have the resources locked in a mutex I might as well take care of everything at once; and if there are no requests coming in then it really doesn’t matter if I clean up the tables until something happens… This is simpler than setting up a background process that runs in its own thread, has to be properly started up and shut down, and must compete for the mutex at random times.
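For illustration, the cycle stealing idea can be sketched like this. Again, this is my own simplified rendition rather than the actual GoUI code; the Protect struct and its fields are assumptions:

package protect

import (
	"sync"
	"time"
)

// Protect is a simplified stand-in for the middleware state.
type Protect struct {
	mu        sync.Mutex
	visitors  map[string]*visitor  // request-rate tracking, keyed by IP or session
	blocked   map[string]time.Time // block-list entries and their expirations
	ttl       time.Duration        // forget requesters that have been quiet this long
	lastSweep time.Time
}

// maintain is called from the request path while p.mu is already held,
// "stealing" a few cycles to expire stale entries instead of running a
// separate maintenance goroutine that would have to be started, stopped,
// and would compete for the mutex at random times.
func (p *Protect) maintain(now time.Time) {
	if now.Sub(p.lastSweep) < p.ttl {
		return // nothing to do yet; get back to the request
	}
	p.lastSweep = now
	for key, v := range p.visitors {
		if now.Sub(v.last) > p.ttl {
			delete(p.visitors, key) // forgotten; they start over with the benefit of the doubt
		}
	}
	for key, expires := range p.blocked {
		if now.After(expires) {
			delete(p.blocked, key) // block-list entry has expired
		}
	}
}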

The configuration so far is very straight-forward…

When a requester goes away for a while and is forgotten, they start over with a “benefit of the doubt” rate of 1 request per second… and the system tracks them from there. I created a UI widget to inspect the state of the rate limiter so I could watch it work…

Aug 16 2025
 

It had been a bad year for computing and storage in the lab. On the storage side I had two large Synology devices fail within weeks of each other. One was supposed to back up the other. When they both failed almost simultaneously, their proprietary hardware made it prohibitively difficult and expensive to recover and maintain. That’s a whole other story that perhaps ends with replacing a transistor in each… But, suffice it to say, I’m not happy about that and the experience reinforced my position that proprietary hardware of any kind is not welcome here anymore… (so if it is here, it’s because there isn’t a better option, and such devices from now on are always looking for non-proprietary replacements…)

On the computing side, the handful of servers I had were all of: power hungry, lacking “grunt,” and a bit “long in the tooth.” Not to mention, the push in my latest research is strongly toward increasing the ratio of Compute to RAM and Storage so any of the popular enterprise trends toward ever larger monolithic devices really wouldn’t fit the profile.

I work increasingly on resilient self organizing distributed systems; so the solution needs to look like that… some kind of “scalable fabric” composed of easily replaceable components, no single points of failure (to the extent possible) but also with a reasonably efficient power footprint.

I looked at a number of main-stream high performance computing scenarios and they all rubbed me the wrong way — the industry LOVES proprietary lock-in (blade servers); LOVES computing “at scale” which means huge enterprise grade servers supported by the presumption of equally huge infrastructure and support budgets (and all of the complexity that implies). All of this points in the wrong direction. It’s probably correct for the push into the cloud where workloads are largely web servers or microservices of various generic types to be pushed around in K8s clusters and so forth. BUT, that’s not what I’m doing here and I don’t have those kinds of budgets… (nor do I think anyone should lock themselves into that kind of thinking without looking first).

I have often looked to implement SCIFI notions of highly modular computing infrastructure… think: banks of “isolinear chips” on TNG, or the glowing blocks in the HAL 9000. The idea being that the computing power you need can be assembled by plugging in a generic collection of self-contained, self-configuring system components that are easily replaced and reasonably powerful; but together can be merged into a larger system that is resilient, scalable, and easy to maintain. Perfect for long space journeys. The hardware required for achieving this vision has always been a bit out of reach up to now.

Enter the humble NUC. Not a perfect rendition of the SCIFI vision; but pretty close given today’s world.

I drew up some basic specifications for what one of these computing modules might feature using today’s hardware capabilities and then went hunting to see if such a thing existed in real life. I found that there are plenty of small self-contained systems out there; but none of them are perfect for the vision I had in mind. Then, in the home-lab space, I stumbled upon a rack component designed to build clusters of NUCs… this started me looking at whether I could get NUCs configured to meet my specifications. It turns out that I could!

I reached out to my long-time friends at Affinity Computers (https://affinity-usa.com/). Many years ago I began outsourcing my custom hardware tasks to these folks – for all but the most specialized cases. They’ve consistently built customized highly reliable systems for me; and have collaborated on sourcing and qualifying components for my crazy ideas (that are often beyond the edge of convention). They came through this time as well with all of the parts I needed to build a scalable cluster of NUCs.

System Design:

Overall specs: 128 cores, 512G RAM, 14T Storage

Each generic computing device has a generic interface connecting only via power and Ethernet. Not quite like sliding a glowing rectangle into a backplane; but close enough for today’s standards.

Each NUC has 16 cores, 64G of RAM, two 1G+ Network ports, and two 2T SSDs.

One network port connects the cluster to other systems and to itself.

The other network port connects the cluster ONLY to itself for distributed storage tasks using CEPH.

One of the SSDs is for the local OS.

The other SSD is part of a shared storage cluster.

Each device is a node in a ProxMox cluster that implements CEPH for shared storage.

Each has its own power supply.

There are no single points of failure, except perhaps the network switches and mains power.

Each device is serviceable with off-the-shelf components and software.

Each device can be replaced or upgraded with a similar device of any type (NUC isn’t the only option.)

Each device is a small part of the whole, so if it fails the overall system is only slightly degraded.

Initial Research:

Before building a cluster like this (not cheap; but not unlike a handful of larger servers), I needed to verify that it would live up to my expectations. There were several conditions I wanted to verify. First, would the storage be sufficiently resilient (so that I would never have to suffer the Synology fiasco again)? Second, would it be reasonable to maintain with regard to complexity? Third, would I be able to distribute my workloads into this system as a “generic computing fabric” as increasingly required by my research? Fourth… fifth… etc. There are many things I wanted to know… but if the first few were satisfied the project would be worth the risk.

I had a handful of ACER Veriton devices from previous experiments. I repurposed those to create a simulation of the cluster project. One has since died… but two of the three are still alive and part of the cluster…

I outfitted each of these with an external drive (to simulate the second SSD in my NUCs) and loaded up ProxMox. That went surprisingly well… then I configured CEPH on each node using the external drives. That also went well.

I should note here that I’d previously been using VMWare and also had been using separate hardware for storage. The switch to ProxMox had been on the way for a while because VMWare was becoming prohibitively expensive, and also because ProxMox is simply better (yep, you read that right. Better. Full stop.)

Multiple reasons ProxMox is better than other options:

  • ProxMox is intuitive and open.
  • ProxMox runs both containers and VMs.
  • CEPH is effectively baked-into ProxMox as is ZFS.
  • ProxMox is lightweight, scalable, affordable, and well supported.
  • ProxMox supports all of the virtual computing and high availability features that matter.

Over the course of several weeks I pushed and abused my tiny ACER cluster to simulate various failure modes and workloads. It passed every test. I crashed nodes with RF (we do radio work here)… reboot to recover, no problem. Pull the power cable, reboot to recover. Configure with bad parts, replace with good parts, recover, no problem. Pull the network cable, no problem; plug it back in, all good. Intentionally overload the system with bad/evil code in VMs and containers, all good. Variations on all of this kind of chaos both intended and by accident… honestly, I couldn’t kill it (at least not in any permanent way).

I especially wanted to abuse the shared storage via CEPH because it was a bit of a non-intuitive leap to go from having storage in a separate NAS or SAN to having it fully integrated within the computing fabric.

I had attempted to use CEPH in the past with mixed results – largely because it can be very complicated to configure and maintain. I’m pleased to say that these problems have been all but completely mitigated by ProxMox’s integration of CEPH. I was able to prove the integrated storage paradigm is resilient and performant even in the face of extreme abuse. I did manage to force the system into some challenging states, but was always able to recover without significant difficulty; and found that under any normal conditions the system never experienced any notable loss of service nor data. You gotta love it when something “just works” tm.

Building The Cluster:

Starting with the power supply. I thought about creating a resilient, modular, battery backed power supply with multiple fail-over capabilities (since I have the knowledge and parts available to do that); but that’s a lot of work and moves further away from being off-the-shelf. I may still do something along those lines in order to make power more resilient, perhaps even a simple network of diode (mosfet) bridges to allow devices to draw from neighboring sources when individual power supplies drop out, but for now I opted to simply use the provided power supplies in a 1U drawer and an ordinary 1U power distribution switch.

The rack unit that accommodates all of the NUCs holds 8 at a time in a 3U space. Each is anchored with a pair of thumb screws. Unfortunately, the NUCs must be added or removed from the back– I’m still looking for a rack mount solution that will allow the NUCs to pull out from the front for servicing as that would be more convenient and less likely to disturb other nodes [by accident]… but the rear access solution works well enough for now if one is careful.

Each individual NUC has fully internal components for reliability’s sake. A particular sticking point was getting a network interface that could live inside of the NUC as opposed to using a USB based Ethernet port as would be common practice with NUCs. I didn’t want any dangling parts with potentially iffy connectors and gravity working against them; and I did want each device to be a self contained unit. Both SSDs are already available as internal components though that was another sticking point for a time.

Another requirement was a separate network fabric to allow the CEPH cluster to work unimpeded by other traffic. While the original research with the ACER devices used a single network for all purposes, it is clear that allowing the CEPH cluster to work on its own private network fabric provides an important performance boost. This prevents workload traffic (public and private) from impeding the storage network in any way, and vice versa. Should a node drop out or be added, significant network traffic will be generated by CEPH as it replicates and relocates blocks of data. In the case of workloads that implement distributed database systems, each node will talk to the others over the normal network for distributing query tasks and results without interfering with raw storage tasks. An ordinary (and importantly simple) D-link switch suffices for that local isolated network fabric.

Loading up ProxMox on each node is simple enough. Connect it to a suitable network, monitor, mouse, keyboard, and boot it from an ISO on a USB drive. At first I did all of this from the rear but eventually used one of the convenient front panel USB ports for the ISO image.

Upon booting each device in turn, inspect the hardware configuration to make sure everything is reporting properly, then boot the image and install ProxMox. Each device takes only a few minutes to configure.

Pro tip: Name each device in a way that is related to where it lives on the network. For example, prox24 lives on 10.10.0.24. Mounting the devices in a way that is physically related to their name and network address also helps. Having laid hands on a lot of gear in data centers I’m familiar with how annoying it can be to have to find devices when they could literally be anywhere… don’t do that! As much as possible, make your physical layout coherent with your logical layout. In this case, that first device in the rack is the first device in the cluster (prox24). The one after that will be prox25 and so on…

Installing the cluster in the rack:

Start by setting up power. The power distribution switch goes in first and then directly above that will go the power supply drawer. I checked carefully to make sure that the mains power cables reach over the edge of the drawer and back to the switch when the drawer is closed. It’s almost never that way in practice, but knowing that it can be was an important step.

Adjust the rails on the drawer to match the rack depth…

Mount the drawer and drop in a power supply to check cable lengths…

When all power supplies are in the drawer it’s a tight fit that requires them to be staggered. I thought about putting them on their sides in a 2U configuration but this works well enough for now. That might change if I build a power matrix (described briefly above) in which case I’ll probably make 3D printed blocks for each supply to rest in. These would bolt through the holes in the bottom of the drawer and would contain the bridging electronics to make the power redundant and battery capable.

With the power supply components installed the next step was to install the cluster immediately above the power drawer. In order to make the remaining steps easier, I rested the cluster on the edge of the power drawer and connected the various power and network cables before mounting the cluster itself into the rack. This way I could avoid a lot of extra trips reaching through the back of the rack… both annoying and prone to other hazards.

Power first…

Then primary network…

Then the storage network…

With all of the cabling in place then the cluster could be bolted into the rack…

Finally, the storage network switch can be mounted directly above the cluster. Note that the network cables for the cluster are easily available in the gap between the switches. Note also that the switch ports for each node are selected to be consistent between the two switches. This way the cabling and indicators associated with each node in the cluster make sense.

All of the nodes are alive and all of the lights are green… a good start!

The cluster has since taken over all of the tasks previously done by the other servers and the old Synology devices. This includes resilient storage, a collection of network and time services, small scale web hosting, and especially research projects in the lab. Standing up the cluster (joining each node to it) only took a few minutes. Configuring CEPH to use the private network took only a little bit of research and tweaking of configuration files. Getting that right took just a little bit longer as might be expected for the first time through a process like that.
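For anyone repeating this, the essential steps look something like the sketch below. The cluster name, addresses, and subnets here are examples rather than my actual configuration; pvecm joins each node to the ProxMox cluster, and the public/cluster network split lives in /etc/pve/ceph.conf:

# On the first node, create the cluster; then, on each additional node, join it:
pvecm create labcluster
pvecm add 10.10.0.24

# In /etc/pve/ceph.conf, keep client traffic and replication traffic separated:
[global]
    public_network  = 10.10.0.0/24   # primary network (clients, VMs, ProxMox)
    cluster_network = 10.10.1.0/24   # private storage-only network for CEPH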

Right now the cluster is busy mining all of the order 32 maximal length taps for some new LFSR based encryption primitives I’m working on… and after a short time I hope to also deploy some SDR tasks to restart my continuous all-band HF WSPR station. (Some antenna work required for that.)

Finally: The cluster works as expected and is a very successful project! It’s been up and running for more than a year now having suffered through power and network disruptions of several types without skipping a beat. I look forward to making good use of this cluster, extending it, and building similar clusters for production work loads in other settings. It’s a winning solution worth expanding!

Feb 13 2025
 

I observed at one of my haunts that after a series of forced platform and tooling migrations with unreasonable deadlines the dev teams found themselves buried in tech debt… for example, the need to migrate from labrats to gitlab build pipelines and so forth.

Personally, I don’t run things this way… so it’s extremely painful to watch and creates a toxic environment.

While working with the team to try and wrangle the situation, an image occurred to me of the team fighting a desperate battle against tech debt (and not necessarily winning). I described what flashed in my mind to Gemini and it made a pretty good rendition of it. If nothing else, generative AI is a good way to rapidly express ideas…

Jan 21 2015
 

During an emergency, communication and coordination become both more vital and more difficult. In addition to the chaos of the event itself, many of the communication mechanisms that we normally depend on are likely to be degraded or unavailable.

The breakdown of critical infrastructure during an emergency has the potential to create large numbers of isolated groups. This fragmentation requires a bottom-up approach to coordination rather than the top-down approach typical of most current emergency management planning. Instead of developing and disseminating a common operational picture through a central control point, operational awareness must instead emerge through the collaboration of the various groups that reside beyond the reach of working infrastructure. This is the “last klick” problem.

For a while now my friends and I have been discussing these issues and brainstorming solutions. What we’ve come up with is the MCR (Modular Communications Relay): a communications and coordination toolkit that keeps itself up to date and ready to bridge the gaps that exist in that last klick.

Using an open-source model and readily available components we’re pretty sure we can build a package that solves a lot of critical problems in an affordable, sustainable way. We’re currently seeking funding to push the project forward more quickly. In the meantime we’ll be prototyping bits and pieces in the lab, war-gaming use cases, and testing concepts.

Here is a white-paper on MCR and the “last klick” problem: TheLastKlick.pdf

Jan 08 2014
 

Certainly the climate is involved, but events like this polar vortex do happen from time to time anyway, so it’s a stretch to assign a causal relationship to this one event.

Global warming doesn’t mean it’s going to be “hot” all the time. It means there is more energy in the atmosphere and so all weather patterns will tend to be more “excited” and weather events will tend to be more violent. It also means that wind, ocean currents, and precipitation patterns may radically shift into new patterns that are significantly different from what we are used to seeing.

All of these effects are systemic in nature. They have many parts that are constantly interacting with each other in ways that are subtle and complex.

In contrast, people are used to thinking about things with a reductionist philosophy — breaking things down into smaller pieces with the idea that if we can explain all of the small pieces we have explained the larger thing they belong to. We also, generally, like to find some kind of handle among those pieces that we can use to represent the whole thing — kind of like an on-off switch that boils it all down to a single event or concept.

Large chaotic systems do not lend themselves to this kind of thinking because the models break down when one piece is separated from another. Instead, the relationships and interactions are important and must be analyzed in the context of the whole system. This kind of thinking is so far outside the mainstream that even describing it is difficult.

The mismatch between reductionist and systemic thinking, and the reality that most people are used to thinking in a reductionist way makes it very difficult to communicate effectively about large scale systems like earth’s climate. It also makes it very easy for people to draw erroneous conclusions by taking events out of context. For example: “It’s really cold today so ‘global warming’ must be a hoax!”; or “It’s really hot today so ‘global warming’ must be real!”

Some people like to use those kinds of errors to their political advantage. They will pick an event out of context that serves their political agenda and then promote it as “the smoking gun” that proves their point. You can usually spot them doing this when they also tie their rhetoric to fear or hatred since those emotions tend to turn off people’s brains and get them either nodding in agreement or shaking their heads in anger without any deeper thought.

The realities of climate change are large scale and systemic. Very few discrete events can be accurately assigned to it. The way to think about climate change is to look at the large scale features of events overall. As for this polar vortex in particular, the correct climate questions are:

  • Have these events (plural not singular) become more or less frequent or more or less violent?
  • How does the character of this event differ from previous similar events and how do those differences relate to other climate factors?
  • What can we predict from this analysis?
Aug 26 2013
 

One of the problems with machine learning in an uncontrolled environment is lies. Bad data, noise, and intentional or unintentional misinformation complicate learning. In an uncontrolled environment any intelligence (synthetic or otherwise) is faced with the extra task of separating truth from fiction.

Take GBUdb, for example. Message Sniffer’s GBUdb engine learns about IP behaviors by watching SNF’s scan results. Generally if a message scan matches a spam or malware rule then the IP that delivered the message gets a bad mark. If the scanner does not find spam or malware then the IP that sent the message is given the benefit of the doubt and gets a good mark.

In a perfect world this simple algorithm generates reliable statistics about what we can expect to see from any given IP address. As a result we can use these statistics to help Message Sniffer perform better. If GBUdb can predict spam and malware from an IP with high confidence then we can safely stop looking inside the message and tag it as bad.

Similarly if GBUdb can predict that an IP address only sends us good messages then we can let the message through. Even better than that — if the message matches a new spam or malware rule then most likely we’ve made a mistake. In that case we can turn off the troublesome rule, let the message through, and raise a flag so bigger brains can take a look and fix the error.

Right?

Not always!

Message Sniffer’s Auto-Panic feature does a fantastic job of helping us catch problems before they can cause trouble, but Auto-Panic can also be tricked into letting more spam through the filters.

When a new pre-tested spam campaign is launched on a new bot-net there is some period of time where completely unknown IP addresses are sending messages that are guaranteed (pre-tested) not to match any recognizable patterns. All of these IPs end up gathering good marks for sending “apparently” clean messages… and since they are churning out messages as fast as they can they gain a good reputation quickly.

Back at the lab the SortMonsters and RuleBots are hard at work analyzing samples and creating rules to recognize the new campaign. This takes a little bit of time and during that time GBUdb can’t help but become convinced that some of these IPs are good sources. The statistics prove it, after all.

When the new pattern rules get out to the edges the Auto-Panic feature begins to work against us. When the brand new pattern rules find spam or malware coming from one of these new IPs it looks like a mistake. So, Auto-Panic fires and turns off the new rules!

For a time the gates are held wide open. As new bots come online they get extra time to sneak their messages through while the new rules are suppressed by Auto-Panic. Not only that but all of the new IPs quickly gain a reputation for sending good messages so that they too can trigger the Auto-Panic feature.

In order to solve this problem we’ve introduced a new behavior into the learning engine. We’ve made it skeptical of new, clean IPs. We call it White-Guard.

White-Guard singles out IPs that are new to GBUdb and possibly pretending to be good message sources. Instead of taking the new statistics at face value the system decides immediately not to trust them and not to distrust them either. The good and bad counts are artificially set to the same moderately high value.

It’s like a stranger arriving in a small town. The town folk won’t treat the stranger badly, but they also won’t trust them either. They withhold judgement for a while to see what the stranger does. Whatever opinion is ultimately formed about the stranger they are going to have to earn it.

In GBUdb, the White-Guard behavior sets up a neutral bias that must be overcome by new data before any actions will be triggered by those statistics. Eventually the IP will earn a good or bad reputation but in the short term any new “apparently” clean IPs will be unable to trigger  Auto-Panic.
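Conceptually, the neutral bias works something like the sketch below. This is a simplified illustration of the idea rather than the actual GBUdb code, and the type names and seed value are made up:

package gbudb

// ipStats is a simplified stand-in for a GBUdb IP record.
type ipStats struct {
	good, bad int // counts of apparently clean vs. spam/malware messages seen
}

// whiteGuard seeds a brand-new IP with equal, moderately high good and bad
// counts so that early "apparently clean" traffic cannot swing it to a
// trusted reputation (and trigger Auto-Panic) before real evidence accumulates.
func whiteGuard() *ipStats {
	const seed = 20 // illustrative value only
	return &ipStats{good: seed, bad: seed}
}

// confidence runs from -1.0 (trusted) to +1.0 (distrusted); a freshly
// white-guarded IP sits at 0 until new marks overcome the neutral bias.
func (s *ipStats) confidence() float64 {
	return float64(s.bad-s.good) / float64(s.bad+s.good)
}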

With Auto-Panic temporarily out of reach for these sources new pattern rules can take effect more quickly to block the new campaigns. This earns most of the new bot-net IPs the bad reputations they deserve and helps to increase early capture rates.

Since we’ve implemented this new learning behavior we have seen a significant increase in the effectiveness of the GBUdb system as well as an improvement in the accuracy of our rule conflict instrumentation and sampling rates. All of these outcomes were predicted when modeling the dynamics of this new behavior.

It is going to take a little while before we get the parameters of this new feature dialed in for peak performance, but early indications are very good and it’s clear we will be able to apply the lessons from this experiment to other learning scenarios in the future.

 

Jun 07 2013
 

The new blackhatzes on the scene:

In the past few weeks we’ve seen a lot of heavy new spam coming around, and most of it is pre-tested against existing filters. This has caused everybody to leak more spam than usual. Message Sniffer is leaking too because the volume and variability are so much higher than usual. That said, we are a bit better than most at stopping some of the new stuff.

The good thing about SNF is that instead of waiting to detect repeating patterns or building up statistics on sender behaviors our system concentrates on finding ways to capture new spam before it is ever released by reverse engineering the content of the messages we do see.

Quite often this means we’ve got rules that predict new spam or malware hours or even days before they get into the wild. Some pre-tested spam will always get through though because the blackhatzes test against us too, and not all systems can defend against that by using a delay technique like gray-listing or “gauntlet.”

What about the little guys?

This can be particularly hard on smaller systems that don’t process a lot of messages and perhaps don’t have the resources to spend on filtering systems with lots of parts.

I was recently asked: “what can I do to improve SNF performance in light of all the new spam?” This customer has a smaller system in that it processes < 10000 msg / day.

One of the challenges with systems like this is that if a spammer sends some new pre-tested spam through an old bot, GBUdb might have forgotten about the IP by the time the new message comes through. This is because GBUdb will “condense” its database once per day by default… so, if an IP is only seen once in a day (as it might be on a system like this) then by the next day it is forgotten.

Tweaking GBUdb:

The default settings for GBUdb were designed to work well on most systems and to stay out of the way on all of the others. The good news is that these settings were also designed to be tweaked.

On smaller systems we recommend using a longer time trigger for GBUdb.

Instead of the default setting which tells SNF to compress GBUdb once per day:

<time-trigger on-off='on' seconds='86400'/>

You can adjust it to compress GBUdb once every 4 days:

<time-trigger on-off='on' seconds='345600'/>

That will generally increase the number of messages that are captured based on IP reputation by improving GBUdb’s memory.

It’s generally safe to experiment with these settings to extend the time further… although that may have diminishing returns because IPs are usually blocked by blacklists after a while anyway.

Even so, it’s a good technique because some of these IPs may not get onto blacklists that you are using – and still more of them might come from ISPs that will never get onto blacklists. GBUdb builds a local impression of IP reputations as it learns what your system is used to seeing. If all you get is spam from some ISP then that ISP will be blacklisted for you even if other systems get good messages from there. If those other systems also use GBUdb then their IP reputations would be different so the ISP would not be blocked for them.

If you want to be adventurous:

There is another way to compress GBUdb data that is not dependent on time, but rather on the amount of memory you want to allocate to it. By default the size-trigger is set to about 150 megabytes. This setting is really just a safety. But on today’s systems this really isn’t much memory so you could turn off the time trigger if you wish and then just let GBUdb remember what it can in 150 MBytes. If you go this route then GBUdb will automatically keep track of all the IPs that it sees frequently and will forget about those that come and go. On systems that have the memory to spare I really like this method the most.

You can find complete documentation about these GBUdb settings on the ARM site.

 

Apr 09 2013
 

Yet another family meeting:
convoluted, confused, and intertwined with
friends not usually seen, but heard in hazy,
non-descript one sided conversations to which
you’re not usually privy.

I phased in and out of this mysterious world,
painted an important cordial greeting upon my
face and drifted with the din of a multitude
of cute little cherubs, their brethren and
sisterhood hooting the crisp childhood greetings
of simpler times I only envision now in my dreams.

Drifting in and out, to and fro, on waves of mystical
chaos: warm in the glow that is family even if it is
somehow distant and even unfamiliar to my typically
ordered and precise state of mind.

Strangers, now not strange, flow into my personal
universe as if they were ghosts appearing in the dark
grey corridors of some tall and mystical hall to present
tidings of terror, or fear, or joy, or bliss; and we
engage in mindless conversation to comfort us in our
naked vulnerability.

Then as our strangeness fades into a comfortable enveloping
mist we become our own small army against the unknown
and begin to speak of thoughts, beliefs, and dreams…
the kinds of words usually reserved for only the closest
of kin and those you see every day; but now is an open
opportunity to collect a new ally in a potentially
dangerous fold, that of life in extended family where
the dragon in the dark is every aging skeleton you hide
in the closet of your mind – with you now locked in
close proximity to excited peers all curious to see and
know, and all armed with the keys of ignorance and open
questions.

“Keep your wits” you think when you are awake, but the
soothing chaos seems friendlier and warmer as time wears
on and you find yourself lulled to sleep, somehow comforted
by the incomprehensible din.

Away, across the room and a sea of jumbled souls all
embroiled in senseless conversations you see your anchor.
That one familiar face that you arrived with. That one
who dragged you to this forsaken alien world now more
familiar with each moment, and you realize the reason for
your peace isn’t a foolish sleepy tonic of calming chaos,
like the warm darkness of shock obliged to an animal once
caught in the jaws of its predator awaiting the final
passage from vital form to fodder.

This pleasant face, and its glow, this love, this other
soul to whom you are inexorably linked. This one has brought
you here again, and here is not so unfamiliar as it is an
extension of whatever was, what is, and what will always
be: family.

So roast the beast and sing the songs and contemplate the
murmur of countless hours in this company. It is a gift,
for the only true desert and dangerous ground for we
mortal beings of flesh and mind and soul and gifts of
spirit, the only true place of perishing in untempered,
unbearable rages of tempest and furies, the most horrible
wasteland which could cease our breath and silence our
voices in the loudest agonizing screams of pain and
terror is not here: that place is empty and alone, and
now, if you are here with me, you know there is nothing
so fearful for you, for you are not alone.

– Original (c) 1999 Pete McNeil

Apr 09 2013
 

On terrors and trials and troubles we tumble – so easily lost in this world’s wild jumble of chaos and rumors and strife and the hundreds of pointless distractions that cost us our marbles, and yet there’s a way if you manage to find it to keep from the fray all your virtues and kindness – a way to find joy, even bliss and good morrows. Your own private stock of fond memories to follow.

This path to good fortune is not for the timid and those who are on it could tell you some tales. Amid crisis and horror, between tears and sorrow, there’s monsters and fears and dark nights on each side of this quaint, narrow road full of light and bright moments – it’s peace and warm comfort in stark brilliant contrast to all of the dark scary places it goes past.

‘Tis love that I speak of and not simply friendship, but kinship – the kind that you find when you tarry along on your first timid step on this path that can be so uplifting or so very bad… but then once you have found them, this singular spirit, that follows you on and pretends not to fear it, you find you’re together and somehow the darkness is farther away and not nearly as heartless.

Your steps intertwine, there is dancing and wine and good words and good song and good cheer goes along and you find that no matter how hard the wind blows and no matter how scary the outside world grows, and no matter how shaky your next step may be, that your partner can help you, and does, day to day, in their subtle sweet magical spiritual ways.

You’re both stronger, and braver, and more fleet of foot and the sharp narrow path that upon you first took becomes broader and wider with each every day – soon as broad as each moment you have in your way and with each tender kiss and each loving caress you can light up the darkness – force evil’s regress.

Lonely souls fear to tread here and well so they should for this place is not for them – it does them no good and the road doesn’t widen and so they fall off. It is sad, but it happens more often than not. And it also is true for more than one in two that do venture this road that their love is not true and they find they’re apart in this harsh frightening place and they find that they can’t stand the look on their face and they stumble and cry as the frightening beasts beat them, then scurry away as their partners retreat, and they lose all the joy and tell sad sorry stories that frighten young children and prosper young lawyers.

So hold fast your other! You dare not let go. There is truth to this story. It’s not just for show! I’ve been down this long path more than once don’t you see and found many fierce terrible things in my spree. These beasties I speak of were you and were me as we fell off the path and made bitter decrees to get justice from all those around: our just share! when instead we were missing the love that was there.

So hold tight to your lover, make strong your belief, and find comfort in each other’s arms, and sew peace, and you’ll find after all that true love does survive all the slings and the arrows that life can provide, and in fact it repels them. It really is true… just remember this magic will take both of you.

– Original (c) 1998 Pete McNeil