When we think about Artificial Intelligence, we have a large array of potential models to choose from. We can imagine a rule-based engine, a neural network, or some other, more exotic method such as a Generative Adversarial Network that classifies an input and then executes an action based upon its classification. Rarely, though, do we ever take a step back and look at the system as a whole and how that system maintains its Viability (you remember Viability from my Eight V's blog post). That is the topic of this blog post: system Viability, and we're going to look at it through the lens of the Viable Systems Model.
The Viable Systems Model was proposed and refined by Stafford Beer and others (John von Neumann, Norbert Wiener, W. Ross Ashby, Alan Turing, and many more) between the 1930s and 1970s. What these theoreticians were attempting to model were the universal mechanisms that let a system, any system, be self-governing and self-perpetuating – Viable. The field of study Beer and his associates were operating in was called Cybernetics, the science of communication and automated control. You could think of Cybernetics as a contemporary and companion to Systems Science, Control Theory, and all of the other disciplines that seek to describe and understand how things (gene regulatory networks, organisms, groups, corporations, political bodies, societies, etc.) adapt and change over time. Beer was primarily concerned with corporations and economic governance, but his models and theory are general and universal enough to be applied within any discipline, including art and sports training.
This all sounds intuitive enough. Almost fifty years ago a group of theoreticians came up with a model of governance and feedback. How does that impact us now? Heck, in the early 1970s computers were the size of large rooms with special elevated floors, and programs were on punch cards or paper tape. They didn't have cell phones, or even fuzzy logic rice cookers. Polyester was cool, hairstyles were generally regrettable, and The Brady Bunch was in its first run.
Unlike Mike Brady's perm and penchant for flammable petroleum-based synthetic fabrics, this theoretical system of governance has found a place within machine learning and is the cornerstone of the "Eight V's of Big Data and Artificial Intelligence" in the area of Viability. Now, our task is to briefly examine and explain the Viable Systems Model (VSM) within the context of Artificial Intelligence and Big Data.
The VSM is a five-system (or five-layer) model that governs or controls different aspects of an entity's existence through time: Systems 1–3 respond to stimuli that influence activities in the "here and now"; System 4 deals with reconfigurations for future or predictive elements that will influence the entity's long-term viability; and System 5 seeks to balance, or buffer, Systems 1–3 against System 4.
System 1 – The activity itself that defines the system. A living system, for example, metabolizes and respires (burns food, produces waste). Lions do this, right?
System 2 – The communication channels within the living body: a nervous system or signalling system of some sort. Lions have nervous systems.
System 3 – The monitoring and control system for System 1. In a living system, this regulates simple activities like metabolic rate and respiration rate; in humans and other mammals, it can be thought of as the autonomic (involuntary) nervous system. Depending upon the complexity of the organism, System 3 can also encompass circadian rhythms and other innate behaviors. Lions sleep, are awake, hunt, mate, and exhibit other characteristic behaviors.
System 4 – The first set of outward-looking systems that take in input from the external world, or milieu. These can be thought of as external sensors: touch, sight, hearing, and so forth. Coupled with these sensors are rules that allow for self-preserving behaviors. For example, System 2 communicates to System 1 that the organism is running low on energy (is hungry); System 4 identifies the communication as hunger, identifies a food source, and the organism begins to eat. Lions do this very frequently; we can think of this as typical individual lion behavior.
System 5 – The component of the system that governs or balances System 4 activities against Systems 1–3. For example, if we look at a pride of lions, we see System 5 activities in feeding priorities: young weaned cubs are higher up the feeding ladder (eating with their mothers, who did the hunting) than older cubs, who eat last. This assures that the next generation of cubs can nutritionally make it to adulthood while maintaining the social order of the pride. In this case, System 5 is the set of pride dynamics that governs a group of lions and modulates their behavior. On a societal level, we can equate System 5 with Rousseau's Social Contract, https://en.wikipedia.org/wiki/The_Social_Contract.
Another way to think of System 5 activities is as those activities that allow the organism or entity to co-exist with other entities like it and to interact within its milieu.
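To make the five systems concrete, here is a minimal sketch of the lion example as a control loop in Python. Everything here is illustrative: the class, method names, energy numbers, and thresholds are my own invention, not Beer's formal notation, and a real agent would of course be far richer than this toy.

```python
# Hypothetical sketch: the five VSM systems as one agent step.
# All names and numbers are illustrative, not from Beer's formal model.

class ViableSystem:
    def __init__(self):
        self.energy = 10  # internal state that System 1 consumes

    def system1_operate(self):
        """System 1: the core activity itself (metabolize, act)."""
        self.energy -= 1

    def system2_signal(self):
        """System 2: internal communication -- report internal state."""
        return {"energy": self.energy}

    def system3_regulate(self, signals):
        """System 3: monitor and control System 1 (homeostasis)."""
        return "low_energy" if signals["energy"] < 5 else "ok"

    def system4_comprehend(self, status, environment):
        """System 4: outward-looking sensing; propose an action."""
        if status == "low_energy" and environment.get("food"):
            return "eat"
        return "rest"

    def system5_balance(self, proposed_action, social_rules):
        """System 5: balance the proposal against group/policy rules."""
        if proposed_action == "eat" and not social_rules.get("may_eat", True):
            return "wait"  # e.g., cubs eat before older pride members
        return proposed_action

    def step(self, environment, social_rules):
        self.system1_operate()
        signals = self.system2_signal()
        status = self.system3_regulate(signals)
        proposal = self.system4_comprehend(status, environment)
        action = self.system5_balance(proposal, social_rules)
        if action == "eat":
            self.energy += 5
        return action


lion = ViableSystem()
for _ in range(8):
    lion.step({"food": True}, {"may_eat": True})
```

The point of the sketch is the shape of the loop, not the numbers: the inner systems keep the "here and now" in balance, System 4 looks outward, and System 5 arbitrates between individual need and the rules of the group.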
All of this translates directly to artificial intelligence. If we look back at the concise definition proposed by Accenture that was put forth in an earlier post, artificial intelligence is the ability to "sense, comprehend, and act," and the VSM maps directly onto it: Systems 1–3 sense, System 4 comprehends, and System 5 balances the needs sensed by Systems 1–3 against the actions proposed by System 4. If all five of these systems are tuned and trained appropriately, then there exists a system that is viable over time and can change and adapt to its environment. This is ideally what we want in an artificial intelligence. It does us very little good to develop an Artificial Intelligence that only works at time point zero or use case zero; that's like being a lion who doesn't understand the concept of hunting or eating. If that is true, as a lion, your viability will be very short.
As we build AIs, we need to keep this abstract model in mind and think about the Viability, the continued Viability, of the products we are creating. There are very few universal truths; one of them is that change is difficult, even for AIs, and what the VSM does is give an AI a built-in mechanism to introduce self-change in response to the inputs it is receiving. Models need continual training to remain relevant. When viewed through the lens of the VSM, AIs become more than just automated decision points; they become entities that adapt over time to the changing landscape of their niche. This brings us back to the utility of accelerated computing: to make truly viable AIs, there needs to be continual training, and continual monitoring and modeling of both the external milieu and the internal response model. This level of self-monitoring requires accelerated computing; otherwise the monitoring activities overtake the AI's ability to respond – think of this as a modern-day "swap of death" situation. So, save your lions: use accelerated computing to enable your AIs to be truly Viable Systems.
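That "continual monitoring and modeling of the milieu" can be sketched very simply. Below is a hypothetical drift-monitoring loop in Python: `drift_score`, `viability_loop`, the window size, and the threshold are all placeholders of my own, not any particular library's API, and a production system would use a proper drift statistic rather than a shift in the mean. The shape, though, is the VSM shape: watch the incoming stream, and when the world has moved, retrain.

```python
# Hypothetical sketch of a "Viability loop": monitor incoming data
# and flag when the model's world has drifted enough to retrain.
# Names, window size, and threshold are illustrative assumptions.

import random
import statistics

def drift_score(baseline, recent):
    """Crude drift measure: shift in the mean of a monitored feature."""
    return abs(statistics.mean(recent) - statistics.mean(baseline))

def viability_loop(stream, baseline, threshold=0.5, window=50):
    """Watch a data stream; count how often retraining is triggered."""
    recent, retrain_events = [], 0
    for x in stream:
        recent.append(x)
        if len(recent) >= window:
            if drift_score(baseline, recent) > threshold:
                retrain_events += 1   # System 4/5: reconfigure for the future
                baseline = recent[:]  # adapt: recent data is the new normal
            recent = []
    return retrain_events

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(200)]
# A stream whose distribution shifts midway -- the "changing milieu"
stream = [random.gauss(0, 1) for _ in range(200)] + \
         [random.gauss(2, 1) for _ in range(200)]
events = viability_loop(stream, baseline)
```

Every pass through this loop costs compute on top of the model's actual job, which is exactly the argument for accelerated computing: if monitoring and retraining are too slow, they crowd out the AI's ability to respond at all.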