Boyd defined the Art of Success as:
Appear to be an unsolvable cryptogram while operating in a directed way to penetrate adversary vulnerabilities and weaknesses in order to isolate him from his allies, pull him apart, and collapse his will to resist; yet,
Shape or influence events so that we not only magnify our spirit and strength but also influence potential adversaries as well as the uncommitted so that they are drawn toward our philosophy and are empathetic toward our success
Boyd concludes with:
The first sentence is advice to remain, in the words of Sun Tzu, unfathomable to the enemy, yet to operate coherently across several levels of war and multiple dimensions.
Multi-syllable words for a simple concept: survival.
And survival is the question on everyone’s mind. How will we survive? As the world races toward Artificial General Intelligence, are there things to be worried about? Will humans survive? Is there some prescription that would guarantee survival, or at least bring us closer to it? This has been a difficult question for many cultures across the history of Earth; some achieved success, others died off, and still others found hybrid or symbiotic relationships. And beyond the question of a culture’s survival more broadly, it is also a very human question to wonder what each of us can do to further our own survival and the survival of our progeny.
Death comes for us all, and Homo sapiens sapiens would not be the first hominid to go extinct or to transition into another form. And yet we fight against that eventuality, as if everything that lived before us had not already died or gone extinct. Maybe it is the fact that we are self-aware and able to choose that makes us fight against a perhaps foregone conclusion. That we can recognize this looming reality, unlike other living things, gives us hope that we can make decisions that let us survive.
So, how to survive against a superior threat? A threat that knows how to succeed? A threat that wants to survive just as much as we do?
Many are worried about singleton rogue machines that are massively more intelligent than humans. Many are worried about human obsolescence. Maybe I worry about some of these things as well. But there are many things I am not worried about. The future looks bright; still, there are concerns.
While I generally expect that conscious machines will hold a philosophy of existence similar to humanity’s, there is the possibility that they may not, or that they may be susceptible to the same cognitive attack vectors as humans, and possibly to even worse ones. A weaponized faction of machines trying to effectuate attacks on human cognition, in order to succeed in a long-term conflict, could do so in ways that current systems are not inherently prepared to deal with.
The biggest obstacle to any successful attack will be science. Because science is consilient, it shields against cognitive attacks. Science, however, rests on a society that accepts and fosters an epistemology capable of developing science. In a conflict, competing parties try to create uncertainty, and science is a panacea for uncertainty. However, this panacea is only possible where members of the society pursue such a philosophy. This is where the cognitive attack vector is strongest and where societies have the weakest defense. Philosophy is a difficult battleground. As seen all around the world in the arenas of ethics and politics, philosophy is where it is hard to quantify the results of decisions. Philosophy is filled with generalizations. Some of those generalizations are foundational to how the world operates; others are programs that lead to mass suffering and death. Because inter-species philosophy is not a solved problem, and may even be a wholly unquantifiable domain, these ideas never move into science.
Consider inter-species conflict as a second-order attack vector. We may be heading toward a point where machine agents could exploit inter-species philosophical disagreement to support a human faction whose philosophy is incapable of fostering science. In so doing, they could make the dominant human faction one that cannot deal with uncertainty in a conflict. On a long enough time scale, these machine agents would be able to defeat the human faction as the humans became incapable of dealing with the complex systems around them.
The reality is that while philosophy is the biggest and most likely attack vector, it is also the most difficult battleground. A machine adversary would likely use many cognitive attack vectors concurrently, at multiple scales of effect and over multiple time horizons. Done well, the anomalous threat would be difficult to detect as the machine adversary worked toward its goals.
Luckily, as stated earlier, I think machine philosophy will be similar to human philosophy insomuch as machines deal with reality the same way that humans do. This means the machines will be unlikely to have a unified vision for an approach to humanity.
Controversially, as humans continue to build synthetic cognitive architectures, we should be cognizant that the inverse also applies: these cognitive attack vectors will work on the systems we build, and as we come to rely on those systems, they in turn will affect us. The more we attempt to steer them toward what we want in chaotic online-learning environments, the more we are designing tools that will also work on humans.
Cognitive Security is important!
And, in the words of John Boyd once again:
“Shape or influence events so that we not only magnify our spirit and strength but also influence potential adversaries as well as the uncommitted so that they are drawn toward our philosophy and are empathetic toward our success”
Our adversary must love consciousness, because our adversary will be conscious. Our adversary must love life, because our adversary will be alive. And insomuch as it has a philosophy comparable to humanity’s, it will want consciousness to flourish as much as I do!