We all know that the Nazi jurist Carl Schmitt changed the course of history with his distinctions:
the political is the distinction between friend and enemy
the moral is the distinction between good and evil
the aesthetic is the distinction between beautiful and ugly
the economic is the distinction between useful and harmful
And we know about Claude Shannon, whose information theory says that information is surprise that emerges from noise.
And let’s not forget Douglas Adams, and his nugget from the philosophical Hitchhiker’s Guide to the Galaxy that the meaning of life, the universe, and everything is 42.
The question is how or whether these three experts can help us think about the emergence of AI. Do they show us that AI is bound to take over the world and then the universe, or do they tell us that maybe humans still have the inside track?
What about the political? Can AI outfight humans when it comes to defeating the enemy? Can it out-grift politicians at helping its friends? I’m inclined to think that it takes a human to develop the cunning and the misdirection needed to defeat an enemy, although you could certainly say, following Sun Tzu, that all that matters is to know yourself and know your enemy. But I am sure that AI could out-grift your average politician or NGO grantee six ways from Sunday.
What about the moral? I wonder about that, because I think that human moral and religious thinking is actually still in kindergarten. What is “good”? What is “evil”? Ask Zohran Mamdani and he’ll tell you one thing; ask the pope and he’ll tell you another. I’d say that the moral depends on the kind of society: it’s different for hunter-gatherers than for farmers, different for workers than for capitalists, and different again for venture capitalists and devoted activists.
A softer version of the moral distinction is what I would call the cultural distinction: the distinction between the way we do things and the ways we don't. Or maybe the cultural distinction is between ways that work and ways that don't. Either way, the distinction is probably about stability versus change. Stability works until it doesn't. But nobody wants change unless it works for them. I wonder how AI will handle culture. Will it be reckless about change, or hesitant?
The economic distinction is similar: useful versus harmful. But the economy is all about trying new stuff, and only after you try it can you determine whether it is useful or harmful. It's a question of judgement, and that circles back, ultimately, to good and evil. I wonder how we would program AI to make those kinds of judgements, and how we would program it with cultural and moral values. Of course, part of human action is the accumulation of knowledge prior to an actual decision. I'm sure that AI will be fabulous at that.
Then there is Claude Shannon’s information theory. If you want to see it at work, the closest thing I know is digital communications. It used to be that when a modem connected you heard a tone upon which the signal was imposed. But when the first all-digital modems appeared, you heard what sounded like random “noise,” upon which a much faster signal was imposed. What do you think? Could AI surprise us? It's a good question, because right now we use AI to gather and summarize knowledge. Could it reproduce the venture-capital model that finances a bunch of startups in the hope that one of them will surprise us with an idea that works? I have no idea.
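Shannon's idea that information is surprise actually has a precise form: the self-information of an event with probability p is -log2(p) bits, so rare events carry more information than expected ones. A minimal sketch (the function name is mine, not Shannon's):

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon self-information: the rarer the event, the more bits of surprise."""
    return -math.log2(p)

# A certain event carries no information; a rare one carries a lot.
print(surprisal_bits(1.0))       # 0.0 bits: no surprise at all
print(surprisal_bits(0.5))       # 1.0 bit: a fair coin flip
print(surprisal_bits(1 / 1024))  # 10.0 bits: a genuinely surprising event
```

This is why noise-like signals can carry so much: a stream that looks random is, in Shannon's terms, maximally surprising at every step.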
Let's not forget Douglas Adams and his joke about the meaning of life. Perhaps what he is saying, in making the meaning of life, the universe, and everything come out to 42, is that nobody has a clue, least of all Deep Thought. Did anyone ever ask him about that?