The Tightrope

With the emergence of increasingly intelligent and general AI systems, it has become popular to think of intelligence as a commodity. I think this has two interesting implications.

One: If intelligence is to be viewed as a commodity, the current state of the “intelligence market” is best described as undergoing a massive shortage. Increasing the supply of intelligence has some very positive implications for its potential consumers, which is all of us.

Two: If intelligence is to be viewed as a commodity, humans currently have a de-facto monopoly on its production. Increasing the supply of intelligence has some very negative implications for its current producers, which is all of us.

In this essay, I will argue that walking the tightrope of making AI as useful as possible without making humans fundamentally useless is an impossibly difficult task.

The Intelligence Shortage

In the Swiss equivalent of high school, you choose a focus area: a subject you study much more intensively than students who pick a different one. Probably due to a lack of decisiveness, I chose what's best described as "business, economics and law". For the most part it was very boring, but what stuck with me were the first few lessons in microeconomics: how supply and demand interact, and especially what's required for a market to work well. One of those conditions is that the participants in the market are well informed about it. An actor can only make a rational choice about what to buy at what cost, or what to sell at what price, if they understand the actual value of the product they are buying or selling.

In a complex world, this is not a fully attainable goal. Even with a relatively homogeneous good, like oil or grain, and plenty of time to analyze the market for buying and selling it, along with all the products that can be made from it, it's difficult to tell whether one is truly buying the best option or selling at the best possible price. Nor is it in everybody's interest for this ability to be common. Whenever I evaluate my choice of health insurance or internet service provider, I'm annoyed by how brazenly these companies leverage the fact that consumers have limited time to analyze their offerings and lack the market knowledge to do so effectively. Another well-known example is the common practice of agreeing to terms of use without reading them at all. I, and presumably almost every other consumer, simply lack the time, legal knowledge, and frankly the willpower to do so.

On an unconscious level, we face this problem in all our purchasing decisions, because in all of them, our limited and imperfect cognition can be leveraged to get us to buy something.

But this uninformed decision-making isn’t just an attribute of our consumer behavior. My choice of careers, my choice of weekend activity and even my choice of social action is full of guesswork. I don’t have the time or the desire to analyze all the possible hikes in my area and find the one that fits my preferences the most. I don’t have the psychological insights about myself and anyone else to really say what major to pick with confidence. And neither do I have the committee of PR-experts to check whether my jokes are landing or whether that guy that I’m annoyed with is actually treating me badly.

The result is that we accept risks we don't understand and take actions whose consequences we never consciously evaluated.

I claim this state of society is well described as an intelligence shortage. Intelligence at scale, the ability to look for and process all of the information necessary to avoid making the mistakes mentioned above, is a luxury good. Only large corporations and the wealthiest of individuals can afford to buy it. The rest of us are limited to the intelligence we can produce ourselves.

Because much of this uncertainty, resulting from the scarcity of intelligence, is an accepted part of the human condition, I don't think many people appreciate the fundamental change that abundant intelligence could mean for everybody in every part of life. Many of the flaws of modern capitalism could disappear as markets get closer to being ideal along the "informed decision-making" axis. Everyone could have the ability to make smarter choices in all domains of life. We could optimize for a fulfilling, socially successful, environmentally friendly and fun existence much more effectively. And of course, since intelligence is what propelled us from caves to glass towers, its abundance would mean that we could do so among the stars.

To be a Hand Weaver in a World of Mechanized Looms, Forever.

Today, the word "Luddite" is generally used for someone who opposes technological advancements or their applications. The term originates from the Luddite movement, made up of hand weavers who were displaced by mechanized and later powered looms. They named themselves after an obscure figure, "Ned Ludd", who is said to have broken two knitting frames (early mechanical knitting devices) in a fit of rage over poor working conditions. His rage is understandable. The introduction of mechanized textile production forced many manual textile workers to accept pay cuts and much worse working conditions, including with regard to safety.

The mechanics of this story are pretty simple: the weavers offered something that became available at a much lower cost without involving them, so they had to find another way to provide value. The only alternative they found was far worse.

This should be a cautionary tale for everyone in the early 21st century. Intelligence, previously something that only we humans offered, is currently becoming available at much lower costs, without involving us.

However, this is an imperfect analogy. Rather than just substituting a narrow skill like weaving, AI is universally applicable. It would be bad enough if AI just replaced human knowledge workers. In an industrialized society, a majority of people are not employed to move things around physically. If all the mental work were replaced, we’d have to, again, fight over the few remaining jobs that involve physical work. But it’s worse than that because intelligence isn’t just book smarts. Intelligence encompasses steering the robot arm so it washes dishes without breaking them, threading wires through car frames, and driving trucks to their destinations. Even beyond that, it includes the ability to determine what a therapy patient needs to hear to overcome their insecurities or what one should say to make a friend laugh.

Intelligence, whether artificial or not, is so universally helpful to everyone, that producing it at industrial scale will make everyone useless to everyone else, in all domains. This stands in obvious contrast with some of the assumptions that modern societies are based on. Even someone who has nothing was, until now, able to offer their mind in exchange for value. Countries value their workforce, the powerful to some degree need the less powerful. If this ceases to be true, we enter a very dangerous incentive structure.

Flickering Idealism

Above, I describe two scenarios, a utopian one and a dystopian one. But they are not actually separate: the utopian scenario concerns itself purely with what's technologically possible, while ignoring the incentive structure that those technological possibilities create.

This essay is titled "The Tightrope" because I often get the impression that people think there is a narrow path to abundant intelligence that does not involve ousting humans from the domain of usefulness. In my view, this balancing act between the utopia and the dystopia is impossible, because the very thing that makes the utopia so good also means that no normal person will be able to afford it.

I'm not convinced that we will manage to solve this problem. Yes, UBI would be an approach in the short term. But why would it be a stable state? Capitalism has been relatively stable because, when markets work well, they actually benefit all participants. Thus, the incentives drive even the powerful to act within the system rather than to change it. If what people have to offer loses its value, and we have to support them with something like UBI, whether paid out in money or in "AI-use credits", the incentives no longer align with the system's stability. We would have to uphold an unprecedented level of generosity from the ones at the top, countering their incentives, forever. Incentives tend to win in the long run, because the light of idealism, no matter how bright it shines initially, tends to flicker.

Note that I wrote this essay assuming the premise that AI development is going to go well in terms of alignment. For the record: I do not think that this is guaranteed at all.
