18 AI

AI is rapidly taking over the world of venture finance, with more than a quarter of VC money now flowing into the space. (Smith 2023)

Smith

AI coming

A whole lot of research is being done on the productivity effects of generative AI tools, and it all seems to point to the same conclusion: generative AI gives a much bigger boost to low performers than to high performers.

No study so far has shown that more talented people are able to use generative AI more effectively than less talented people. All of the evidence points to generative AI as an equalizer.

It’s not hard to see why this might be the case. Whereas previous forms of information technology complemented human cognition, generative AI tends to substitute for human cognition.

Traditional IT acted like a shovel — something that complemented people’s natural abilities — while generative AI acts more like a steam shovel. A steam shovel handles the muscle-power for you; GPT-4 handles the detail-oriented thinking for you. Technologies that substitute for natural ability tend to make natural ability less scarce, and therefore less valuable.

This doesn’t mean generative AI will decrease inequality overall. The computation-intensive nature of these tools means that physical capital — access to large amounts of cheap GPUs or other key hardware — might make a comeback as a source of wealth. But by boosting the performance of the least skilled on cognitive tasks, generative AI looks like it could level the human-capital playing field.

In order to program computers the traditional way, or even to apply lots of kinds of software, you had to have a mind that could think like a computer. But generative AI is specifically set up to interface with people who don’t think like machines.



Abundant energy complements average people’s skills.

Energy that’s both cheap and widely portable will enable all sorts of economic activity that average people will be easily able to master: battery-powered appliances and industrial tools, robots that can be ordered around with generative AI, cheap chemical manufacturing and earth moving, fast, efficient vehicles with long ranges, 3D printers, and so on. The power of every construction worker and factory worker and food delivery worker and nurse will be magnified by the new energy abundance.

![Energy use per capita, 1800-2010](/fig/Energy_use_per_capita_1800-2010.png)

Cheaper, more portable energy seems like it’ll help put us back on a technology curve more like the one we were on before the 70s.

The Revenge of the Normies thesis is, of course, an exercise in optimism. As flattering as the age of human capital was for my nerdy tribe, such a small slice of society shouldn’t be the only ones who get to thrive. We’ve had a four-decade-long celebration and veneration of talent and excellence in America; we could use an equally long period where the vast middle class and working class are the people who reap the most rewards.

Smith (2023) Is it time for the Revenge of the Normies?

Smith

AI Risks Thinking

AI risk thinkers were always able to come up with lots of scary sci-fi scenarios about how generative AI could cause a global calamity. Those scenarios weren’t obviously impossible, so they’re clearly worth worrying about.

But when it came to recommendations for policy to diminish the risk of these scenarios becoming reality, the AI risk people were always short on actionable ideas. You can study how AI models work and try to understand them — Anthropic and others are working on interpretability. But because the really scary doomsday scenarios all depend on AI that’s much more advanced than what exists today, knowledge about how AI works now won’t necessarily help us avert those possibilities. The people who are scared of AI doomsday risk tend to believe in a “fast takeoff” in which AI goes very very rapidly from the GPT-style chatbots we know today to something more like Skynet or the Matrix. It’s basically a singularity argument:

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
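To make the “runaway reaction” idea concrete, here is a minimal toy sketch (my own illustration, not anything from Good or Smith): if each self-improvement cycle adds capability in proportion to a superlinear power of current capability, progress explodes within a handful of cycles, whereas a sublinear exponent gives the familiar steady grind.

```python
# Toy model of I. J. Good's "intelligence explosion" (illustrative only).
# Assumption: each cycle's capability gain is capability ** exponent.
#   exponent > 1 -> runaway growth (the "fast takeoff" scenario)
#   exponent < 1 -> diminishing returns, steady incremental progress
def simulate(exponent: float, cycles: int = 8, capability: float = 1.0) -> list[float]:
    history = [capability]
    for _ in range(cycles):
        capability += capability ** exponent
        history.append(capability)
    return history

if __name__ == "__main__":
    print("sublinear  (0.5):", [round(c, 1) for c in simulate(0.5)])
    print("superlinear (2.0):", [f"{c:.3g}" for c in simulate(2.0)])
```

With an exponent of 2 the series races past astronomical values within eight cycles, which is the intuition behind “it could jump to truly scary at any minute”; with an exponent of 0.5 it plods along.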

“Fast takeoff” means that A) we basically can’t know much, if anything, about what AI will be like by the time it becomes truly scary, and B) it could jump to “truly scary” level at any minute. Thus the only action that most of the prominent AI risk people have been able to recommend is to “shut it all down” — to simply not make better AI.

A permanent halt to AI development simply isn’t something AI researchers, engineers, entrepreneurs, or policymakers are prepared to do.

I see this as another case of a modern intellectual movement that is far better at identifying problems than it is at suggesting solutions. My prediction is that basically all of these movements will attract a lot of initial attention, but then gradually be ignored over time. The AI scenarios that EA folks suggest certainly are scary. But until EA comes up with some solution other than “shut it all down”, the people developing AI are simply going to pray for the serenity to accept the things they cannot change.

Smith (2023) At least five things for your Thanksgiving weekend

18.1 AI’s Environmental Impact

Naughton

AI requires staggering amounts of computing power. And since computers require electricity, and the necessary GPUs (graphics processing units) run very hot (and therefore need cooling), the technology consumes electricity at a colossal rate. Which, in turn, means CO2 emissions on a large scale – about which the industry is extraordinarily coy, while simultaneously boasting about using offsets and other wheezes to mime carbon neutrality.

The implication is stark: the realisation of the industry’s dream of “AI everywhere” (as Google’s boss once put it) would bring about a world dependent on a technology that is not only flaky but also has a formidable – and growing – environmental footprint.

A study in 2019, for example, estimated the carbon footprint of training a single early large language model (LLM) such as GPT-2 at about 300,000kg of CO2 emissions – the equivalent of 125 round-trip flights between New York and Beijing. Since then, models have become exponentially bigger and their training footprints will therefore be proportionately larger.
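As a quick sanity check on that equivalence (my own back-of-the-envelope arithmetic, not the study’s): 300,000 kg spread across 125 round trips implies roughly 2,400 kg of CO2 per New York–Beijing round trip, a plausible figure for a single long-haul economy passenger.

```python
# Back-of-the-envelope check of the flight equivalence quoted above.
# The totals are the article's; the per-flight breakdown is just derived here.
training_emissions_kg = 300_000   # estimated footprint of training an early LLM
equivalent_round_trips = 125      # New York-Beijing round trips quoted as equivalent

kg_per_round_trip = training_emissions_kg / equivalent_round_trips
print(f"Implied emissions per round trip: {kg_per_round_trip:,.0f} kg CO2")
# -> roughly 2,400 kg CO2 per passenger round trip
```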

But training is only one phase in the life cycle of generative AI. In a sense, you could regard those emissions as a one-time environmental cost. What happens, though, when the AI goes into service, enabling millions or perhaps billions of users to interact with it? In industry parlance, this is the “inference” phase – the moment when you ask Stable Diffusion to “create an image of Rishi Sunak fawning on Elon Musk while Musk is tweeting poop emojis on his phone”. That request immediately triggers a burst of computing in some distant server farm. What’s the carbon footprint of that? And of millions of such interactions every minute – which is what a world of ubiquitous AI will generate?

The first systematic attempt at estimating the footprint of the inference phase was published last month and goes some way to answering that question. The researchers compared the ongoing inference cost of various categories of machine-learning systems (88 in all), covering task-specific (ie fine-tuned models that carry out a single task) and general-purpose models (ie those – such as ChatGPT, Claude, Llama etc – trained for multiple tasks).

The findings are illuminating. Generative tasks (text generation, summarising, image generation and captioning) are predictably more energy- and carbon-intensive compared with discriminative tasks. Tasks involving images emit more carbon than ones involving text alone. Surprisingly (at least to this columnist), training AI models remains much, much more carbon-intensive than using them for inference. The researchers tried to estimate how many inferences would be needed before their carbon cost equalled the environmental impact of training them. In the case of one of the larger models, it would take 204.5m inference interactions, at which point the carbon footprint of the AI would be doubled.
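The break-even logic is simple enough to sketch (the 204.5m figure is the researchers’; the per-query number below is a hypothetical placeholder chosen only to show the arithmetic): divide the one-off training footprint by the footprint of a single inference to get the number of requests after which serving the model has emitted as much as training it did.

```python
# Sketch of the training-vs-inference break-even calculation described above.
# The structure mirrors the study's reasoning; the numbers are hypothetical
# placeholders, not the researchers' measurements.
def break_even_inferences(training_kg_co2: float, kg_co2_per_inference: float) -> float:
    """Requests after which cumulative inference emissions equal the
    one-off training emissions, i.e. the model's footprint has doubled."""
    return training_kg_co2 / kg_co2_per_inference

# Example with made-up values: a 300,000 kg training run and ~1.5 g CO2
# per request break even after about 200 million requests.
print(f"{break_even_inferences(300_000, 0.0015):,.0f} inferences to double the footprint")
```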

The best hope for the planet would be for generative AI to topple down the slippery slope into Gartner’s “trough of disillusionment”, enabling the rest of us to get on with life.

Naughton (2023) Why AI is a disaster for the climate