Ready-to-mind

AI tools distract from AI risks

Think about Tim Urban's intelligence staircase concept. If we plot the intelligence of different beings, then from a cosmic perspective, the difference between humans and other apes is not that large. Yet with that small difference, we have built a civilisation far greater than anything the other apes ever could.

AI is seemingly unrestrained in its ability to climb this staircase.

AI agents today can perform better than most humans on mathematical and verbal reasoning tasks. They sit in the narrow band of this spectrum between the least-smart and smartest human, and they will likely soon surpass it.


So many magical AI tools are released every day.

The scope of what you can generate from a single prompt has expanded enormously - silly poems, simple explanations, good essays, computer programs, mathematical proofs, well-researched reports, entire websites, images, comics, songs and videos - all in the space of two years.

It is so tempting to stop and play. The night we realised comics could be generated from single prompts, I stayed up with my friend until 3am just experimenting with that.

We haven’t made a comic since, because now we use coding agents to build websites and run machine learning experiments. Yesterday I was tempted to explore Veo and make a short film.


But what's the point of experimenting with these tools? The capability of AI tools in a year will likely be so much better that all the handholding today is potentially a waste of time (since I could build the same thing next year with less effort) or a distraction (if my time is better spent preparing for AI futures in other ways).

The Bitter Lesson of AI research is that ā€œgeneral methods that leverage computationā€ are ultimately the most effective. This means that specially crafted AI approaches, such as studying linguistics to design grammars very carefully, eventually lose out to generic approaches that use lots of computation and data, such as simply letting a neural network learn to predict words from large amounts of language data by training with lots of compute.

Last year people talked about prompt engineering a lot, and in the GPT-2 era, it took even more skill to coax a next-token-prediction LLM into doing what you wanted. Today, you can make loads of typos and omit words to the point that a friend would not know what you mean, yet an AI will still respond perfectly to your query.

Today we discuss how designing the architecture of your project before letting coding agents fill out the details is a better approach than just ā€œvibe codingā€. But even this will likely change soon. All these techniques for coping with the limits of current models are specially crafted approaches. As long as the scaling laws hold, and we keep finding energy and compute, the general methods will keep winning, and these specific tools of the day and their capabilities will be a forgotten detail of history.

Stopping to play with the AI tools of today is like failing to heed the Bitter Lesson.

Of course, one reason to use these tools is that you need to do something today, and what you need to do is better done with these tools than without.

But if you are concerned about AI safety, stopping to adapt to and master the AI tools of today is a distraction. We are in the midst of an AI takeoff, and instead of focussing on the finger, we can focus on where it is pointing.

Where we know we need to be making more progress is in AI governance, control, alignment, policy, diplomacy, safety, and so on.

We don't know how to organise a society, a political system and an economy that include superintelligent AIs - we need people to think and write about this.

#ai #gentle-computing