AI will solve society's AI problems
We are worried that AI may irreparably displace human workers, greatly weakening most individuals.
We are worried about superintelligence and misaligned intelligences that may escape human control and cause us great harm.
Etc.
But what if the solution to all these AI problems was staring us right in the face?
Computers are tools for the mind.
Anticipating the rapidly changing near futures ahead of us is enormously complex, so what if we used computers to help us there too? What if we employed our best AI tools to help us solve and cope with these issues?
Then, each time AI capability improves, we can see it as an improvement in our ability to build a good world with AI, rather than as a step closer to doom.
Why could this idea fail?
- There is far more financial and strategic incentive to build and deploy AI for purposes other than AI safety
- AI capability is jagged - it could get good at things unrelated to its ability to do good AI safety research
- AI is misaligned and does bad AI safety research on purpose
- It is harder to use AI to make progress in policy, politics, society, law, and the economy than it is to use AI to drive technological and capability improvements.
- Safety is not cool. It is conceived of as "brakes" on a car - we need a concept of safety that isn't a party pooper. Think of safety as "power steering": safe AI is AI you can control better, to make it do what you want.
What do we need to do to make this idea work?
- Make it very easy for AI safety researchers to use the best models, e.g. via compute credit grants
- Build a strong culture where the first demonstrations of new models are always on things like this
- Push the narrative that unsafe AI = useless AI, so that companies lagging on safety benchmarks face financial consequences from worsening investor sentiment.
- The most socially positive people need to be pushed, challenged, incentivised, and convinced to become as AI-native as possible, so that their capability and impact are amplified.
What will it look like?
- A lot of technical research already uses AI to make AI better or safer
- More AI use for public engagement with research
- AI use for legal and policy thinking
- Using AI in futures thinking / scenario planning
- Using AI to build essential safety software very fast
Overall, it's not a panacea, but it seems like fertile ground for exploration, and we should pursue it anyway: it will make us more effective even if it doesn't completely solve the problem.