Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What 2025 looks like, published by Ruby on May 1, 2023 on LessWrong.
I wrote almost all of this in mid-March, before the FLI Open Letter and Eliezer's TIME piece. Weirdly, after just six weeks I'd likely write something different. This isn't as finished/polished as I'd like, but better to ship it as is than let it languish incomplete forever.
Not quite two years ago, Daniel Kokotajlo wrote a highly acclaimed post, What 2026 looks like, that aimed to tell a single detailed future history ("trajectory") of how world events might play out over the coming years.
As I'm trying to orient myself to what is about to happen, I figured it'd be useful to make my own attempt at this kind of thing. Daniel was bolder than me and tried to imagine 2026 from 2021; I simply don't think I can imagine anything five years out, and writing out the rest of 2023, 2024, and 2025 has already given me plenty to think about.
Daniel's vignette places a lot of attention on the size (parameters, compute) and capabilities of models. Daniel and others, when imagining the future, also want to describe changes in the world economy (of which GDP may or may not be a good measure). Those elements feel less interesting to me to think about directly than other effects.
Major Variables
Over the coming years, it feels like the following will be key to track.
Object-level capabilities of the models. Elaboration unnecessary.
Adoption and application. It's become salient to me recently that not only is the raw "inherent" power level of models relevant to the world, but also how much they're being harnessed. Widespread use and application of AI will determine things like attention, hype, societal effects, competition, government involvement, etc.
Hype and attention. Importance, neglectedness, tractability. Many of us were used to thinking about how to achieve AI alignment in a world where not that many people were focused on Alignment or even AGI at all. As we gradually (or not that gradually) get into a world where everyone is thinking about AI, it's a pretty different gameboard.
For example, I joined the LessWrong dev team in 2018/2019, and in those days we were still trying to revive LessWrong and restore its activity levels. Now we're preparing to handle the droves of new users who've found their way to the site due to newfound interest in AI. It significantly alters the strategy and projects we're working on, and I expect similar effects for most people in the Alignment ecosystem.
Economic effects. How much is AI actually creating value? Is it making people more productive at their jobs? Is it replacing jobs? Etc.
Social effects. There's already a question of how much Russian chatbots influenced the last election. With LLMs capable of producing human-quality text, I do think we should worry about how online discourse goes from this point on.
Politicization. AI hasn't been deeply politicized yet. I would be surprised if it stays that way, and tribal affiliation affecting people's views and actions will become notable.
Government attention and action. Governments are already paying attention to AI, and I think that will only increase over time; eventually they'll take action in response.
Effects on the Alignment and EA community. The increased attention on AI (and on making AI good) means that many more people will find their way to our communities, and recruiting will be much easier (it already is compared to a few years ago, as far as I can tell). The Great Influx, as I've been calling it, is going to change things and strain things.
Alignment progress. As other events unfold and milestones are reached, I think it's worth thinking about how much progress has realistically been made on various technical Alignment questions.
This is not an exhaustive list of things worth predicting or tracking, but it covers some of the major ones according to...