I've got nothing to predict for 2026
But I have a few questions, and would love to hear yours!
I’ve read a saying that goes, “It is difficult to make predictions, especially about the future.”
Some attribute it to Danish tradition, others to baseball legend Yogi Berra. The internet is a weird place.
I won’t dig further into ownership of the quote, but it’s a helpful lens for all the 2026 prediction lists you may have seen.
I don’t want to add another list. I’m not good at predicting, and I’d rather not use vague language I could brag about in a year if something vaguely similar happens.
But I do have questions about what’s coming - mostly based on what we’ve seen over the past few months.
So I’ve put them together below, and I’d really like to get yours in the comments!
I’ll try to use what you share to inspire, inform, and explore new topics in 2026 for Artifacts!
Artifacts has questions, not predictions
It wasn’t easy to decide what to add and what to cut, but these are the questions that made it in, each paired with somehow related pieces from Artifacts :)
Will we find a (better) way to pay Wikipedia or newspapers when scraping them for AI?
There is currently no standard way to compensate Wikipedia or news outlets when their content is crawled for AI training or inference, even though we know these sources are foundational to how AI systems “know” things. Wikipedia has reported that 65% of its most expensive traffic comes from bots, meaning it is not only unpaid, but actively bearing the cost of this extraction.
Can we stop sentences like “Writing is thinking, not just putting together words”?
Any stats on how much synthetic content is out there can’t help but be partial, but we all now see plenty of horrible patterns of AI writing: weird short sentences, “this, not that” constructions, those long dashes, and “delve into”. We know why this is the case, but could we please get back to writing better?
Do we really want AI for sexting or brain rot?
AI was sold as a tool to help tackle some of humanity’s hardest problems. Instead, the race to adoption seems to be shrinking ambitions: erotic chatbots, parasocial companions, endless content sludge. Will we keep going down this slope?
Will we just get fewer buttons on screens?
With AI taking over computer use, interfaces are shrinking and becoming simpler, with most of the work happening behind the screens. If this is the new frontier of design, one may wonder how users will keep control, and whether dark patterns (or maybe a better word for them) will surface even more.
Will age verification become normal?
Australia and the UK have already started, and Europe and the US are considering it. Age verification isn’t the silver bullet that makes social media better, but maybe it will become a default when logging in? It also remains to be seen how much of this weight companies will want to bear.
Will we know more about how social media actually works?
One path to making social media better runs through how much we know about the platforms, and therefore how much we’re allowed to look into them. Data access is still far from robust; regulators are going after platforms, but the journey is still long.
Will we be able to make it clear what we want to see on social media?
It’s no mystery that some users want to reclaim more control over what they get to see on social media, and some are building platforms for exactly that. This may increase user satisfaction, but it also clashes with retention practices, which also means money. What matters more, though?
Or will it simply become cool to be off social media?
Digital detoxes and physical artifacts that help us use our phones less are gaining traction, as are bans in schools. Maybe the next cool kid is the one who isn’t posting?
Will there be more space for new, up-and-coming tech companies?
We’ve seen some good antitrust and competition efforts in the US, especially regarding Google, and some good wins for the EU through the Digital Markets Act. If this keeps going as hoped and planned, it will be easier for new players and innovation to emerge.
Will we have an “Obama moment” for AI? And how much synthetic content will flood politics & information?
Or, more likely, a negative one.
Will a political candidate use AI so pervasively - in messaging, targeting, persuasion - that the risks suddenly become obvious to everyone?
Are Large Language Models already not enough?
Yann LeCun, formerly VP and Chief AI Scientist at Meta, is among the most vocal skeptics of LLMs and is now leading the race toward world models: AIs that attempt to understand the world and can simulate cause-and-effect and what-if scenarios to predict outcomes. He’s not alone, and these models may be the alternative to LLMs’ hallucinations.
Last but not least, will the AI bubble finally burst?
This may come with a few implications.
Thoughts, comments, other questions? Or even reply to this email!
Save for Later
Why friendship is just different now. And why design is getting different, too.
An LLM that hasn’t seen the past 100 years. And how the current ones exert influence over what we think.
Well, they read good books. And they do too. But poetry can actually just be math.
He’s good at predicting, so you can read what may lie ahead for 2026.
How ChatGPT Is Weirdly Turning Into Facebook - this guy is so good
Another guy doing and writing cool stuff:
The Bookshelf
I’m not sure how much science is in this one, but I must say that “Mathematical Thinking” by Rutherford succeeds at putting together a few interesting thoughts on how we think and the rules behind it. It’s a quick read, but worthwhile.
📚 All the books I’ve read and recommended in Artifacts are here.
Nerding
Sometimes the problem with automations is that you don’t know what to build. Twin makes it easier to brainstorm and think them through, so it’s worth taking a look to see what to tinker with.