From Google to Near: Illia Polosukhin argues at NOVA that AI should belong to its users
Illia Polosukhin, coauthor of the Transformer architecture and cofounder of Near Protocol, came to the Confluence Conference, held at NOVA on 17 and 18 November, with a clear message: the race for artificial intelligence is not only technological, but also political, economic, and social.
The event, co-organized by the NOVA Blockchain Lab at the NOVA Information Management School and the NOVA SBE Data, Operations & Technology Knowledge Center, was a satellite conference of the 1st IEEE International Conference on Distributed Ledger Technologies. It brought together researchers, industry leaders, startups, and students for two days focused on blockchain technology.
In conversation with the audience, Polosukhin revisited the journey that took him from Google Research – where he helped develop Transformers, now behind systems like ChatGPT, Claude, and many others – to the blockchain ecosystem.
He left Google in 2017 to launch Near AI, a company built around the idea of “teaching machines to code,” long before it became mainstream. At the time, the concept sounded far-fetched: computers writing code from natural language instructions. But a practical obstacle quickly appeared. How do you pay, quickly and globally, the people generating and validating data to train these models, spread from Eastern Europe to Asia?
Traditional financial rails were slow, expensive, and filled with compliance barriers. Existing cryptocurrencies didn’t solve the issue either. “If I send a few dollars and the fee costs the same as the amount I’m sending, the system breaks,” he explained. That challenge ultimately led to Near Protocol, designed from the start as infrastructure for global micropayments, distributed work coordination, and later, a foundation for decentralized applications.
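The arithmetic behind that complaint is easy to make concrete. A minimal sketch (the function name and fee values are illustrative, not Near's actual fee model):

```python
def fee_overhead(amount_usd: float, flat_fee_usd: float) -> float:
    """Return the fee as a fraction of the amount being sent."""
    return flat_fee_usd / amount_usd

# A $2 transfer with a $2 network fee: 100% overhead -- the fee costs
# as much as the payment itself, so micropayments stop making sense.
assert fee_overhead(2.0, 2.0) == 1.0

# A $2 transfer with a $0.001 fee, the regime micropayment rails aim for.
assert fee_overhead(2.0, 0.001) == 0.0005
```

When the overhead approaches 1, paying thousands of data workers a few dollars each becomes economically irrational, which is the gap Near Protocol was designed to close.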
A central point of Polosukhin’s talk was the link between AI and power. As models move closer to forms of “general intelligence,” the real risk is not just technical bugs, but who gets to shape the rules.
“Whoever controls these models controls the biases, what gets shown or hidden, and the decisions suggested to millions of people,” he warned, noting that the systems that already filter news and recommendations fundamentally shape how many people perceive reality.
He described a near future poised between utopia and a "1984"-style dystopia: instead of relying on apps and traditional interfaces, each person could operate through a personal "AI operating system" — an agent acting on their behalf and negotiating with other AI agents, from ordering meals to planning travel or managing investments.
The underlying question, he stressed, is straightforward and hard to resolve: is this system aligned with the user, or with a company seeking to maximize revenue?
As an alternative to today’s centralized, platform-controlled model, Polosukhin presented the idea of user-owned AI, where the system’s objective function is designed to prioritize the user’s well-being rather than advertising metrics or revenue.
To make this technically feasible, Near is building Decentralised Confidential Machine Learning (DCML): a decentralized computing network that uses GPUs and data centers worldwide without revealing user data, model parameters, or the code powering the models.
The foundation of this system lies in Trusted Execution Environments (TEEs), secure hardware-based execution environments from manufacturers like Intel, NVIDIA, and AMD. These allow code to run privately while producing cryptographic proofs that a specific model, with specific code, was executed — without disclosing its contents. On top of this, Near is proposing a “confidential cloud,” where any operator can connect their hardware and be compensated, via blockchain, for providing compute capacity.
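The attestation idea at the heart of TEEs can be sketched in a few lines. This is a toy illustration, not the real Intel, NVIDIA, or AMD attestation APIs: the "measurement" here is just a hash of the loaded code, and a real enclave would return it inside a quote signed by a hardware-rooted key.

```python
import hashlib

def measure(code: bytes) -> str:
    """A TEE records a cryptographic 'measurement' (hash) of the code it loads."""
    return hashlib.sha256(code).hexdigest()

# The operator loads some model-serving code into the enclave...
deployed_code = b"def serve(model, prompt): ..."
attested_measurement = measure(deployed_code)  # reported in a signed quote

# ...and a client compares the quote against the code it expects,
# without ever seeing the user data or model weights inside the enclave.
expected = measure(b"def serve(model, prompt): ...")
assert attested_measurement == expected  # evidence the agreed code is running
```

The point of the mechanism is that the check succeeds or fails based on what actually runs, not on what the operator claims — which is what lets untrusted hardware providers join a "confidential cloud."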
Two early implementations of this vision were also introduced at the conference:
- Private.ai – a fully private alternative to ChatGPT, where conversations remain inaccessible to anyone but the user and are never used for training, while still benefiting from large-scale cloud models.
- Cloud.nearbit.ai – an API that lets developers integrate private AI into their apps without directly accessing sensitive user data, particularly relevant in Europe under the GDPR.
Polosukhin also addressed three pressing topics: energy, hype, and the future of work.
On today’s main bottleneck in AI, he was direct: the constraint is no longer just GPUs — it’s electricity. Training and testing large-scale models requires thousands of GPUs running for days, placing major pressure on electric grids and data center infrastructure worldwide.
Asked whether AI could be “the next dot-com bubble,” he acknowledged the possibility of overinvestment and financial corrections but separated that from the technology itself. What current models can do — compared to just two years ago — “is not a bubble.” In his view, even an “overbuild” of GPUs could be beneficial if decentralized networks learn to use excess capacity.
Regarding students entering a world where “agents will write code,” he offered a mix of caution and encouragement:
"We’re in a unique moment. Ten years from now, almost nothing will be built the way it is today. But right now there’s a huge window. If you understand computer science fundamentals and know how to work with AI, you can be 10 or even 100 times more productive than an engineer just a few years ago."