AI with reasoning power will be less predictable, Ilya Sutskever says


FILE PHOTO: Former OpenAI chief scientist Ilya Sutskever predicted that reasoning capabilities will make technology far less predictable.
| Photo Credit: Reuters

Former OpenAI chief scientist Ilya Sutskever, one of the biggest names in artificial intelligence, had a prediction to make on Friday: reasoning capabilities will make technology far less predictable.

Accepting a “Test Of Time” award for his 2014 paper with Google’s Oriol Vinyals and Quoc Le, Sutskever said a major change was on AI’s horizon.

An idea his team had explored a decade ago, that scaling up the data used to “pre-train” AI systems would send them to new heights, was starting to reach its limits, he said. More data and computing power had produced ChatGPT, which OpenAI launched in 2022 to the world’s acclaim.

“But pre-training as we know it will unquestionably end,” Sutskever declared before thousands of attendees at the NeurIPS conference in Vancouver. “While compute is growing,” he said, “the data is not growing, because we have but one internet.”

Sutskever offered some ways to push the frontier despite this conundrum. He said technology itself could generate new data, or AI models could evaluate multiple answers before settling on the best response for a user, to improve accuracy. Other scientists have set sights on real-world data.

But his talk culminated in a prediction for a future of superintelligent machines, which he said “obviously” await, a point with which some disagree. Sutskever this year co-founded Safe Superintelligence Inc in the aftermath of Sam Altman’s short-lived ouster from OpenAI, a move he played a role in and, within days, said he regretted.

Long-in-the-works AI agents, he said, will come to fruition in that future age, with deeper understanding and self-awareness. AI, he said, will reason through problems the way humans can.

There’s a catch.

“The more it reasons, the more unpredictable it becomes,” he said.

Reasoning through millions of options could make any outcome non-obvious. By way of example, AlphaGo, a system built by Alphabet’s DeepMind, surprised experts in the highly complex board game Go with its inscrutable 37th move on its way to defeating Lee Sedol in their 2016 match.

Sutskever said similarly, “the chess AIs, the really good ones, are unpredictable to the best human chess players.”

AI as we know it, he said, will be “radically different.”

