OpenAI CEO Sam Altman has firmly defended the company’s latest large language model, GPT-5, against a wave of criticism that followed its release. Speaking at the company’s AI summit in San Francisco, Altman argued that much of the negative reaction stems from a misunderstanding of what GPT-5 is designed to accomplish. While previous models such as GPT-4 and GPT-4o were optimized primarily for conversation, creativity, and productivity tasks, GPT-5’s architecture marks a decisive shift toward what Altman calls “scientific artificial intelligence” — a form of AI capable of conducting, verifying, and accelerating real scientific research.

Altman emphasized that GPT-5 is not just an upgrade in scale but a reimagining of how AI learns and reasons. The model’s new “multi-domain reinforcement layer” allows it to cross-reference findings across disciplines such as physics, molecular biology, and mathematics, generating hypotheses, validating them through simulation, and flagging anomalies for human review.

According to internal reports, early research partners have already used GPT-5 to optimize battery chemistry, design new protein structures, and predict patterns in climate data that had previously been hidden from human researchers. Critics, however, have accused OpenAI of overhyping the model’s general capabilities. Some users on social media complained about slower response times, limited creativity in casual conversation, and restrictions in its public interface. Altman addressed these concerns directly, acknowledging that GPT-5 was not built to entertain but to “transform the pace of discovery.”

He likened the shift to the early days of computing, when mainframes were criticized for lacking user-friendly interfaces even though they revolutionized science behind the scenes. Industry observers have noted that GPT-5’s infrastructure relies on an expanded network of NVIDIA Blackwell GPUs and a proprietary OpenAI interconnect called “Synapse,” which allows thousands of AI nodes to collaborate on complex reasoning tasks.

This distributed system represents one of the largest coordinated AI grids ever deployed, capable of managing hundreds of terabytes of contextual memory. Altman also hinted that GPT-6 and GPT-7 are already in early research phases, each aiming to integrate symbolic logic, autonomous experimentation, and continuous learning in controlled environments.

He stated that OpenAI’s long-term mission is to merge generative intelligence with analytical rigor, creating AI that is as precise in its reasoning as it is fluent in human language. “This is not just about smarter chat,” Altman said. “It’s about creating a tool that can help humanity understand reality itself.” For now, GPT-5 is being gradually rolled out to enterprise clients, research institutions, and selected developers through the OpenAI API, while public access remains limited to ensure ethical safeguards and stability. Despite the controversy, the company’s focus on scientific AI may signal a turning point — one that redefines artificial intelligence not as an entertainment system, but as an engine of discovery.