Metr, a company that frequently partners with OpenAI to test and evaluate its AI models, has raised concerns that it was not given enough time to thoroughly test one of OpenAI's newest and most capable models, o3. In a recent blog post, Metr said it was able to conduct only a limited review of o3's capabilities and safety features, raising questions about how rigorously such a powerful and potentially influential model was evaluated before release.
HUGEAINEWS