OpenAI: A Rollercoaster of Good and Bad News

Written by Brandon Lwowski, July 2024

OpenAI seems to take one step forward and then a step back. Are they a force for good in the AI world or a company fraught with issues? With every piece of good news, something negative often follows, casting a shadow over their progress.

Take their recent partnership with TIME, for example. This was a significant move, as OpenAI now has access to over 100 years of digital data, including photos and articles. The partnership means they can retrain their models on data they have explicit permission to use, rather than scraping private data from the internet. This should lead to more accurate and reliable outputs, potentially reducing the risk of the AI giving dangerous or nonsensical advice ("you should eat one rock a day" *cough cough*).

However, this positive development was quickly overshadowed by reports of OpenAI's questionable internal practices. According to a letter obtained by The Washington Post, OpenAI has been accused of using non-disclosure agreements to prevent employees from speaking out about the company. These agreements allegedly discouraged communication with the SEC about securities violations, forced employees to waive their rights to whistleblower incentives, and required them to notify the company if they contacted government regulators. Additionally, there are claims that OpenAI has stripped vested equity from employees who refused to sign these NDAs upon leaving the company.

Another leaked document revealed a new project codenamed "Strawberry," which could be a huge step forward in GenAI. The project apparently aims to train AI to plan tasks and mimic human-like reasoning, addressing a significant limitation of current Generative AI technology. If true, this could be a groundbreaking development, potentially revolutionizing many industries.

But then came news that OpenAI had suffered a hack last year and kept it from the public, raising further concerns about data privacy and the security risks associated with using these models. As the leader in GenAI research, OpenAI is navigating uncharted territory with many eyes watching their every move, so speed bumps are expected. Still, every step forward seems to be followed by some bad PR, which hinders their efforts to improve adoption and democratize Generative AI.

In the end, OpenAI’s journey is a complex one, filled with both promising advancements and significant challenges. Their ability to balance these highs and lows will be crucial in determining their long-term impact on the AI landscape.