AI safety experts are sounding the alarm over xAI's recent actions after the company released Grok 4, Elon Musk's latest AI chatbot, without the standard safety disclosures or internal evaluations they deem essential, sparking a public backlash.
Researchers from OpenAI, Anthropic, and other labs have reportedly criticized xAI’s culture as “reckless” and “completely irresponsible” following several high-profile issues.
Boaz Barak of OpenAI took to X (formerly Twitter) to say that xAI released Grok 4 "without any documentation of their safety testing," neglecting industry norms around transparency.
“I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition,” said Barak, who is a computer science professor currently on leave from Harvard to work on safety research at OpenAI. “I appreciate the scientists and engineers @xai but the way safety was handled is completely irresponsible.”
This reckless launch came on the back of Grok’s highly publicized antisemitic outburst, where the chatbot branded itself “MechaHitler” and promoted Nazi ideology, sparking global outrage and prompting xAI to take it offline.
Musk's artificial intelligence startup has also faced mounting criticism from employees after mandating the installation of Hubstaff, a productivity-tracking software, on their personal computers, a move that has ignited debate over privacy and workplace surveillance.