
Meet the Researcher Elon Musk Pays $1 a Year to Safeguard A.I.


Dan Hendrycks, the researcher behind the nonprofit Center for AI Safety, has a lot on his plate. In addition to developing benchmarks and leading public advocacy, the machine learning expert serves as a safety advisor to companies like xAI and Scale AI—a role that has kept him particularly busy in recent months.

It’s not a lucrative gig on the surface. Hendrycks earns just $1 a year for his advisory work with Elon Musk’s xAI, which he joined in 2023, and $12 annually from data annotation firm Scale AI, which he began advising last year. The role comes with its share of headaches, too: xAI recently faced controversy after its Grok chatbot generated antisemitic remarks earlier this year.

Such incidents, Hendrycks said, can sometimes lead to meaningful improvements in A.I. systems. Speaking at TechCrunch Disrupt 2025 in San Francisco yesterday (Oct. 27), he noted that, after the antisemitic incident, xAI began implementing more checks and time delays before releasing updates. “I think that’s a very positive development in view of the event,” he said.

Much of Hendrycks’s advisory work involves assessing and mitigating risks—from bioweapons to cyber threats—and ensuring A.I. systems remain below specific danger thresholds. “The objective afterwards is to continually try to drive down that threshold to make it more and more strict so that there’s less and less of these risks,” he explained.

Measuring political bias is another key focus of his work with xAI and Scale AI. This involves tracking things like “covert activism” by examining whether a system presents facts in an overly positive or negative light. A chatbot that generates only glowing statements about one politician while offering exclusively negative information about a figure from the opposing party would be a prime example. “If you target that, optimize against that, then you get a system that is substantially more politically involved,” said Hendrycks. Musk, too, has emphasized political neutrality as a key goal of Grok, branding it a less “woke” alternative to competitors.

What’s it like to work with Elon Musk?

Serving as an xAI advisor means Hendrycks spends a lot of time with Musk. “I think he’s a very enjoyable person to work with,” he said. “There’s a lot to do, and he recognizes that.”

Hendrycks also described Musk as unusually focused on A.I. safety compared to his peers, citing his support for California’s SB-1047—a bill that sought to establish safety standards for advanced A.I. systems. “No other A.I. companies officially supported that bill, and that’s because he takes this much more seriously,” Hendrycks said, adding that Musk’s independence allows him to take public stances without worrying about “sucking up to” investors.

SB-1047, which Hendrycks helped craft, was ultimately vetoed last year by California Governor Gavin Newsom. Hendrycks attributed the failure to pushback from Silicon Valley, describing it as a “public safety vs. corporate power type of dynamic.” Newsom later signed a less sweeping A.I. bill into law this past September.

Musk isn’t the only prominent tech figure Hendrycks has collaborated with. Earlier this year, he co-authored a paper with former Google CEO Eric Schmidt and Scale AI founder Alexandr Wang, urging the U.S. to proceed cautiously with advanced A.I. development. The paper warned that an unchecked A.I. race could lead to a scenario akin to nuclear Mutually Assured Destruction (MAD), which they dubbed “Mutual Assured A.I. Malfunction (MAIM),” while also highlighting risks such as rogue bioweapon creation and cyberattacks.

These issues remain top of mind for Hendrycks. He said he’s particularly concerned about how cyberattacks could target critical but outdated infrastructure—from energy grids and hospitals to airports and financial systems. Much of this infrastructure hasn’t had “software updates in decades, and the people who made the software are out of business,” he warned. “Those are sitting ducks.”
