The US AI Safety Institute—part of the National Institute of Standards and Technology (NIST)—has finally announced its leadership team after much speculation. Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF) but is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST risks encouraging non-scientific thinking that many critics view as sheer speculation. There have been rumors that NIST staffers oppose the hiring.

BING NEWS:
  • OpenAI Dissolves AI Safety Team after Co-founder Resignation
    OpenAI has dissolved its team that focused on developing safe AI systems and aligning AI with human intentions.
    05/18/2024 - 8:34 pm
  • What's Going On at OpenAI? Both its Chief Scientist and an AI Safety Leader Just Quit
    Ilya Sutskever was one of OpenAI's co-founders and was key to last year's ousting of CEO Sam Altman. Jan Leike worked on keeping AIs safe. It's more turmoil for a company that's already had its share ...
    05/15/2024 - 3:42 am
  • Feds appoint “AI doomer” to run AI safety at US institute
    The US AI Safety Institute—part of the National Institute ... some fear that by appointing a so-called "AI doomer," NIST may be risking encouraging non-scientific thinking that many critics ...
    04/17/2024 - 2:30 am
