Billionaire Club Co LLC

Ex-OpenAI employee speaks out about why he was fired: 'I ruffled some feathers'

Former OpenAI employee Leopold Aschenbrenner spoke out about his firing. Jaap Arriens/Getty

  • Leopold Aschenbrenner spoke about his firing from OpenAI's superalignment team in a podcast.
  • He said HR warned him after he shared a memo about OpenAI's security with two board members.
  • He said his firing was related to sharing a brainstorming doc with outside researchers.

A former OpenAI researcher opened up about how he "ruffled some feathers" by writing and sharing documents related to safety at the company, and was eventually fired.

Leopold Aschenbrenner, who graduated from Columbia University at 19, according to his LinkedIn, worked on OpenAI's superalignment team before he was reportedly "fired for leaking" in April. He spoke about the experience in a recent interview with podcaster Dwarkesh Patel released Tuesday.

Aschenbrenner said he wrote a memo after a "major security incident" that he didn't specify in the interview, and shared it with a couple of OpenAI board members. In the memo, he wrote that the company's security was "egregiously insufficient" to protect against the theft of "key algorithmic secrets from foreign actors," Aschenbrenner said. He had previously shared the memo with others at OpenAI, "who mostly said it was helpful," he added.

HR later gave him a warning about the memo, Aschenbrenner said, telling him that it was "racist" and "unconstructive" to worry about Chinese Communist Party espionage.

An OpenAI lawyer later asked him about his views on AI and AGI and whether Aschenbrenner and the superalignment team were "loyal to the company," as the researcher put it. Aschenbrenner claimed the company then went through his OpenAI digital artifacts.

He was fired shortly after, he said, with the company alleging that he had leaked confidential information and had not been forthcoming in its investigation, and citing his prior warning from HR over sharing the memo with the board members.

Aschenbrenner said the leak in question referred to a "brainstorming document on preparedness, on safety, and security measures" needed for artificial general intelligence, or AGI, which he had shared with three external researchers for feedback. He said he had reviewed the document for sensitive information before sharing it, and that it was "totally normal" at the company to share this kind of document for feedback.

Aschenbrenner said OpenAI deemed confidential a line about "planning for AGI by 2027-2028 and not setting timelines for preparedness." He said he wrote the document a couple of months after the superalignment team was announced, which referenced a four-year planning horizon. In its announcement of the superalignment team, posted in July 2023, OpenAI said its goal was to "solve the core technical challenges of superintelligence alignment in four years."

"I didn't think that planning horizon was sensitive," Aschenbrenner said in the interview. "You know, it's the sort of thing Sam says publicly all the time," he said, referring to CEO Sam Altman.

An OpenAI spokesperson told Business Insider that the concerns Aschenbrenner raised internally and to its board of directors "did not lead to his separation."

"While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work," the spokesperson said.

Aschenbrenner is one of several former employees who have recently spoken out about safety concerns at OpenAI. Most recently, a group of nine current and former OpenAI employees signed a letter calling for more transparency at AI companies and protections for those who express concerns about the technology.

Read the original article on Business Insider.

Copyright © 2024 Billionaire Club Co LLC. All rights reserved.