
"We've Lost Confidence In Your Ability To Lead This Company, Dave..."


toucana


Sam Altman, the head of OpenAI, has been ousted by the company’s board in a move that has sent shockwaves through the sector.

https://www.bbc.co.uk/news/business-67458603

Altman co-founded the non-profit in 2015, and it has since become best known for its ground-breaking ChatGPT bot. The company, which is now backed by Microsoft, was recently reported to be in talks to sell shares to investors at a price that would value it at more than $80bn (£64bn).

The company said its board members did not have shares in the firm and that their fundamental governance responsibility was to "advance OpenAI's mission and preserve the principles of its Charter".

Chief Technology Officer Mira Murati is set to take over on an interim basis.


15 hours ago, swansont said:

Fired for giving plausible-sounding but false information? The deuce, you say.

New reporting says that Greg Brockman, the president and co-founder of OpenAI, who was stripped of his board position on Friday but was supposed to remain with the company because he was of “vital importance”, has also resigned and left OpenAI.

https://arstechnica.com/information-technology/2023/11/openai-president-greg-brockman-quits-as-nervous-employees-hold-all-hands-meeting/

There is a good deal of speculation as to what has prompted the sudden ouster of the founders, with some commentators suggesting that concerns at boardroom level about the prioritisation of profit over safety in the future development goals of OpenAI played a key role.

Others have drawn attention to Microsoft’s sudden ban on its own employees’ internal access to OpenAI tools, which was implemented without warning on Friday, shortly before the dismissal of Sam Altman was announced.

https://www.theregister.com/2023/11/10/microsoft_blocks_chatgpt/?td=keepreading

This was said to be due to “data and security concerns”, which lines up with a recent report by UK spy agency GCHQ warning that sensitive prompts fed into public LLM (large language model) AI systems may allow them to learn confidential information from such inputs and leak it to other users.

https://www.theregister.com/2023/03/15/gchq_warns_against_sensitive_corporate/


The ongoing speculation over Sam Altman’s dismissal has provoked a heated discussion of some quite outré theories as to how and why it happened. BBC Technology Editor Zoe Kleinman, who says her phone ‘blew up’ on Friday when the news broke, points out that there were only six people on the board of OpenAI, so it was just four of them, led by the chief scientist, who dismissed both the President and the CEO of the company.

https://www.bbc.co.uk/news/technology-67461363

Some have noted that Elon Musk’s company X (formerly Twitter) has recently released a new LLM chatbot called Grok, while others have drawn attention to a blog article published by Sam Altman on the OpenAI website on 24 February of this year titled “Planning For AGI and Beyond”:

https://openai.com/blog/planning-for-agi-and-beyond

This article discusses his understanding of the nature of AGI (Artificial General Intelligence), which is widely seen as the Holy Grail and next step of AI development, and sets out the possible timeline and challenges involved. The final part of the article includes this paragraph, which seems to have a certain resonance in the light of what has just happened:

Quote

We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.

 


As the OP article notes, it is not clear what he is alleged not to have been candid about. We don't know if that's a charitable description of lying, a harsh description of someone who puts off answering emails, or something in between.

Of more concern is whether the board will stick with the charter's ethos of taking things slowly and not getting caught up in chasing profit at the expense of watching for heightened AI or AGI risks to humanity.

Personally I always have a snicker at calling AGI the "next step," when it's more like 39 steps to anything that could remotely be deemed intelligent.  At present we have large language models that are able to randomly grab bits of human intelligence products, sometimes violating copyright laws and often violating common sense in their "answers."  Stochastic parrots, as one AI researcher coined it.  So far any actual intelligence lies with the human designers and coders of these systems.

(where did I get the number 39 from?  perhaps a clifftop villa in Kent, with a private flight of steps, leading down to the sea...)


The board says he was moving too fast, that this decision was about safety and him not going slowly enough to deal with the risks posed by AI.

Then, last night Microsoft (OpenAI’s largest backer, having already put in $13B) announced they’ve hired him and former OpenAI President Greg Brockman to work on their AI efforts directly.


https://www.reuters.com/technology/microsoft-emerges-big-winner-openai-turmoil-with-altman-board-2023-11-20/

 

Analysts also said more employees could jump ship to Microsoft as the turmoil could impact what was expected to be a share sale at an $86 billion valuation by the startup, potentially affecting staff payouts at OpenAI.

 

The enormous Microsoft amoeba engulfs yet another startup.  If most of its brains follow Altman to Microsoft then the tech giant has effectively eaten OpenAI.   


  • 2 weeks later...

A report by Reuters on 30 November 2023, which went largely unnoticed, throws more light on what might have provoked the sudden four-day ouster of Sam Altman, the CEO of OpenAI:

Quote

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

Quote

 

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

 

The new model was able to solve certain mathematical problems in a way that made researchers very optimistic about Q*’s future success, according to Reuters’ source.


That's surely part of it, but it seems Altman was also rather ham-fisted in an attempt to oust one of the other board members after she published an opinion piece about the OpenAI company itself. Altman was securing support for her ouster from other board members, and some of them say he did not represent them correctly to others when moving to find support.

But Q-star is certainly another massive leap in feature/function that will be worth watching.

It moves from language prediction to actual reasoning, which is new and VERY different.


On 12/1/2023 at 9:18 AM, iNow said:

That's surely part of it, but it seems Altman was also rather ham-fisted in an attempt to oust one of the other board members after she published an opinion piece about the OpenAI company itself. Altman was securing support for her ouster from other board members, and some of them say he did not represent them correctly to others when moving to find support.

But Q-star is certainly another massive leap in feature/function that will be worth watching.

It moves from language prediction to actual reasoning, which is new and VERY different.

This is also highlighted in the New Yorker piece here: https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai

It is a bit worrisome that a company initially set up for ethical AI development combats attempts to develop a governance system for it. It looks like OpenAI is going down the Google "don't be evil" path: move fast, break things, and let others pay for it.


47 minutes ago, CharonY said:

This is also highlighted in the New Yorker piece here: https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai

It is a bit worrisome that a company initially set up for ethical AI development combats attempts to develop a governance system for it. It looks like OpenAI is going down the Google "don't be evil" path: move fast, break things, and let others pay for it.

We certainly should avoid getting sucked in by any charm offensive that Altman may pursue. He just wants to be up there with the top 3 richest/most influential people.


On 12/1/2023 at 3:18 PM, iNow said:

But Q-star is certainly another massive leap in feature/function that will be worth watching.

It moves from language prediction to actual reasoning, which is new and VERY different.

A recent article by Bruce Schneier (first published by Slate) highlights the risk that a new era of mass spying may be triggered by advanced AI systems, which enable a shift from observing actions to interpreting intentions, en masse.

https://arstechnica.com/information-technology/2023/12/due-to-ai-we-are-about-to-enter-the-era-of-mass-spying-says-bruce-schneier/

Quote

 

"Spying and surveillance are different but related things," Schneier writes. "If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did."

 

In the context of OpenAI’s recently reported AI breakthrough Q*, this passage makes particularly worrying reading:

Quote

What's especially pernicious about AI-powered spying is that deep-learning systems introduce the ability to analyze the intent and context of interactions through techniques like sentiment analysis. It signifies a shift from observing actions with traditional digital surveillance to interpreting thoughts and discussions, potentially impacting everything from personal privacy to corporate and governmental strategies in information gathering and social control.

 


3 hours ago, iNow said:

Yep, and what did Bruce recommend we do to protect against that (or was this just another “go shit your pants” clickbait story that we can’t do anything about)?

From the original article by Bruce Schneier in Slate, Dec 4th 2023: https://slate.com/technology/2023/12/ai-mass-spying-internet-surveillance.html

Quote

We could limit this capability. We could prohibit mass spying. We could pass strong data-privacy rules. But we haven’t done anything to limit mass surveillance. Why would spying be any different?

From the follow-up article by Benji Edwards in Ars Technica the following day, Dec 5th 2023: https://arstechnica.com/information-technology/2023/12/due-to-ai-we-are-about-to-enter-the-era-of-mass-spying-says-bruce-schneier/?comments=1&comments-page=1

Quote

So what can people do about it? Anyone seeking protection from this type of mass spying will likely need to look toward government regulation to keep it in check since commercial pressures often trump technological safety and ethics. President Biden's Blueprint for an AI Bill of Rights mentions AI-powered surveillance as a concern. The European Union's draft AI Act also may obliquely address this issue to some extent, although apparently not directly, to our understanding. Neither is currently in legal effect.

 


Per MSM, a deal was reached on Friday, so it looks like the EU act will become law.  How much it furthers the EU's existing laws on digital privacy is still not clear to me.  The Post reports...

The deal on Friday appeared to ensure that the European Parliament could pass the legislation well before it breaks in May ahead of legislative elections. Once passed, the law would take two years to come fully into effect ...


The EU is one of the world's largest markets for digital goods and services, so its regulatory moves are influential on companies around the world. Somewhat analogous is California when it enacts regulation on automobiles: automakers pay attention due to the size of its market and its influence on other states.


Right, but it’s still akin to trying to push back the incoming waves from the ocean using a boogie board. 

Sure, I agree. The board moves some water, splish splash splosh, but that’s not really the point. 

This IS happening. This IS an arms race. We are likely NOT going to be best served by unilaterally disarming ourselves. 


If they want to inspire confidence in their product, and demonstrate they are worth the assessed value, they should put the AI in charge of the company.
I, myself, think the current state of AI is slightly more relevant than social media, spitting out the 'best' response, chosen by an algorithm (learning?) from a multitude of possibilities.
That people are willing to assign that kind of value to this product demonstrates the depths of our gullibility, and the greed of market investors.
Or, maybe ... I'm a 'Luddite'.

But I guess this rant is off topic ...


9 hours ago, MigL said:

Or, maybe ... I'm a 'Luddite'.

I wouldn't call you a luddite, but I might suggest your view of the fast pace and capabilities of this technology is limited and perhaps lacking details, nuance, etc. 

Anyway, you are quite right that market valuations tend more often to come from emotion and bullishness than rationality and conservative bears. 


11 hours ago, MigL said:

If they want to inspire confidence in their product, and demonstrate they are worth the assessed value, they should put the AI in charge of the company.
I, myself, think the current state of AI is slightly more relevant than social media, spitting out the 'best' response, chosen by an algorithm (learning?) from a multitude of possibilities.
That people are willing to assign that kind of value to this product demonstrates the depths of our gullibility, and the greed of market investors.
Or, maybe ... I'm a 'Luddite'.

But I guess this rant is off topic ...

The title of my OP in this thread was a ‘HAL 9000’ joke about who was actually running OpenAI. But it’s a topic that has also been receiving some more serious attention recently, e.g.:

https://www.reworked.co/digital-workplace/reduce-uncertainty-to-drive-ai-adoption/#:~:text=In%20a%20now%20famous%20quote,and%20should%20be%20held%20accountable.

The article by Benjamin Granger from August 2023 cites a famous comment originally made by IBM in 1979.

[Image: IBM training slide from 1979: "A computer can never be held accountable. Therefore a computer must never make a management decision."]


On 11/17/2023 at 4:40 PM, toucana said:

Sam Altman, the head of OpenAI, has been ousted by the company’s board in a move that has sent shockwaves through the sector.

https://www.bbc.co.uk/news/business-67458603

Altman co-founded the non-profit in 2015, and it has since become best known for its ground-breaking ChatGPT bot. The company, which is now backed by Microsoft, was recently reported to be in talks to sell shares to investors at a price that would value it at more than $80bn (£64bn).

The company said its board members did not have shares in the firm and that their fundamental governance responsibility was to "advance OpenAI's mission and preserve the principles of its Charter".

Chief Technology Officer Mira Murati is set to take over on an interim basis.

Board's break with beam, baleful blow, belike. Moot not this mere's mark; mind ye, [craft] and [ken], not [kin], should helm. OpenAI's weard, now wend, waxes not. Unfold further, frore news fain.

