misinformation. Let's go back to group one, the AI companies, OpenAI specifically, because The New York Times' Kevin Roose has a piece out saying people inside the company have expressed concerns about how reckless it is. And let's go back just a few months: it was the former board members who kicked out CEO Sam Altman for reckless behavior. He, of course, turned that on its head, pushed them out, and he is back in charge. What should we make of that? I think there are two issues there. One is that the reckless behavior they were talking about makes concerns about the election seem very small; their concern is the existential risk of AI, and they're calling out Sam Altman for disabling the red team, basically, that was in charge of AI safety generally. I think the issue here is that Silicon Valley sometimes overlooks the importance of civil society, and when things
Governments should play to their existing strengths in data collection to make AI safer for their citizens, including assessing what kinds of data are too risky to allow private companies to collect.
Tesla mogul Elon Musk and OpenAI CEO Sam Altman rubbed shoulders with some of their fiercest critics, while China co-signed the "Bletchley Declaration" alongside the United States and others, signalling a willingness to cooperate despite mounting tensions with the West.
AI Safety: The board will develop recommendations for the transportation sector, pipeline and power grid operators, internet service providers and others to "prevent and prepare for AI-related disruptions to critical services that impact national or economic security, public health, or safety."