Live Breaking News & Updates on Future Society

Stay informed with the latest breaking news about Future Society on our comprehensive webpage. Get up-to-the-minute updates on local events, politics, business, entertainment, and more. Our dedicated team of journalists delivers timely, reliable news, ensuring you're always in the know. Discover firsthand accounts, expert analysis, and exclusive interviews, all in one convenient destination. Don't miss a beat: visit our webpage for real-time breaking news on Future Society and stay connected to the pulse of your community.

Trahan Architects Unveils Design for USA Pavilion at Expo 2025 Osaka

Engage in a global conversation on the future of our environment and cities at the USA Pavilion for Expo 2025 Osaka-Kansai, featuring exhibitions on sustainability and entrepreneurship.

United-states , France , Osaka , Japan , Kansai , Japan-general- , Australia , Tokyo , Australian , Japanese , American , Sumayya-vally

Bethenny Frankel Calls This Perfume the 'Fine Wine' of Fragrance

Bethenny Frankel is a beauty expert, and her opinion of the Future Society perfume collection has helped it gain popularity — details here

Madagascar , Hollywood , California , United-states , Jennifer-aniston , Bethenny-frankel , Nordstrom , Academy-award , Walgreens , Society-haunted-rose-eau-de-parfum , Future-society , Haunted-rose

Alsace Wines France Pavilion Gold Partners at the Osaka World Expo 2025

Osaka , Japan , Osaka-bay , Japan-general- , Vins , Provence-alpes-côte-d-azur , France , Japanese , French , Sosuke-fujimoto , Society-for-our , France-pavilion

Envisioning Africa's AI governance landscape in 2024

Windhoek , Khomas , Namibia , Ethiopia , Morocco , Senegal , Mauritius , Benin , Sierra-leone , South-africa , Rwanda , Egypt

First day of Xavier Bettel's working visit to Japan

On the occasion of his working visit to Japan, the Minister of Foreign Affairs and Foreign Trade, Xavier Bettel, travelled to Osaka to take part in a ceremony marking the launch of construction work on the Luxembourg pavilion for the 2025 World Expo.

Osaka , Japan , Tokyo , Luxembourg , Japanese , Doki , Ichinoki-manatsu , Xavier-bettel , Society-for-our , Development-cooperation , Sustainable-development-goals , Foreign-affairs

The 42 best Valentine's Day gifts of 2023, per celebrities

Shop the best 2023 Valentine's Day gifts for men and women, based on celebrities' favorite beauty products, jewelry and underwear — including Oprah's picks.

United-kingdom , New-york , United-states , Hollywood , California , British , Gwyneth-paltrow , Camilla-cabello , Justin-bieber , Monica-vinader , Bella-hadid , Jeremy-allen-white

AI regulation imminent as IMF predicts a 40% global impact on jobs

Ghana , Republic-of-ghana , Republic-of-ghana-national-artificial-intelligence-strategy , Ministry-of-communication , Artificial-intelligence , Future-society , Ghana-national-artificial-intelligence-strategy

Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI

I've seen a few conversations where someone says something like this:

> I've been using an open-source LLM lately -- I'm a huge fan of not depending on OpenAI, Anthropic, or Google. But I'm really sad that the AI safety groups are trying to ban the kind of open-source LLM that I'm using.

Someone then responds:

> What! Almost no one *actually* wants to ban open source AI of the kind that you're using! That's just a recklessly-spread myth! AI safety orgs just want to ban a tiny handful of future models -- no one has tried to pass laws that would have banned current open-sourced models!

This second claim is false.
Many AI "safety" organizations and people have advocated bans that would have criminalized the open-sourcing of models *currently extant* as of now, January 2024. Even more organizations have pushed for bans that would cap open-source AI capabilities at more or less exactly their current limits.

(I use "open-sourcing" broadly to refer to making weights generally available, not necessarily to specific open-source-compliant licensing.)

At least a handful of the organizations that have pushed for such bans are well funded and becoming increasingly well connected to policy makers.

Note that I think it's *entirely understandable* that someone would not realize such bans have been the goal of some AI safety orgs!

For comprehensible reasons -- i.e., the fact that many people, including many people interested in AI safety, judge such policies to be a horrible idea -- these organizations have often directed the documents explaining their proposed policies to bureaucrats, legislative staffers, and so on, and have not been proactive in communicating their goals to the public.

Note also that *not all* AI safety organizations or AI-safety-concerned people are trying to do this -- although, to be honest, a disturbing number are.

At least a handful of people in some organizations believe -- as do I -- that open source has been increasingly vital for AI safety work (https://www.beren.io/2023-11-05-Open-source-AI-has-been-vital-for-alignment/). Given how harmful *past* ban proposals would have been, I think many future proposals are likely to be harmful as well, especially since the arguments for them look pretty much identical.

Anyhow, a partial list:
1: Center for AI Safety

The Center for AI Safety is a well-funded 501(c)(3) (with more than 9 million USD in grants: https://www.openphilanthropy.org/grants/?organization-name=center-for-ai-safety) that focuses mostly on AI safety research and outreach. You've probably heard of them because they gathered signatures for their one-sentence statement on AI risk (https://www.safe.ai/statement-on-ai-risk).

Nevertheless, they are also involved in policy. In response to the National Telecommunications and Information Administration's (NTIA) request for comment, they sent the NTIA proposed regulatory rules (https://www.regulations.gov/comment/NTIA-2023-0005-1416).
These rules propose defining "powerful AI systems" as any system that meets or exceeds any of the following measures:

> - Computational resources used to train the system (e.g., 10^23 floating-point operations or "training FLOP"; this is approximately the amount of FLOP required to train GPT-3. Note that this threshold would be updated over time in order to account for algorithmic improvements.) [Note from 1a3orn: this means updated downwards.]
> - Large parameter count (e.g., 80B parameters)
> - Benchmark performance (e.g., > 70% performance on the Multi-task Language Understanding benchmark (MMLU))
Systems meeting any of these thresholds would, according to the proposal, be subject to a number of requirements that would effectively ban open-sourcing them.

Llama 2 was trained with more than 10^23 FLOP, and thus would have been banned under this rule. Fine-tunes of Llama 2 also score greater than 70% on the MMLU (https://www.reddit.com/r/LocalLLaMA/comments/159l9ll/llama270bguanacoqlora_becomes_the_first_model_on/) and thus *also* would have been banned under this rule.

Note that -- despite how this would have prevented the release of Llama 2, and thus thousands of fine-tunes and an enormous quantity of safety research (https://www.beren.io/2023-11-05-Open-source-AI-has-been-vital-for-alignment/) -- the document boasts that its proposals "only regulate a small fraction of the overall AI development ecosystem."
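To make the thresholds concrete, here is a minimal sketch (mine, not the proposal's) that checks a model against the three quoted criteria. It estimates training compute with the standard ~6 x parameters x tokens approximation for dense transformers; the Llama 2 figures (70B parameters, ~2T training tokens, base-model MMLU of roughly 0.689) are Meta's reported numbers, and the function and variable names are my own invention.

```python
# Sketch: would a model count as a "powerful AI system" under the
# thresholds quoted above? Illustrative only; the proposal's text governs.

FLOP_THRESHOLD = 1e23    # training compute (~GPT-3 scale)
PARAM_THRESHOLD = 80e9   # parameter count
MMLU_THRESHOLD = 0.70    # benchmark performance

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer rule of thumb: ~6 FLOP per parameter per token."""
    return 6 * n_params * n_tokens

def is_powerful_ai_system(n_params: float, n_tokens: float, mmlu: float) -> bool:
    """A system qualifies if it meets or exceeds ANY of the three measures."""
    return (
        estimate_training_flop(n_params, n_tokens) >= FLOP_THRESHOLD
        or n_params >= PARAM_THRESHOLD
        or mmlu >= MMLU_THRESHOLD
    )

# Llama 2 70B: 70e9 params, ~2e12 training tokens, base-model MMLU ~0.689.
print(estimate_training_flop(70e9, 2e12))        # ~8.4e23, well over 1e23
print(is_powerful_ai_system(70e9, 2e12, 0.689))  # True: compute alone triggers
```

Because the criteria are joined by OR, the compute estimate alone puts Llama 2 over the line, before any benchmark score is even considered.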
2: Center for AI Policy

The Center for AI Policy -- different from the Center for AI Safety! -- is a DC-based lobbying organization. The announcement of their existence (https://www.lesswrong.com/posts/unwRBRQivd2LYRfuP/introducing-the-center-for-ai-policy-and-we-re-hiring) made some waves, because the rules they initially proposed would have required the *already-released* Llama 2 to be regulated by a new agency.

However, in a recent interview (https://www.thebayesianconspiracy.com/2023/12/202-the-center-for-ai-policy-talks-government-regulation/) they say that they're "trying to use the lightest touch we can -- we're trying to use a scalpel." Does this mean that they have changed their views?

Well, they haven't yet made any legislation they're proposing visible. But in the same interview they say that models trained with more than 3x10^24 FLOP, or scoring more than 85 on the MMLU, would fall into their "high risk" category -- which, according to the interview, explicitly means they would be banned from being open-sourced.

This would have outlawed Falcon 180B (https://huggingface.co/blog/falcon-180b) by the FLOP measure, although -- to be fair -- Falcon 180B was open-sourced by an organization in the United Arab Emirates, so it's not certain the rule would have mattered.

As for the MMLU measure, no open-source model at that level has *yet* been released, but GPT-4 scores ~90% on the MMLU. This therefore amounts to a law attempting to permanently cap open-source models below GPT-4 level -- a level I otherwise think open source is reasonably likely to reach in 2024.

(I do not understand why AI safety orgs think that MMLU scores are a good way to measure danger.)
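The Falcon 180B claim is easy to check with the same rough 6 x parameters x tokens approximation. The 180B-parameter and ~3.5T-token figures below are TII's published training details; the MMLU score is the reported base-model result and should be treated as approximate.

```python
# Back-of-envelope: Falcon 180B vs. the reported "high risk" thresholds.

HIGH_RISK_FLOP = 3e24   # training-compute line from the interview
HIGH_RISK_MMLU = 85.0   # MMLU line from the interview

falcon_flop = 6 * 180e9 * 3.5e12   # ~3.78e24 estimated training FLOP
falcon_mmlu = 70.6                 # reported base-model MMLU (approximate)

print(f"estimated training FLOP: {falcon_flop:.2e}")    # 3.78e+24
print("over FLOP line:", falcon_flop > HIGH_RISK_FLOP)  # True
print("over MMLU line:", falcon_mmlu > HIGH_RISK_MMLU)  # False: compute alone triggers
```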
3: Palisade Research

This nonprofit, headed by Jeffrey Ladish, has as its stated goal to "create concrete demonstrations of dangerous capabilities to advise policy makers and the public on AI risks." That is, they try to make LLMs do dangerous or scary things so that politicians will do particular things for them.

Unsurprisingly, Ladish himself literally called for the government to stop the release of Llama 2, saying "we can prevent the release of a LLaMA 2! We need government action on this asap" (https://twitter.com/JeffLadish/status/1654319741501333504).

United-arab-emirates , Jeffrey-ladish , National-telecommunications , Information-administration , Palisade-research , Google , Life-institute , Multi-task-language-understanding , Future-society

Expo 2025 Osaka, Kansai, Japan to have the theme "Designing Future Society for Our Lives"

Kansai , Japan-general- , Japan , Osaka , Society-for-our , Japan-association , Future-society