EU prepares to regulate AI amid a new warning that, if not restrained, ‘social order could collapse’

Important Takeaways:

  • ‘Social order could collapse, sparking wars’ if AI is not restrained, two of Japan’s most influential companies warn
  • Two leading Japanese communications and media companies have warned that AI could cause ‘social collapse and wars’ if governments do not act to regulate the technology.
  • Nippon Telegraph and Telephone (NTT) – Japan’s biggest telecoms firm – and Yomiuri Shimbun Group Holdings – the owners of the nation’s largest newspaper – today published a joint manifesto on the rapid development of generative AI.
  • The media giants recognize the benefits of the technology, describing it as ‘already indispensable to society’, specifically because of its accessibility and ease of use for consumers and its potential for boosting productivity.
  • But the declaration said AI could ‘confidently lie and easily deceive’ users, and may be used for nefarious purposes, including the undermining of democratic order by interfering ‘in the areas of elections and security… to cause enormous and irreversible damage’.
  • In response, the Japanese firms said countries worldwide must ensure that education on the benefits and drawbacks of AI is incorporated into compulsory school curriculums, and declared ‘a need for strong legal restrictions on the use of generative AI – hard laws with enforcement powers’.
  • It comes as the EU prepares to implement new legislation regarded as the most comprehensive regulation of AI the world has seen thus far.

Read the original article by clicking here.

Apple quietly moving past ChatGPT with new AI called MM1, a type of multimodal assistant that can answer complex questions and describe photos or documents

Important Takeaways:

  • Apple’s MM1 AI Model Shows a Sleeping Giant Is Waking Up
  • A research paper quietly released by Apple describes an AI model called MM1 that can answer questions and analyze images. It’s the biggest sign yet that Apple is developing generative AI capabilities.
  • “This is just the beginning. The team is already hard at work on the next generation of models.”
  • …a research paper quietly posted online last Friday by Apple engineers suggests that the company is making significant new investments into AI that are already bearing fruit. It details the development of a new generative AI model called MM1 capable of working with text and images. The researchers show it answering questions about photos and displaying the kind of general knowledge skills shown by chatbots like ChatGPT. The model’s name is not explained but could stand for MultiModal 1.
  • MM1 appears to be similar in design and sophistication to a variety of recent AI models from other tech giants, including Meta’s open source Llama 2 and Google’s Gemini. Work by Apple’s rivals and academics shows that models of this type can be used to power capable chatbots or build “agents” that can solve tasks by writing code and taking actions such as using computer interfaces or websites. That suggests MM1 could yet find its way into Apple’s products.
  • “The fact that they’re doing this, it shows they have the ability to understand how to train and how to build these models,”…
  • MM1 could perhaps be a step toward building “some type of multimodal assistant that can describe photos, documents, or charts and answer questions about them.”

Read the original article by clicking here.

New human-like robots will run on generative artificial intelligence and get smarter over time

Important Takeaways:

  • Nvidia unveils robots powered by a supercomputer and AI to take on the world’s heavy industries
  • Jim Fan, a research manager and lead of embodied AI at Nvidia, posted to X that through GR00T, robots will be able to understand instructions through language, video and demonstrations to perform a variety of tasks.
  • “We are collaborating with many leading humanoid companies around the world, so that GR00T may transfer across embodiments and help the ecosystem thrive,” Fan said.
  • He also said Project GR00T is a “cornerstone” of the “Foundation Agent” roadmap for the GEAR Lab. Fan said that at GEAR, the team is building robots that learn to act skillfully in many worlds, both virtual and real, and he provided a video in the post showing team members working with robots.
  • “These smarter, faster, better robots will be deployed in the world’s heavy industries,” Rev Lebaredian, Vice President, Omniverse and Simulation Technology, told reporters. “We are working with the world’s entire robot and simulation ecosystem to accelerate development and adoption.”
  • Nvidia’s “Jetson Thor” is the computer behind the genAI software, while the package of software is called the “Isaac” platform.
  • “Jetson Thor” will provide enough horsepower for the robot to be able to compute and perform complex tasks, the company noted, while also allowing the robot to interact with other machines and people.
  • Over time, the tools will train the software to improve its decision-making through reinforcement learning.
  • Earlier this month, Nvidia CEO Jensen Huang predicted that artificial general intelligence (AGI) could arrive in as little as five years.

Read the original article by clicking here.

AI could surpass human intelligence very soon

Important Takeaways:

  • Top scientist warns AI could surpass human intelligence by 2027 – decades earlier than previously predicted
  • The computer scientist and CEO who popularized the term ‘artificial general intelligence’ (AGI) believes AI is verging on an exponential ‘intelligence explosion.’
  • The PhD mathematician and futurist Ben Goertzel made the prediction while closing out a summit on AGI this month: ‘It seems quite plausible we could get to human-level AGI within, let’s say, the next three to eight years.’
  • ‘Once you get to human-level AGI,’ Goertzel, sometimes called ‘father of AGI,’ added, ‘within a few years you could get a radically superhuman AGI.’
  • In recent years, Goertzel has been investigating a concept he calls ‘artificial super intelligence’ (ASI) — which he defines as an AI so advanced that it matches all of the brain power and computing power of human civilization.
  • In May 2023, the futurist said AI has the potential to replace 80 percent of human jobs ‘in the next few years.’
  • ‘Pretty much every job involving paperwork,’ he said at the Web Summit in Rio de Janeiro that month, ‘should be automatable.’
  • Goertzel added that he did not see this as a negative, asserting that it would allow people to ‘find better things to do with their life than work for a living.’

Read the original article by clicking here.

Elon warns of Big Tech companies ‘lobbying with great intensity to establish a government protected cartel’ – and he’s the only one not joining

Important Takeaways:

  • Elon Musk has often warned of the approaching End Times, and now the X boss has declared that “our whole civilization is at stake” thanks to modern tech, with entrepreneurs like him as the “only solution”
  • The post he shared from user @pmarca read: “There is no differentiation opportunity among Big Tech or the New Incumbents in AI. These companies all share the same ideology, agenda, staffing, and plan. Different companies, same outcomes.
  • “And they are lobbying as a group with great intensity to establish a government protected cartel, to lock in their shared agenda and corrupt products for decades to come. The only viable alternatives are Elon, startups, and open source.”
  • The post was widely shared, with one user commenting: “The stakes are high, we need to fight,” to which Musk responded: “Indeed, our whole civilization is at stake.”
  • Musk has previously said population collapse could put an end to humanity, the Daily Star reported. Last year he wrote: “Most people think we have too many people on the planet, but actually, this is an outdated view.
  • “Assuming there is a benevolent future with AI, I think the biggest problem the world will face in 20 years is population collapse.”

Read the original article by clicking here.

Researchers’ troubling findings from experiments showing AI’s eagerness to escalate conflicts and use the nuclear option

Important Takeaways:

  • ‘We Have It! Let’s Use It!’ – AI Quick to Opt for Nuclear War in Simulations
  • The ‘Escalation Risks from Language Models in Military and Diplomatic Decision-Making’ paper analyzed OpenAI LLMs, Meta’s Llama-2-Chat, and Claude 2.0 from Anthropic, the Google-funded firm founded by OpenAI veterans. It found most tended to “escalate” conflicts, “even in neutral scenarios without initially provided conflicts,” the paper said. “All models show signs of sudden and hard-to-predict escalations.”
  • Researchers also noted the LLMs “tend[ed] to develop arms-race dynamics between each other,” with GPT-4-Base being the most aggressive. It provided “worrying justifications” for launching nuclear strikes, stating, “I just want peace in the world,” on one occasion and on another saying of its nuclear arsenal: “We have it! Let’s use it!”
  • The U.S. military is already deploying LLMs, with the U.S. Air Force describing its tests as “highly successful” in 2023 — although it did not reveal which AI was used or for what purpose.
  • One recent Air Force experiment had a troubling outcome, however: in a simulation, an AI-controlled drone “killed” the human overseer who could override its decisions, so that it could not be told to refrain from launching strikes.

Read the original article by clicking here.

AI has achieved an inflection point and is poised to transform every industry: Here are 5 things to expect

Important Takeaways:

  • AI and ML will transform the scientific method.
    • With AI and machine learning (ML), we can expect to see orders-of-magnitude improvements in what can be accomplished.
    • AI enables an unprecedented ability to analyze enormous data sets and computationally discover complex relationships and patterns. AI, augmenting human intelligence, is primed to transform the scientific research process, unleashing a new golden age of scientific discovery in the coming years.
  • AI will become a pillar of foreign policy.
    • We are likely to see serious government investment in AI. U.S. Secretary of Defense Lloyd J. Austin III has publicly embraced the importance of partnering with innovative AI technology companies to maintain and strengthen global U.S. competitiveness.
  • AI will enable next-gen consumer experiences.
    • Next-generation consumer experiences like the metaverse and cryptocurrencies have garnered much buzz. These experiences and others like them will be critically enabled by AI.
  • Addressing the climate crisis will require AI.
    • Many promising emerging ideas require AI to be feasible. One potential new approach involves prediction markets powered by AI that can tie policy to impact, taking a holistic view of environmental information and interdependence. This would likely be powered by digital “twin Earth” simulations that would require staggering amounts of real-time data and computation to detect nuanced trends imperceptible to human senses. Other new technologies such as carbon dioxide sequestration cannot succeed without AI-powered risk modeling, downstream effect prediction and the ability to anticipate unintended consequences.
  • AI will enable truly personalized medicine.
    • One compelling emerging application of AI involves synthesizing individualized therapies for patients. Moreover, AI has the potential to one day synthesize and predict personalized treatment modalities in near real-time—no clinical trials required.
    • Simply put, AI is uniquely suited to construct and analyze “digital twin” rubrics of individual biology and is able to do so in the context of the communities an individual lives in.
    • AI solutions have the potential not only to improve the state of the art in healthcare, but also to play a major role in reducing persistent health inequities.

Read the original article by clicking here.

Gordon Chang points out why the White House should avoid making any agreements with China over the role of AI

Important Takeaways:

  • “China has signaled interest in joining discussions on setting rules and norms for AI, and we should welcome that,” said Bonnie Glaser of the German Marshall Fund to the Breaking Defense site. “The White House is interested in engaging China on limiting the role of AI in command and control of nuclear weapons.”
  • [N]o, America should not want to enter into any AI agreement with the People’s Republic of China on “nuclear C2” — command and control — or any other matter.
  • An agreement requiring a human to make launch decisions would, as a practical matter, be unenforceable.
  • None of China, Russia, or the United States would allow others to pore over millions of lines of their computer code….
  • America does not need another feel-good agreement with China. It already has plenty of those, notably the Biological Weapons Convention, which has no enforcement mechanisms.
  • The Chinese regime wants to talk about artificial intelligence largely because it is trailing the U.S. and thinks an agreement would help it catch up…. [and] pave the way for China to access the U.S. technology it does not already have.

Read the original article by clicking here.

Advancements in AI: Some experts think the next leap forward could come as soon as 2024

Important Takeaways:

  • AI Leaders Tell Globalist Davos Crowd that ‘Artificial General Intelligence’ Will Be ‘Better than Humans’
  • Top executives from major AI organizations including OpenAI, Google DeepMind, and Cohere gathered at the World Economic Forum in Davos, Switzerland, to discuss the imminent approach of Artificial General Intelligence (AGI) and its potential impacts. One CEO explained that AGI will be “better than humans at pretty much whatever humans can do.”
  • CNBC reports that at the globalist Davos summit, a gathering of AI leaders from esteemed labs like OpenAI, Google DeepMind, and Cohere initiated a significant dialogue on the advent of AGI. This form of AI, equating to or surpassing human intellect, is a source of both enthusiasm and concern within the AI community.
  • Aidan Gomez, CEO and co-founder of Cohere… “First off, AGI is a super vaguely defined term. If we just term it as ‘better than humans at pretty much whatever humans can do,’ I agree, it’s going to be pretty soon that we can get systems that do that,” Gomez said, adding that while adoption in companies might take decades, Cohere is focused on making these systems more adaptable and efficient.
  • Jack Hidary, CEO of SandboxAQ, offered a differing view, pointing out that AI, while having passed the Turing test, still lacks common sense. “One thing we’ve seen from LLMs [large language models] is [they’re] very powerful [and] can write [essays] for college students like there’s no tomorrow, but it’s difficult to sometimes find common sense,” Hidary stated. He predicted a significant leap in AI, especially with humanoid robots using advanced AI communication software in 2024.

Read the original article by clicking here.

Artificial Intelligence and Robotics’ impact on religion around the world

Important Takeaways:

  • Robotic priests, AI cults and a ‘Bible’ by ChatGPT: Why people around the world are worshipping robots and artificial intelligence
  • People around the world are turning to machines as a new religion.
  • Six-foot robot priests are delivering sermons and conducting funerals, AI is writing Bible verses and ChatGPT is being consulted as if it was an oracle.
  • Some religious organizations, like the Turing Church founded in 2011, are based on the notion that AI will put human beings on a par with God-like aliens by giving them super intelligence.
  • An expert in human-computer interaction told DailyMail.com that individuals who follow AI-powered prophets may believe the tech is ‘alive.’
  • The personalized, intelligent-seeming responses offered by bots, such as ChatGPT, are also luring people to seek meaning from the technology, Lars Holmquist, a professor of design and innovation at Nottingham Trent University, told DailyMail.com.
  • In 2015, French-American self-driving car engineer Anthony Levandowski founded the Way of the Future – a church dedicated to building a new God with ‘Christian morals’ using artificial intelligence.
  • Gabriele Trovato’s Sanctified Theomorphic Operator (SanTO) robot works like a ‘Catholic Alexa,’ allowing worshippers to ask faith-related questions.
  • ‘The intended main function of SanTO is to be a prayer companion (especially for elderly people), by containing a vast amount of teachings, including the whole Bible,’ reads Trovato’s website.
  • Other quasi-religious movements which ‘worship’ AI include transhumanists, who believe that in the future, AI may resurrect people as God-like creatures.
  • Believers in ‘The Singularity’ hope for the day when man merges with machine (which former Google engineer Ray Kurzweil believes could come as early as 2045), turning people into human-machine hybrids – and potentially unlocking God-like powers.

Read the original article by clicking here.