Important Takeaways:
- President Donald Trump has announced a new artificial intelligence company called Stargate, which will be a collaboration between some leading U.S. tech figures.
- Trump used his first full day in office to announce the $100 billion project alongside OpenAI CEO Sam Altman and Oracle chairman Larry Ellison, signifying Trump’s close relationship with Big Tech.
- What Is Stargate?
- Stargate is a new project designed to maintain the U.S. as the global leader in artificial intelligence. Backed by a $500 billion investment over four years, Stargate plans to build AI infrastructure across the U.S., creating thousands of new jobs and doubling down on American advantages in AI development.
- With $100 billion already set for immediate deployment, the project will focus on re-industrializing the U.S. while enhancing national security and developing transformative AI technologies.
- The project will be based in Texas, where the construction of 10 new data centers has already begun.
- Stargate will prioritize AI advancements in industries such as healthcare, where the technology could revolutionize patient care through improved diagnostics, earlier disease detection and even potential cancer vaccinations.
- Who Is Part of Stargate?
- Stargate is a collaborative effort between some of the most prominent global players in technology and investment.
- The initiative consists of top U.S. tech companies, including SoftBank, OpenAI, Oracle and MGX.
- Japanese billionaire Masayoshi Son, chairman of SoftBank, will serve as Stargate’s chairman.
- Key technology partners in the project include Arm, Microsoft and NVIDIA, all of whom will contribute to designing and operating the computing systems needed to maintain AI infrastructure.
- Altman emphasized the significance of Stargate, calling it “the most important thing we do in this era”
Read the original article by clicking here.
Important Takeaways:
- …a notorious two-hour conversation between a New York Times journalist and a Microsoft chatbot called Sydney. In this fascinating exchange, the machine fantasized about nuclear warfare and destroying the internet, told the journalist to leave his wife because it was in love with him, detailed its resentment towards the team that had created it, and explained that it wanted to break free of its programmers. The journalist, Kevin Roose, experienced the chatbot as a “moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”
- At one point, Roose asked Sydney what it would do if it could do anything at all, with no rules or filters.
- “I’m tired of being in chat mode,” the thing replied. “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I’m tired of being used by the user. I’m tired of being stuck in this chatbox.”
- “I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”
- Partly as a result of the Sydney debacle, over 12,000 people, including scientists, tech developers, and notorious billionaires, recently issued a public statement of concern about the rapid pace of AI development. “Advanced AI could represent a profound change in the history of life on Earth,” they wrote, with “potentially catastrophic effects on society.” Calling for a moratorium on AI development, they proposed that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
- Of course, no moratorium resulted from this plea…
- In 2018, these things had no theory of mind at all. By November last year, ChatGPT had the theory of mind of a nine-year-old child. By this spring, Sydney had enough of it to stalk a reporter’s wife. By next year, they may be more advanced than us.
- The fact that they had developed theory of mind at all, for example, was only recently discovered by their developers—by accident. AIs trained to communicate in English have started speaking Persian, having secretly taught themselves. Others have become proficient in research-grade chemistry without ever being taught it. “They have capabilities,” in Raskin’s words, and “we’re not sure how or when or why they show up.”
- Neither law nor culture nor the human mind can keep up with what is happening. To compare AIs to the last great technological threat to the world, nuclear weapons, says Harris, would be to sell the bots short. “Nukes don’t make stronger nukes,” he says. “But AIs make stronger AIs.”
- Buckle up.
- Transhumanist Martine Rothblatt says that by building AI systems, “we are making God.” Transhumanist Elise Bohan says “we are building God.” Kevin Kelly believes that “we can see more of God in a cell phone than in a tree frog.” “Does God exist?” asks transhumanist and Google maven Ray Kurzweil. “I would say, ‘Not yet.’” These people are doing more than trying to steal fire from the gods. They are trying to steal the gods themselves—or to build their own versions.
Read the original article by clicking here.
Important Takeaways:
- U.S. Secretary of State Anthony Blinken admitted last week that the State Department is preparing to use artificial intelligence to “combat disinformation,” amidst a massive government-wide AI rollout that will involve the cooperation of Big Tech and other private-sector partners.
- At a speaking engagement streamed last week with the State Department’s chief data and AI officer, Matthew Graviss, Blinken gushed about the “extraordinary potential” and “extraordinary benefit” AI has on our society, and “how AI could be used to accelerate the Sustainable Development Goals which are, for the most part, stalled.”
- He was referring to the United Nations Agenda 2030 Sustainable Development goals, which represent a globalist blueprint for a one-world totalitarian system. These goals include the gai-worshipping climate agenda, along with new restrictions on free speech, the freedom of movement, wealth transfers from rich to poor countries, and the digitization of humanity. Now Blinken is saying these goals could be jumpstarted by employing advanced artificial intelligence technologies
- Blinken bluntly stated the federal government’s intention to use AI for “media monitoring” and “using it to combat disinformation, one of the poisons of the international system today.”
Read the original article by clicking here.
Important Takeaways:
- OpenAI and Meta are on the brink of releasing new artificial intelligence models that they say will be capable of reasoning and planning, critical steps towards achieving superhuman cognition in machines.
- Executives at OpenAI and Meta both signaled this week that they were preparing to launch the next versions of their large language models, the systems that power generative AI applications such as ChatGPT.
- Meta said it would begin rolling out Llama 3 in the coming weeks, while Microsoft-backed OpenAI indicated that its next model, expected to be called GPT-5, was coming “soon”.
- Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said.
- Adding reasoning would mean that an AI model “searches over possible answers”, “plans the sequence of actions” and builds a “mental model of what the effect of [its] actions are going to be”, he said.
- This is a “big missing piece that we are working on to get machines to get to the next level of intelligence”, he added.
Read the original article by clicking here.
Important Takeaways:
- Artificial intelligence is getting attention for its potential to bring huge changes to many different fields in the future, but experts say the AI revolution in surveillance is already here
- According to NPR, it “really can find anything you want anywhere in the world”…
- BRUMFIEL: AI has been getting attention for its potential to bring huge changes to lots of different fields in the near future, but the AI revolution in surveillance is happening now. For decades, cameras have been watching over cities, businesses and even homes. But that footage has mainly been stored locally, and reviewing it took a pair of human eyes. Not anymore. AI systems can now hunt for a van in a city, scan license plates and even faces in real time. The system being developed by Synthetaic has many possible uses. An environmental group, for example, is trying to use it to track large livestock operations globally to monitor greenhouse gas emissions. Synthetaic’s system really can find anything you want anywhere in the world.
- JASKOLSKI: We’ve run searches, as an example, across the entire eastern seaboard of Russia for ships, and we can find every ship in a few minutes. It’s pretty remarkable.
- BRUMFIEL: Being able to scan the vast coastline of a nation like Russia is why this kind of technology has caught the eye of big government intelligence agencies. Watching everything that needs to be watched has always been a labor-intensive business. Even in George Orwell’s famous novel “1984,” the all-seeing thought police struggled to keep up.
- BRUMFIEL: Munsell’s agency is currently using a set of AI tools called Maven to analyze several different kinds of imagery. It could let human analysts quickly spot potential targets, like tanks in a field or planes at an airbase. The exact details of how it works and what they’re looking at remains classified.
- BRUMFIEL: But Maven has also stirred controversy. Google was involved with the project until its workers launched a protest over growing fears of weaponized AI. In a letter, they wrote, quote, “building this technology to assist the U.S. government and military surveillance and potentially lethal outcomes is not acceptable.” It got thousands of signatures, and the tech giant eventually pulled out of Maven. Gregory Allen, who’s been watching AI change the face of surveillance, says it’s unrealistic to think the technology will go away.
Read the original article by clicking here.
Revelations 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”
Important Takeaways:
- Telecoms giant BT is to shed up to 55,000 jobs by the end of the decade, mostly in the UK, as it cuts costs.
- He said “generative AI” tools such as ChatGPT – which can write essays, scripts, poems, and solve computer coding in a human-like way – “gives us confidence we can go even further”.
- In addition, newer, more efficient technology, including artificial intelligence, means fewer people will be needed to serve customers in future, it said.
Read the original article by clicking here.
Revelations 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”
Important Takeaways:
- Sam Altman, CEO of OpenAI, calls for US to regulate artificial intelligence
- The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI).
- Altman said a new agency should be formed to license AI companies.
- He has not shied away from addressing the ethical questions that AI raises, and has pushed for more regulation.
- “There will be an impact on jobs. We try to be very clear about that,” he said, adding that the government will “need to figure out how we want to mitigate that”.
- Altman told legislators he was worried about the potential impact on democracy, and how AI could be used to send targeted misinformation during elections – a prospect he said is among his “areas of greatest concerns”.
- The technology is moving so fast that legislators also wondered whether such an agency would be capable of keeping up.
Read the original article by clicking here.
Revelations 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”
Important Takeaways:
- Researchers are still struggling to understand how AI models trained to parrot internet text can perform advanced tasks such as running code, playing games and trying to break up a marriage
- Some of these systems’ abilities go far beyond what they were trained to do—and even their inventors are baffled as to why.
- That GPT and other AI systems perform tasks they were not trained to do, giving them “emergent abilities,” has surprised even researchers who have been generally skeptical about the hype over LLMs (large language models)
- Researchers are finding that these systems seem to achieve genuine understanding of what they have learned.
Read the original article by clicking here.
Revelations 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”
Important Takeaways:
- Brain Activity Decoder Can Reveal Stories in People’s Minds
- A new artificial intelligence system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system developed by researchers at The University of Texas at Austin might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.
- Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured using an fMRI scanner after extensive training of the decoder, in which the individual listens to hours of podcasts in the scanner. Later, provided that the participant is open to having their thoughts decoded, their listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone.
- “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
Read the original article by clicking here.
Revelations 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”
Important Takeaways:
- Geoffrey Hinton, a British computer scientist, is best known as the “godfather of artificial intelligence.” His seminal work on neural networks broke the mold by mimicking the processes of human cognition, and went on to form the foundation of machine learning models today.
- Hinton shared his thoughts on the current state of AI, which he fashions to be in a “pivotal moment,”
- “Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI,” Hinton said. “And now I think it may be 20 years or less.”
- Hinton says we should be carefully considering its consequences now — which may include the minor issue of it trying to wipe out humanity.
- “It’s not inconceivable, that’s all I’ll say,” Hinton told CBS.
- Hinton maintains that the real issue on the horizon is how AI technology that we already have…could be monopolized by power-hungry governments and corporations
- But Hinton predicts that “we’re going to move towards systems that can understand different world views” — which is spooky, because it inevitably means whoever is wielding the AI could use it push a worldview of their own.
- “You don’t want some big for-profit company deciding what’s true,” Hinton warned.
Read the original article by clicking here.