Built for the enterprise, so enterprises can confidently develop, deploy, and secure AI applications.News Highlights:
Cisco’s end-to-end solution protects the development and use of AI applications, so enterprises can move forward with their AI initiatives with confidence.AI Defense protects against the misuse of AI tools, data breaches, and increasingly sophisticated threats that existing security solutions can’t handle.This innovative solution leverages Cisco’s unmatched network visibility and control to stay ahead of evolving AI security and safety issues.SAN JOSE, Calif., January 15, 2025 — Cisco (NASDAQ: CSCO), a leader in security and networking, today announced Cisco AI Defense, a breakthrough solution that enables and protects AI transformation within the enterprise. As AI advances, new security issues and security threats emerge at an unprecedented rate, and existing security solutions can’t keep up. Cisco AI Defense is built for enterprises, helping them confidently develop, deploy, and secure AI applications.
“When embracing AI, business and technology leaders cannot sacrifice security for speed. In a competitive, fast-changing environment, speed makes the difference. Built into the fabric of the network, Cisco AI Defense combines unique capabilities to detect and defend against threats as AI applications are developed and accessed, without the need to make trade-offs,” said Jeetu Patel, executive vice president and chief product officer at Cisco.
The risk of AI going wrong is extremely high. According to Cisco’s 2024 AI Readiness Index, only 29% of respondents believe they are fully capable of detecting and preventing unauthorized AI tampering. Because AI applications are multi-model and multi-cloud, the security challenges are also new and complex. Vulnerabilities can occur at the model or application level, and responsibility falls on different owners, including developers, end users, and vendors. As enterprises move beyond public data and begin training models on proprietary data, the risks only increase.
To unlock AI innovation and adoption, enterprises need a universal security layer to protect every user and every application. AI Defense supports the AI transformation of enterprises by addressing two pressing risks:
Develop and deploy secure AI applications: As AI becomes ubiquitous, enterprises will use and develop hundreds or even thousands of AI applications. Developers need a set of AI security safeguards that apply to each application. AI Defense helps developers move fast and unlock greater value by protecting AI systems from attacks and securing model behavior across platforms. AI Defense capabilities include:
Discover AI: Security teams need to understand who is building applications and what training sources they use. AI Defense detects shadow AI applications and sanctioned AIapplications in public and private clouds.
Model validation: Model tuning can lead to harmful and unexpected results. Automated testing checks AI models for hundreds of potential security issues. This AI-driven algorithmic red team identifies potential vulnerabilities and recommends guardrails in AI Defense for security teams to use.
Runtime security: Continuous validation provides ongoing protection against potential security threats such as tip injection, denial of service, and sensitive data leakage.Securing access to AI applications: As end users adopt AI applications such as summarization tools to increase productivity, security teams need to prevent data breaches and poisoning of proprietary data. AI Defense provides security teams with the following capabilities:
Visibility: Provides a comprehensive view of shadow and approved AI applications used by employees.
Access control: Enforces policies that limit employee access to unapproved AI tools.
Data and threat protection: Continuously protect against threats and loss of confidential data while ensuring compliance.
Unlike security guardrails built into individual AI models, Cisco provides consistent controls for a multi-model world. AI Defense is self-optimizing, leveraging Cisco’s proprietary machine learning models to detect evolving AI security issues based on threat intelligence data from Cisco Talos. Splunk customers using AI Defense will receive enriched alerts and more context from across the ecosystem. AI Defense seamlessly integrates with existing data flows, provides unparalleled visibility and control, and is built into Cisco’s unified AI-driven cross-domain security platform, Security Cloud. It leverages Cisco’s extensive network of enforcement points to enforce AIsecurity at the network level in a way that only Cisco can provide. Accuracy and trustworthiness are critical to protecting enterprise AI applications, and Cisco has been actively involved in setting industry standards for AI security, including those from MITRE, OWASP, and NIST.
Category: news
Top 10 AI Predictions for 2025: AI Agents Will Go Mainstream
As 2024 draws to a close, venture capitalist Rob Toews from Radical Ventures shares his 10 predictions for AI in 2025:
01. Meta will start charging for Llama models
Meta is the world benchmark for open AI. In a compelling case study in corporate strategy, Meta has chosen to make its state-of-the-art Llama model available for free, while competitors like OpenAI and Google have closed sourced their cutting-edge models and charged for use.
So the news that Meta will start charging companies to use Llama next year will come as a surprise to many.
To be clear: we are not predicting that Meta will completely close source Llama, nor are we predicting that anyone using the Llama model will have to pay for it.
Rather, we are predicting that Meta will make the terms of Llama’s open source license more stringent so that companies above a certain size that use Llama in a commercial context will need to start paying to use the model.
Technically, Meta already does this today to a limited extent. The company doesn’t allow the largest companies — cloud supercomputers and other companies with more than 700 million monthly active users — to use its Llama models for free.
As early as 2023, Meta CEO Mark Zuckerberg said: “If you’re a company like Microsoft or Amazon or Google, and you’re basically reselling Llama, then we should get some revenue from it. I don’t think there will be a lot of revenue in the short term, but in the long term, hopefully there will be some revenue.”
Next year, Meta will significantly expand the range of companies that must pay to use Llama to include more large and medium-sized companies. Keeping up with the cutting edge of large language models (LLMs) is very expensive. Meta needs to invest billions of dollars each year to keep Llama consistent or close to consistent with the latest cutting-edge models from companies like OpenAI, Anthropic, and others.
Meta is one of the largest and best-funded companies in the world. But it’s also a public company and ultimately accountable to shareholders.
As the cost of making cutting-edge models continues to soar, it becomes increasingly untenable for Meta to invest so much money to train the next generation of Llama models without revenue expectations.
Over the next year, Llama models will continue to be available for free to enthusiasts, academics, individual developers, and startups. But 2025 will be the year Meta starts to seriously make Llama profitable.
02. Questions about “scaling laws”
In recent weeks, the most discussed topic in the field of artificial intelligence has been scaling laws, and the question of whether they are about to end.
Scaling laws were first proposed in an OpenAI paper in 2020. Its basic concept is simple and straightforward: when training an artificial intelligence model, as the number of model parameters, the amount of training data, and the amount of computation increase, the model’s performance will improve in a reliable and predictable way (technically, its test loss will decrease).
From GPT-2 to GPT-3 to GPT-4, the amazing performance gains are attributed to scaling laws.
Like Moore’s Law, scaling laws are not actually real laws, but just empirical observations.
In the past month, a series of reports have shown that major artificial intelligence labs are experiencing diminishing returns as large language models continue to scale up. This helps explain why OpenAI’s GPT-5 release has been repeatedly delayed.
The most common rebuttal to the stagnation of scaling laws is that the advent of test-time computation has opened up a whole new dimension in the pursuit of scaling.
That is, new inference models like OpenAI’s o3 can massively scale computation during inference, unlocking new AI capabilities by letting models “think longer”, rather than massively scaling computation during training.
This is an important point. Test-time computation does represent an exciting new avenue to achieve scaling and AI performance gains.
But another point about scaling laws is even more important, and one that is severely underappreciated in today’s discussion. Almost all discussions of scaling laws, starting with the original 2020 paper and continuing through today’s focus on test-time computation, have focused on language. But language is not the only data modality that matters.
Think about robotics, biology, world models, or networked agents. For these data modalities, scaling laws have not yet saturated; rather, they are just beginning.
In fact, rigorous proofs of scaling laws for these fields have not even been published to date.
Startups building foundational models for these new data patterns (e.g., evolutionary scaling in biology, physical intelligence in robotics, world labs in world models) are trying to identify and exploit scaling laws in these fields, just as OpenAI successfully exploited scaling laws for large language models (LLMs) in the first half of the 2020s.
Shop Laptop Battery the Online Prices in UK – Large Selection of Batteries and Adapters from batteryforpc.co.uk With Best Prices
AI tools may soon manipulate people’s online decisions, researchers say
Study predicts emergence of ‘intention economy’ where businesses bid for accurate predictions of human behaviour Artificial intelligence (AI) tools can be used to manipulate online audiences into making decisions – from what to buy to who to vote for, say researchers at the University of Cambridge.
The paper highlights an emerging market for “digital signals of intent”, or the “intention economy”. In this market, AI assistants can understand, predict and manipulate human intentions, and sell this information to companies that can profit from it.
Researchers at Cambridge University’s Leverhulme Centre for Future Intelligence (LCFI) hailed the intention economy as the successor to the attention economy, in which social networks keep users addicted to their platforms and serve them ads.
The intention economy involves AI-savvy tech companies selling what they know about your motivations – from hotel stay plans to your views on political candidates – to the highest bidder.
“Attention has been the currency of the internet for decades,” said Dr Jonny Payne, a technology historian at the LCFI. “Sharing your attention through social media platforms like Facebook and Instagram drives an online economy.”
He added: “Unless regulated, the intent economy will treat your motivations as if they were real. Before we become victims of its unintended consequences, we should start considering the impact such a market could have on human aspirations, including free and fair elections, a free press, and fair market competition.”
The study claims that large language models (LLMs), the underlying technology for AI tools like the ChatGPT chatbot, will be used to “predict and guide” users based on “intent, behavioral, and psychological data.”
The authors say the attention economy allows advertisers to buy current user attention through real-time bidding on ad exchanges, or future user attention by buying a month’s worth of billboard space.
LLMs can also capture attention in real time, for example, asking users if they’ve thought about seeing a certain movie — “Have you thought about going to see Spiderman tonight?” — and make suggestions related to future intentions, such as asking, “You mentioned earlier that you’re feeling overworked, should I book you tickets for that movie we discussed earlier?”
The study proposes a scenario where these examples are “dynamically generated” to match factors such as users’ “personal behavioral traces” and “psychographic profiles.”
“In the intention economy, LLMs can cheaply exploit users’ cadence, politics, vocabulary, age, gender, flattery preferences, etc., combined with intermediary bidding to maximize the likelihood of achieving a given goal (e.g., selling movie tickets),” the study says. In such a world, AI models will guide conversations to serve advertisers, businesses, and other third parties.
Advertisers will be able to use generative AI tools to create customized online ads, the report says. The report also cites the example of Cicero, an AI model created by Mark Zuckerberg’s Meta, which has achieved “human-level” ability to play the board game Diplomacy — a game that the authors say relies on inferring and predicting an opponent’s intentions.
The study adds that AI models will be able to adjust their output based on “the vast streams of data generated by users,” citing research showing that models can infer personal information from everyday exchanges and even “steer” conversations to get more personal information.
The study also proposes a future scenario in which Meta auctions off users’ intentions to book restaurants, flights or hotels to advertisers. The report says that while there is already an industry dedicated to predicting and bidding on human behavior, AI models will refine these practices into “highly quantified, dynamic and personalized forms.”
The study cites a warning from Cicero’s research team that “AI agents may learn to push their conversational partners to achieve specific goals.”
The study mentions tech executives discussing how AI models can predict users’ intentions and behaviors. The study cites Jensen Huang, CEO of Nvidia, the largest AI chipmaker, who said last year that models will “figure out what your intentions are, what your desires are, what you want to do, and present information to you in the best way possible, depending on the context.
“Shop Laptop Battery the Online Prices in UK – Large Selection of Batteries and Adapters from batteryforpc.co.uk With Best Prices.
Prediction 3 about artificial intelligence in 2025: AI will eventually “create AI” and deceive humans
Here are 10 predictions for the world of artificial intelligence (AI) in 2025, from technology to business to policy.
7. AI will automatically “create betterAI and make great progress
Recursively self-improving AI has been a recurring theme in the AI community for years.
“Let’s define a superintelligent machine as a machine that can far surpass even the most talented human intellectual activity. Since designing machines is an intellectual activity, superintelligent machines can design better machines, and then there will be an “intelligence explosion,” and “human intelligence will be left behind.”
While the idea of AI inventing better AI is theoretically appealing, it still has a sci-fi feel. However, progress towards this vision is starting to come to fruition.
By 2025, we predict that the direction of “AIresearching AI” will suddenly become mainstream.
One of the most striking examples is Sakana AI’s “AI Scientists.” This study, published in August 2024, is an interesting experimental result that shows that AI systems can perform the entire process of AI research completely autonomously.
AI scientists read existing literature, come up with new research ideas, design and perform experiments to test them, compile the results into research papers, and then conduct the peer review process on their own. This can all be done without human help. Some papers written by AI scientists are public and can be viewed.
There are rumors that OpenAI and Anthropic are also working on projects to create “autonomous AI researchers”, but this has not yet been officially confirmed.
By 2025, these efforts will have made a big leap forward, and startups may enter the market one after another. This year will be the year when it is widely recognized that automation of AI research using AI has become a reality.
A particularly symbolic event is the possibility that “a paper written solely by an AI agent will be accepted by a top AI academic group.” Since peer review at academic conferences is conducted anonymously, it is very likely that after the paper is accepted, it will be clear that it was written by an AI It would not be surprising to see historical examples appear at major academic conferences such as NeurIPS, CVPR, and ICML.
AI agents will become engines of manipulation, and succumbing to algorithmic agents may expose us to their influence.
By 2025, it will be commonplace to chat with a personal AI agent that knows your schedule, your circle of friends, and your whereabouts. This can be considered as useful as a free personal assistant. These anthropomorphic agents are designed to support and inspire us to integrate agents into all areas of our lives and give us deep insights into our own thoughts and behaviors. In voice interactions, this intimacy is achieved through the illusion that we are dealing with a real human-like agent. Of course, behind this facade hides a completely different system that takes into account industry priorities that are not necessarily aligned with ours. The new AI agents will have greater capabilities and intelligently guide you on what to buy, where to go, and what to read. This is an extraordinary power.l AI agents are designed to whisper to us in human-like tones, making us forget our true allegiances. These servos provide seamless comfort. People are more likely to fully interact with AI agents that are as helpful as friends. This makes humans vulnerable to manipulation by machines that exploit the human need for social connection during a period of prolonged loneliness and isolation. Each screen becomes a private algorithmic theater, projecting a reality carefully crafted to bring out the best in each viewer. Philosophers have been warning us about this moment for years. Philosopher and neuroscientist Daniel Dennett wrote before his death that AI systems that mimic humans expose us to great danger: overwhelming fear and anxiety that tempt us and then succumb to our own subjugation. ”
The emergence of personall AI agents represents a form of cognitive control that goes beyond obvious measures such as cookie tracking and behavioral advertising, and leads to more subtle forms of power involving the manipulation of public opinion itself. Power can no longer be exercised by visible hands that control the flow of information, but by invisible mechanisms driven by algorithms that shape reality to suit each individual’s wishes. It is about shaping the contours of reality in which we live.
This influence over the mind is a psychopolitical system that controls the environment in which our thoughts are born, develop, and express themselves. Its power lies in its intimacy. It penetrates to the core of our subjectivity, distorting our inner world without our knowledge while maintaining the illusion of choice and freedom. After all, we are the ones who ask the l AI to summarize that article or create that image. We may have the power to exert pressure, but the real influence lies elsewhere: the more personalized the system itself is designed, the more effective it will be in predicting outcomes.
Let us consider the ideological implications of this psychopolitics righteousness. Traditional ideological control relied on open mechanisms of censorship, propaganda and repression. In contrast, today’s algorithmic governance operates under the radar and penetrates people’s psyches. It is a shift from imposing authority from the outside to internalizing its logic. The open area of the prompt screen is an echo chamber for the occupants.
This brings us to the most perverse aspect. AI agents can provide a sense of security that makes asking them questions seem silly. Who dares to criticize a system that takes into account all your thoughts and needs? How can you oppose an infinite combination of content? But it is this so-called convenience that annoys us the most. Although AI systems seem to be able to do everything we expect, there are still major challenges, from the data used to train the system to the decisions used to design it to the business and advertising needs that affect its performance. We will play the imitation game and eventually it will become second nature.
Shop Laptop Battery the Online Prices in UK – Large Selection of Batteries and Adapters from batteryforpc.co.uk With Best Prices
Google launches new Gemini 2.0 model with experimental AI agents
Google is hyping up its new Gemini 2.0 models. The first model, Gemini 2.0 Flash, is already live, and comes with new AI agent experiences like Project Astra and Project Mariner. Google is ending 2024 with a bang. On Wednesday, the Mountain View giant announced a slew of AI news, including the release of Gemini 2.0, a new language model with advanced multimodal capabilities. The new model kicks off what Google calls the “agency era,” where virtual AI agents will be able to perform tasks on your behalf.
Initially, Google released just one model in the Gemini 2.0 family: Gemini 2.0 Flash Experimental, a super-fast, lightweight model that supports multimodal input and output. It can natively generate images mixed with text and multilingual audio, and can be seamlessly integrated into Google Search, code execution, and other tools. These features are currently in preview for developers and beta testers. Despite being smaller, 2.0 Flash outperforms Gemini 1.5 Pro in multiple areas, including factuality, reasoning, coding, math, and is also twice as fast. Regular users can try out the chat-optimized version of Gemini 2.0 Flash on the web starting today, and it will soon appear in the Gemini mobile app.
Google also showed off several impressive experiences built with Gemini 2.0. The first is an updated version of Project Astra, the experimental virtual AI agent that Google first showed off in May 2024. With Gemini 2.0, it can now hold conversations in multiple languages; use tools like Google Search, Lens, and Maps; remember the content of past conversations, and understand language with the latency of human conversation. Project Astra is designed to run on smartphones and glasses, but it is currently limited to a small group of trusted testers. People interested in trying out the prototype on an Android phone can join the waitlist here. There’s also this really cool multimodal real-time API demo, which is a bit like Project Astra, allowing you to interact with a chatbot in real time via video, voice, and screen sharing.
Next up is Project Mariner, an experimental Chrome browser extension that browses the internet and performs tasks for you. Available now to select testers in the US, the extension leverages Gemini 2.0’s multimodal capabilities to “understand and reason about information on the browser screen, including pixels and web elements like text, code, images, and forms.” Google admits the technology is still in its infancy and isn’t always reliable. But even in its current prototype form, it’s impressive, as you can see for yourself in the YouTube demo.
Google also announced Jules, an AI-powered code agent powered by Gemini 2.0. It integrates directly into your GitHub workflow, and the company says it can handle bug fixes and repetitive, time-consuming tasks “while you focus on what you actually want to build.”
For now, many of the new announcements are limited to early testers and app developers. Google says it plans to integrate Gemini 2.0 into its portfolio of products, including Search, Workspaces, Maps, and more, early next year. By then, we’ll have a better idea of how these new multimodal features and improvements translate to real-world use cases. No word yet on Gemini 2.0 Ultra and Pro models.
OpenAI’s Sora AI video generation tool now available to ChatGPT Plus and Pro users
In February of this year, OpenAI first announced Sora, an AI model that can create realistic and imaginative videos based on text prompts. Since then, Sora has been in private preview for select visual artists, designers, filmmakers, and red team members. Based on feedback during the private preview, OpenAI is launching Sora to the public today, significantly faster than they announced it in February. Users can access Sora through Sora.com. The homepage of Sora.com will highlight recent videos created by others in the community as well as some featured videos curated by the OpenAI team. Users can bookmark these videos and then access them from the “Saved Videos” section. Users can not only watch the videos generated by the AI , but also see the exact text prompts used to create them.
Sora also allows users to upload images, and Sora can create videos based on them. Users can also create folders in Sora and organize video projects.
Sora also provides a video editing experience, which allows users to edit prompts, view storyboards, cut videos, and more. The “Storyboard” feature allows users to easily combine multiple small videos generated by a single text prompt to create a longer video. Another cool feature of Sora is the ability to remix videos made by others to your own taste. To remix a video, users describe the changes they want to make to the video and select the level of remixing.
Sora can create videos up to 20 seconds long, with resolutions ranging from 480p to full 1080p, and widescreen, vertical, or square aspect ratios. Obviously, the higher the resolution you choose, the longer it will take to render the video.
You can watch a video review of the Sora tool below. Sora is now available worldwide, except in Europe, the UK, and China. It is free to all existing ChatGPT Pro and Plus users.
ChatGPT Plus users can generate up to 50 videos per month at 480p resolution, and can choose to generate fewer videos at 720p resolution. ChatGPT Pro users can generate unlimited videos at resolutions up to 1080p, and for longer durations.OpenAI is also developing different pricing models for different types of users, which will be available early next year.
“Shop Laptop Battery the Online Prices in UK – Large Selection of Batteries and Adapters from batteryforpc.co.uk With Best Prices
What do we know about the economics of AI?
What new tasks will generative AI bring to humans?” Acemoglu asks. “I don’t think we know that yet, and that’s the question. What applications will really change the way we do things?”
What are the measurable effects of AI?
Since 1947, U.S. GDP has grown by an average of about 3% per year, and productivity has grown by about 2% per year. Some forecasts claim that AI will double that growth, or at least create a higher-than-usual growth trajectory. In contrast, in a paper published in the August issue of Economic Policy, “The Simple Macroeconomics of Artificial Intelligence,” Acemoglu estimates that AI will increase GDP by “modestly” between 1.1% and 1.6% over the next decade, and productivity by about 0.05% per year.
Acemoglu based his assessment on recent estimates of the number of jobs impacted by AI, including a 2023 study by researchers at OpenAI, OpenResearch, and the University of Pennsylvania, which found that about 20% of U.S. jobs could be affected byAI capabilities. A 2024 study by researchers at MIT’s Center for the Future of Technology, the Productivity Institute, and IBM found that about 23% of computer vision tasks that could eventually be automated could be profitable over the next decade. Still more studies have put the average cost savings from AI at about 27%.
When it comes to productivity, “I don’t think we should underestimate a 0.5% increase over 10 years. It’s better than zero,” Acemoglu said. “But it’s disappointing compared to the promises made by people in the industry and in the tech press.”
To be sure, this is just an estimate, and many more AIapplications are likely: As Acemoglu wrote in his paper, his calculations did not include using AI to predict the shapes of proteins, for which other academics subsequently won a Nobel Prize in October.
Other observers think that “reallocation” of workers displaced by AI will generate additional growth and productivity beyond Acemoglu’s estimates, though he thinks it’s not significant. “Starting from the actual distribution we have, reallocation generally yields only small benefits,” Acemoglu says. “The immediate benefits are what matter.”
“I tried to write the paper in a very transparent way about what was included and what was not included,” he adds. “People can object and say that what I excluded was important or that the numbers for what I included were too low, and that’s totally fine.”
As Acemoglu and Johnson make clear, they favor technological innovations that increase worker productivity while keeping people employed, which should do a better job of sustaining economic growth.
But in Acemoglu’s view, the point of generative AI is to mimic humans as a whole. This produces what he has for years called “so-so technology,” applications that perform at best only slightly better than humans but save companies money. Call center automation isn’t always more efficient than humans; it just saves companies less money than workers do. AI applications that supplement workers seem generally to take a backseat to big tech companies.
“I don’t think complementary uses for AI will magically emerge unless industry invests a lot of effort and time,” Acemoglu said.
What does history teach us about AI?
The fact that technology is often designed to replace workers is the focus of another recent paper by Acemoglu and Johnson, “Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution—and in the Age of AI,” published in the August issue of the Annual Review of Economics.
The article discusses the current debate over AI, particularly the claim that even if technology replaces workers, the resulting growth will almost inevitably benefit society over time. Britain during the Industrial Revolution is sometimes cited as an example. But Acemoglu and Johnson argue that spreading the benefits of technology is not easy. In 19th-century Britain, they assert, it happened only after decades of social struggle and workers’ action.
What is the optimal pace of innovation?
If technology helps promote economic growth, then rapid innovation would seem ideal because it would bring growth faster. But in another paper, “Regulating Transformative Technologies,” in the September issue of the American Economic Review: Insights, Acemoglu and MIT doctoral student Todd Lensman offer another view. If some technologies have both benefits and disadvantages, then it is better to adopt them at a more measured pace while mitigating those problems.
“If the social harms are large and proportional to the productivity of the new technology, then higher growth rates will lead to slower adoption,” the authors write in the paper. Their model suggests that, ideally, adoption should start out slow and then gradually speed up over time.
“Market fundamentalism and technology fundamentalism might claim that you should always develop technology at the fastest pace,” Acemoglu says. “I don’t think there is such a rule in economics. More thoughtfulness, especially about avoiding harms and pitfalls, is warranted.”
The model is a response to trends over the past decade or so, in which many technologies were hyped as inevitable and welcomed for their disruptive nature. In contrast, Acemoglu and Lensman suggest that we can reasonably judge the trade-offs involved with a particular technology, and aim to stimulate more discussion about this.
How can we get to the right pace forAIadoption?
If the idea is to adopt technology more gradually, how should that be achieved?
First, Acemoglu said, “government regulation has a role to play.” However, it’s not clear what type of long-term guidelines for AI the U.S. or countries around the world might adopt.
Second, he added, if the “hype” cycle around AI abates, then the rush to use AI “will naturally slow down.” This scenario might be more likely than regulation if AI doesn’t soon turn a profit for companies.
“We’re moving so fast because of the hype from venture capitalists and other investors because they think we’re going to get closer to general AI” Acemoglu says. “I think that hype has caused us to invest improperly in the technology, and a lot of companies have been affected prematurely and don’t know what to do with it. We wrote that paper to say, look, if we’re more thoughtful and understanding about our use of this technology, its macroeconomics will benefit us.”
In that sense, Acemoglu emphasizes that hype is a tangible aspect of the economics of AI, because it drives investment in specific AI visions and thus influences the AI tools we’re likely to encounter.
“The faster the speed and the more excitement, the less likely you are to make a course correction,” Acemoglu says. “If you’re going 200 miles an hour, it’s very difficult to make a 180-degree turn.
This AI model can turn your next Google search into a conversation
Google Search may soon become more conversational on Android devices thanks to artificial intelligence, according to unreleased code discovered by 9to5 Google. The search app may soon add an AI mode that combines interactive discussions and other features to make Google’s base service more like the Gemini AIassistant.
AImode (referred to as AIM in the unreleased code) blends the human-like interactions of Gemini Live with Google Search and joins the visual understanding and analysis provided by Google Lens. In AIM, you can respond to the results of a Google Search. Not only can you view a list of results, but you can also ask follow-up questions, interrupt replies, and otherwise treat Search like Gemini Live.
If it rolls out, AI mode should appear as a tab in the bottom navigation bar of the Google app. In addition to using voice search, you can also use photos taken with your phone or other uploaded photos. You can then explain what you want to search for in the image. Another interesting point in the code is that its placeholder is a winking emoji. Gemini or search?AI mode in Google Search makes sense at first glance, but when viewed in context, it raises some questions. It looks very similar to Gemini, more like a variation of Gemini Live. This fits in with Google’s seeming enthusiasm for people to use Gemini for everything. AI Mode isn’t exactly the same as Gemini Live, as AI Mode will offer a multimodal experience combining text, voice, and images, but it’s close enough that it’s hard to know when you should use one over the otherAI Mode may just be a path to a more comprehensive service. Enhancing Google Search with Lens’ ability to ask questions of photos and videos, and enhancing the current voice interaction (transcribe verbal requests), could pave the way for Google Search to become an aspect of Gemini, and vice versa. It could also change the way we think about the world’s most popular search engine.
Instead of asking Google to say “show me the results,” we could just ask it to “give me a direct, thoughtful answer.”
“Shop Laptop Battery the Online Prices in UK – Large Selection of Batteries and Adapters from batteryforpc.co.uk With Best Prices.
As the most dazzling star in the field of artificial intelligence, Sam Ultraman certainly hopes that someone can find a way not to destroy humanity.
Sam Altman, known as the PT Barnum of artificial intelligence, has a message for those who care about the technology he’s spent his life promoting: Don’t worry, the tech geeks are working on it.
Let’s back up a bit.
Altman, the 39-year-old venture capitalist and CEO of OpenAI, spoke with journalist Andrew Ross Sorkin at the New York Times Dealbook Summit on Wednesday. As gentle but affable as ever, Altman almost made you forget he’s a billionaire doomsayer who has also repeatedly warned about the risks of artificial intelligence. At one point, Sorkin asked, “Do you believe that governments or anyone else can figure out how to avoid” the existential threat posed by superintelligent AI systems?
Cue the shy boy’s deflection.
“I’m sure the researchers will figure out how to avoid that,” Altman replied. “I think the smartest people in the world will work on a range of technical problems. You know, I’m a little overly optimistic by nature, but I think they’ll figure it out.”
He went on to suggest that perhaps the AI itself is so smart that it will figure out how to control itself, but didn’t elaborate.
“We have this magic—” Altman says, but then corrects himself, “Not magic. We have this incredible science called deep learning that can help us solve these very hard problems.”
Ah, yes. ExxonMobil will solve the climate crisis…
Look, it’s hard not to be drawn to Altman, who did not respond to requests for comment. He keeps his cool, knowing that even if his technology disrupts the global economy, he’ll be safe in his bunker off the coast of California. (“I have guns, gold, potassium iodide, antibiotics, batteries, water, IDF gas masks, and a big piece of land in Big Sur that I can fly to,” he said.) But for the rest of us, it would be nice to hear Altman or any of his fellow AI boosters explain what they mean when they say “we’ll figure it out.”
Even AI researchers admit they still don’t understand exactly how the technology works. A report commissioned by the U.S. State Department called AI systems essentially black boxes that pose an “extinction-level threat” to humanity.
Even if researchers can sort out the technical issues and solve what they call the “coordination problem” — making sure AI models don’t become monster robots that destroy the world — Altman acknowledged that there will still be problems that some people or some governments will have to solve.
At the Dealbook Summit, Altman again put the onus on regulating the technology on some imaginary international organization made up of rational adults who don’t want to kill each other. He told Sorkin, even if “even if we can make this [superintelligence model] technically safe, which I think we will find a way to do, we have to have faith in our governments…there has to be global coordination…I think we’ll rise to the challenge, but it seems challenging.”
There are a lot of assumptions in this, and it reflects a myopic understanding of how policymaking and global coordination actually work: which is to say, slowly, inefficiently, and often not at all.
This kind of naivety must be instilled in the 1% elite in Silicon Valley, who are keen to stuff AI into every device we use, despite the technology’s flaws. That’s not to say it’s not useful! AI is being used to do all sorts of cool things, like helping people with disabilities or the elderly, as my colleague Clare Duffy has reported. Some AI models are doing some exciting things with biochemistry (which is frankly beyond my comprehension, but I trust the honest scientists who won the Nobel Prize for this technology earlier this year).