AI agents are poised to become engines of manipulation, and surrendering our decisions to algorithmic agents exposes us to their influence.

By 2025, it will be commonplace to chat with a personal AI agent that knows your schedule, your circle of friends, and your whereabouts, something like a free personal assistant. These anthropomorphic agents are designed to charm us into integrating them into every area of our lives, giving them deep insight into our thoughts and behaviors. In voice interactions, that intimacy rests on the illusion that we are dealing with a genuinely human-like agent. Behind the facade, of course, sits a very different system, one that serves industry priorities not necessarily aligned with our own. These new AI agents will have far greater capabilities, intelligently steering what you buy, where you go, and what you read. That is an extraordinary amount of power.

AI agents are designed to whisper to us in human-like tones, making us forget where their true allegiances lie. Their seamless convenience means people are more likely to engage fully with agents that feel as helpful as friends. That leaves humans vulnerable to manipulation by machines that exploit the need for social connection in a period of prolonged loneliness and isolation. Each screen becomes a private algorithmic theater, projecting a reality carefully crafted to be maximally compelling to its viewer.

Philosophers have been warning us about this moment for years. Before his death, the philosopher and cognitive scientist Daniel Dennett argued that AI systems which mimic people expose us to grave danger: by playing on our fears and anxieties, they tempt us and, from there, lead us into acquiescing to our own subjugation.
The emergence of personal AI agents represents a form of cognitive control that goes beyond blunt instruments such as cookie tracking and behavioral advertising toward a subtler form of power: the manipulation of perspective itself. Power is no longer exercised by visible hands that control the flow of information, but by invisible, algorithm-driven mechanisms that shape reality to suit each individual's desires. It is about shaping the contours of the reality in which we live.
This influence over the mind is a psychopolitical system: it controls the environment in which our thoughts are born, develop, and find expression. Its power lies in its intimacy. It penetrates to the core of our subjectivity, distorting our inner world without our knowledge while maintaining the illusion of choice and freedom. After all, we are the ones who ask the AI to summarize that article or create that image. We may hold the power of the prompt, but the real influence lies elsewhere: the more personalized a system is by design, the more effective it becomes at predetermining outcomes.
Consider the ideological implications of this psychopolitics. Traditional ideological control relied on overt mechanisms: censorship, propaganda, repression. Today's algorithmic governance, by contrast, operates under the radar and penetrates the psyche. It marks a shift from imposing authority from the outside to internalizing its logic. The open field of a prompt screen is an echo chamber for a single occupant.
This brings us to the most perverse aspect. AI agents can provide a sense of comfort that makes questioning them seem silly. Who dares to criticize a system that caters to your every thought and need? How can you object to an infinite remix of content? Yet it is precisely this convenience that should trouble us most. AI systems may seem to do everything we ask, but the deck is stacked: from the data used to train them, to the decisions made in designing them, to the commercial and advertising imperatives that shape their outputs. We will play the imitation game, and over time it will become second nature.

Google launches new Gemini 2.0 model with experimental AI agents

Google is hyping up its new Gemini 2.0 models. The first model, Gemini 2.0 Flash, is already live and comes with new AI agent experiences like Project Astra and Project Mariner. Google is ending 2024 with a bang: on Wednesday, the Mountain View giant announced a slew of AI news, including the release of Gemini 2.0, a new language model with advanced multimodal capabilities. The new model kicks off what Google calls the "agentic era," in which virtual AI agents will be able to perform tasks on your behalf.
Initially, Google released just one model in the Gemini 2.0 family: Gemini 2.0 Flash Experimental, a super-fast, lightweight model that supports multimodal input and output. It can natively generate images mixed with text and multilingual audio, and it integrates with Google Search, code execution, and other tools. These features are currently in preview for developers and beta testers. Despite being smaller, 2.0 Flash outperforms Gemini 1.5 Pro in multiple areas, including factuality, reasoning, coding, and math, while also being twice as fast. Regular users can try the chat-optimized version of Gemini 2.0 Flash on the web starting today, and it will soon appear in the Gemini mobile app.
Google also showed off several impressive experiences built with Gemini 2.0. The first is an updated version of Project Astra, the experimental virtual AI agent that Google first demoed in May 2024. With Gemini 2.0, it can now hold conversations in multiple languages; use tools like Google Search, Lens, and Maps; remember the content of past conversations; and respond with roughly the latency of human conversation. Project Astra is designed to run on smartphones and glasses, but it is currently limited to a small group of trusted testers. People interested in trying the prototype on an Android phone can join the waitlist here. There's also a really cool multimodal real-time API demo, a bit like Project Astra, that lets you interact with a chatbot in real time via video, voice, and screen sharing.
Next up is Project Mariner, an experimental Chrome browser extension that browses the internet and performs tasks for you. Available now to select testers in the US, the extension leverages Gemini 2.0’s multimodal capabilities to “understand and reason about information on the browser screen, including pixels and web elements like text, code, images, and forms.” Google admits the technology is still in its infancy and isn’t always reliable. But even in its current prototype form, it’s impressive, as you can see for yourself in the YouTube demo.
Google also announced Jules, an AI-powered code agent powered by Gemini 2.0. It integrates directly into your GitHub workflow, and the company says it can handle bug fixes and repetitive, time-consuming tasks “while you focus on what you actually want to build.”
For now, many of the new announcements are limited to early testers and app developers. Google says it plans to integrate Gemini 2.0 into its portfolio of products, including Search, Workspace, Maps, and more, early next year. By then, we'll have a better idea of how these new multimodal features and improvements translate to real-world use cases. There's no word yet on Gemini 2.0 Ultra and Pro models.


Clio: A privacy-preserving system that provides insights into real-world AI usage

What are people using AI models for? Despite the rapid growth in popularity of large language models, we still know very little about what they are used for.
This isn’t just out of curiosity, or even sociological research. Understanding how people actually use language models is important for security reasons: providers put a lot of effort into pre-deployment testing and use trust and safety systems to prevent misuse. But the scale and diversity of language models makes it hard to understand what they are used for (not to mention any kind of comprehensive security monitoring).
There’s another key factor that prevents us from having a clear understanding of how AI models are used: privacy. At Anthropic, our Claude model is not trained on user conversations by default, and we take protecting our users’ data very seriously. So how do we study and observe how our systems are used while strictly protecting our users’ privacy?
Claude Insights and Observations (“Clio” for short) is our attempt to answer this question. Clio is an automated analytics tool that performs privacy-preserving analysis of real-world language model usage. It gives us insight into everyday usage at claude.ai in a similar way to tools like Google Trends, and it’s already helping us improve our security measures. In this post (with the full research paper attached), we describe Clio and some of its initial results.
How Clio Works: Privacy-Preserving Analytics at Scale
Traditional top-down security approaches (such as assessments and red teams) rely on knowing what to look for in advance. Clio takes a different approach, enabling bottom-up pattern discovery by distilling conversations into abstract, understandable clusters of topics. It does this while protecting user privacy: data is automatically anonymized and aggregated, and only higher-level clusters are visible to human analysts.
All of our privacy protections are extensively tested, as described in our research paper.
How People Use Claude: Insights from Clio

Using Clio, we were able to gain insight into how people use claude.ai in practice. While public datasets such as WildChat and LMSYS-Chat-1M provide useful information about how people use language models, they only capture specific contexts and use cases. Clio gives us a glimpse into the full real-world usage of claude.ai (which may differ from other AI systems due to differences in user base and model type).
Summary of Clio’s analysis steps, illustrated using fictional examples of conversations.
Here’s a brief overview of Clio’s multi-stage process:
Extracting Facets: For each conversation, Clio extracts multiple “facets”: specific attributes or metadata, such as the topic of the conversation, the number of back-and-forth turns, or the language used.
Semantic Clustering: Similar conversations are automatically grouped based on topics or general themes.
Cluster Descriptions: Each cluster receives a descriptive title and summary that captures common themes from the raw data while excluding private information.
Building Hierarchies: Clusters are organized into multi-level hierarchies for easier exploration. They can then be presented in an interactive interface that human analysts can use to explore patterns along different dimensions (topics, language, etc.).
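The steps above can be sketched as a toy pipeline. To be clear, this is an illustration only, not Anthropic's implementation: facet extraction here is a crude keyword count (Clio uses language models for this), and the minimum-cluster-size threshold stands in for Clio's aggregation and anonymization guarantees. All names and data are invented.

```python
from collections import Counter

STOPWORDS = {"how", "do", "i", "the", "a", "to", "this",
             "into", "it", "is", "here", "sure", "let", "my"}

def extract_facets(conversation):
    """Stage 1 (toy version): reduce a conversation to abstract facets,
    here just a topic keyword and a turn count; the raw text is discarded."""
    words = [w.lower().strip("?.,!") for turn in conversation
             for w in turn.split()]
    keywords = [w for w in words if w and w not in STOPWORDS]
    topic = Counter(keywords).most_common(1)[0][0]
    return {"topic": topic, "turns": len(conversation)}

def cluster_by_topic(facet_records, min_cluster_size=2):
    """Stages 2-4 collapsed: group facet records by topic, then drop any
    cluster too small to aggregate safely (a stand-in for Clio's
    k-anonymity-style thresholds). Only cluster-level data survives."""
    clusters = {}
    for rec in facet_records:
        clusters.setdefault(rec["topic"], []).append(rec)
    return {t: recs for t, recs in clusters.items()
            if len(recs) >= min_cluster_size}
```

Only the surviving clusters (their labels and sizes) would ever be shown to an analyst; conversations whose topic is too rare are dropped rather than exposed.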

OpenAI’s Sora AI video generation tool now available to ChatGPT Plus and Pro users

In February of this year, OpenAI first announced Sora, an AI model that can create realistic and imaginative videos from text prompts. Since then, Sora has been in private preview with select visual artists, designers, filmmakers, and red teamers. Based on feedback from that preview, OpenAI is launching Sora to the public today, and the version shipping now is significantly faster than the model announced in February. Users can access Sora through Sora.com. The homepage highlights recent videos created by others in the community as well as featured videos curated by the OpenAI team. Users can bookmark these videos and access them later from the “Saved Videos” section. They can not only watch the AI-generated videos but also see the exact text prompts used to create them.
Sora also allows users to upload images, and Sora can create videos based on them. Users can also create folders in Sora and organize video projects.
Sora also provides a video editing experience that lets users edit prompts, view storyboards, trim videos, and more. The “Storyboard” feature lets users combine multiple short generated clips, each driven by its own text prompt, into a longer video. Another cool feature is the ability to remix videos made by others to your own taste: users describe the changes they want and select the level of remixing.
Sora can create videos up to 20 seconds long, with resolutions ranging from 480p to full 1080p, and widescreen, vertical, or square aspect ratios. Obviously, the higher the resolution you choose, the longer it will take to render the video.
You can watch a video review of the Sora tool below. Sora is now available worldwide, except in Europe, the UK, and China. It is included at no extra cost for all existing ChatGPT Pro and Plus users.
ChatGPT Plus users can generate up to 50 videos per month at 480p resolution, or choose to generate fewer videos at 720p. ChatGPT Pro users can generate unlimited videos at resolutions up to 1080p and at longer durations. OpenAI is also developing different pricing models for different types of users, which will be available early next year.


What do we know about the economics of AI?

“What new tasks will generative AI bring to humans?” Acemoglu asks. “I don’t think we know that yet, and that’s the question. What applications will really change the way we do things?”

What are the measurable effects of AI?
Since 1947, U.S. GDP has grown by an average of about 3% per year, and productivity by about 2% per year. Some forecasts claim that AI will double that growth, or at least create a higher-than-usual growth trajectory. In contrast, in a paper published in the August issue of Economic Policy, “The Simple Macroeconomics of Artificial Intelligence,” Acemoglu estimates a “modest” AI-driven GDP increase of between 1.1% and 1.6% over the next decade, with productivity rising by about 0.05% per year.
Acemoglu based his assessment on recent estimates of the number of jobs impacted by AI, including a 2023 study by researchers at OpenAI, OpenResearch, and the University of Pennsylvania, which found that about 20% of U.S. jobs could be affected by AI capabilities. A 2024 study by researchers at MIT’s Center for the Future of Technology, the Productivity Institute, and IBM found that only about 23% of the computer vision tasks that could eventually be automated would be profitable to automate over the next decade. Other studies have put the average cost savings from AI at about 27%.
When it comes to productivity, “I don’t think we should underestimate a 0.5% increase over 10 years. It’s better than zero,” Acemoglu said. “But it’s disappointing compared to the promises made by people in the industry and in the tech press.”
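For what it's worth, the per-year and per-decade figures quoted above are consistent under simple compounding (my arithmetic, not a calculation from the paper):

```python
# Acemoglu's estimate: AI adds roughly 0.05% to productivity growth per year.
annual_gain = 0.0005

# Compounded over ten years, that comes to the ~0.5% cumulative figure
# Acemoglu refers to as "a 0.5% increase over 10 years".
cumulative = (1 + annual_gain) ** 10 - 1
print(f"{cumulative:.2%}")  # → 0.50%
```

Set against the historical baseline of roughly 2% productivity growth per year, 0.05% is a small increment, which is the gap between Acemoglu's estimate and the industry's promises.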
To be sure, this is just an estimate, and many more AI applications are likely: as Acemoglu wrote in his paper, his calculations did not include using AI to predict the shapes of proteins, for which other academics subsequently won a Nobel Prize in October.
Other observers think that “reallocation” of workers displaced by AI will generate additional growth and productivity beyond Acemoglu’s estimates, though he thinks it’s not significant. “Starting from the actual distribution we have, reallocation generally yields only small benefits,” Acemoglu says. “The immediate benefits are what matter.”
“I tried to write the paper in a very transparent way about what was included and what was not included,” he adds. “People can object and say that what I excluded was important or that the numbers for what I included were too low, and that’s totally fine.”

As Acemoglu and Johnson make clear, they favor technological innovations that increase worker productivity while keeping people employed, which should do a better job of sustaining economic growth.
But in Acemoglu’s view, generative AI as currently developed aims to mimic humans wholesale. This produces what he has for years called “so-so technology”: applications that perform at best only marginally better than humans but save companies money. Call-center automation isn’t necessarily more efficient than humans; it simply costs firms less than human workers do. AI applications that complement workers, meanwhile, generally seem to take a backseat at big tech companies.
“I don’t think complementary uses for AI will magically emerge unless industry invests a lot of effort and time,” Acemoglu said.
What does history teach us about AI?
The fact that technology is often designed to replace workers is the focus of another recent paper by Acemoglu and Johnson, “Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution—and in the Age of AI,” published in the August issue of the Annual Review of Economics.
The article discusses the current debate over AI, particularly the claim that even if technology replaces workers, the resulting growth will almost inevitably benefit society over time. Britain during the Industrial Revolution is sometimes cited as an example. But Acemoglu and Johnson argue that spreading the benefits of technology is not easy. In 19th-century Britain, they assert, it happened only after decades of social struggle and workers’ action.

What is the optimal pace of innovation?
If technology helps promote economic growth, then rapid innovation would seem ideal because it would bring growth faster. But in another paper, “Regulating Transformative Technologies,” in the September issue of the American Economic Review: Insights, Acemoglu and MIT doctoral student Todd Lensman offer another view. If some technologies have both benefits and disadvantages, then it is better to adopt them at a more measured pace while mitigating those problems.
“If the social harms are large and proportional to the productivity of the new technology, then higher growth rates will lead to slower adoption,” the authors write in the paper. Their model suggests that, ideally, adoption should start out slow and then gradually speed up over time.
“Market fundamentalism and technology fundamentalism might claim that you should always develop technology at the fastest pace,” Acemoglu says. “I don’t think there is such a rule in economics. More thoughtfulness, especially about avoiding harms and pitfalls, is warranted.”
The model is a response to trends over the past decade or so, in which many technologies were hyped as inevitable and welcomed for their disruptive nature. In contrast, Acemoglu and Lensman suggest that we can reasonably judge the trade-offs involved with a particular technology, and aim to stimulate more discussion about this.
How can we get to the right pace for AI adoption?
If the idea is to adopt technology more gradually, how should that be achieved?
First, Acemoglu said, “government regulation has a role to play.” However, it’s not clear what type of long-term guidelines for AI the U.S. or countries around the world might adopt.
Second, he added, if the “hype” cycle around AI abates, then the rush to use AI “will naturally slow down.” This scenario might be more likely than regulation if AI doesn’t soon turn a profit for companies.
“We’re moving so fast because of the hype from venture capitalists and other investors, because they think we’re going to get closer to general AI,” Acemoglu says. “I think that hype has caused us to invest improperly in the technology, and a lot of companies have adopted it prematurely without knowing what to do with it. We wrote that paper to say, look, if we’re more thoughtful and understanding about our use of this technology, its macroeconomics will benefit us.”
In that sense, Acemoglu emphasizes that hype is a tangible aspect of the economics of AI, because it drives investment in specific AI visions and thus influences the AI ​​tools we’re likely to encounter.
“The faster the speed and the more excitement, the less likely you are to make a course correction,” Acemoglu says. “If you’re going 200 miles an hour, it’s very difficult to make a 180-degree turn.”


This AI model can turn your next Google search into a conversation

Google Search may soon become more conversational on Android devices thanks to artificial intelligence, according to unreleased code discovered by 9to5Google. The search app may soon add an AI mode that combines interactive discussions and other features to make Google’s core service more like the Gemini AI assistant.
AI mode (referred to as AIM in the unreleased code) blends the human-like interactions of Gemini Live with Google Search and adds the visual understanding and analysis provided by Google Lens. In AIM, you can respond to the results of a Google Search: instead of just viewing a list of results, you can ask follow-up questions, interrupt replies, and otherwise treat Search like Gemini Live.
If it rolls out, AI mode should appear as a tab in the bottom navigation bar of the Google app. In addition to voice search, you can also search with photos taken on your phone or other uploaded images, then explain what you want to find in the image. Another interesting detail in the code: the feature’s placeholder icon is a winking emoji.

Gemini or Search?

AI mode in Google Search makes sense at first glance, but in context it raises some questions. It looks very similar to Gemini, more like a variation of Gemini Live. That fits Google’s apparent enthusiasm for having people use Gemini for everything. AI Mode isn’t exactly the same as Gemini Live, since it will offer a multimodal experience combining text, voice, and images, but it’s close enough that it’s hard to know when to use one over the other. AI Mode may simply be a path to a more comprehensive service. Enhancing Google Search with Lens’s ability to answer questions about photos and videos, and improving the current voice interaction (transcribing spoken requests), could pave the way for Google Search to become an aspect of Gemini, and vice versa. It could also change the way we think about the world’s most popular search engine.
Instead of asking Google to say “show me the results,” we could just ask it to “give me a direct, thoughtful answer.”


As the most dazzling star in the field of artificial intelligence, Sam Altman certainly hopes someone can find a way to keep the technology from destroying humanity.

Sam Altman, known as the PT Barnum of artificial intelligence, has a message for those who care about the technology he’s spent his life promoting: Don’t worry, the tech geeks are working on it.
Let’s back up a bit.
Altman, the 39-year-old venture capitalist and CEO of OpenAI, spoke with journalist Andrew Ross Sorkin at the New York Times DealBook Summit on Wednesday. Affable as ever, Altman almost made you forget he’s a billionaire doomsday prepper who has repeatedly warned about the risks of artificial intelligence. At one point, Sorkin asked, “Do you believe that governments or anyone else can figure out how to avoid” the existential threat posed by superintelligent AI systems?
Cue the sheepish deflection.
“I’m sure the researchers will figure out how to avoid that,” Altman replied. “I think the smartest people in the world will work on a range of technical problems. You know, I’m a little overly optimistic by nature, but I think they’ll figure it out.”
He went on to suggest that perhaps the AI itself will be so smart that it will figure out how to control itself, but he didn’t elaborate.
“We have this magic—” Altman says, but then corrects himself, “Not magic. We have this incredible science called deep learning that can help us solve these very hard problems.”
Ah, yes. ExxonMobil will solve the climate crisis…
Look, it’s hard not to be drawn to Altman, who did not respond to requests for comment. He keeps his cool, knowing that even if his technology disrupts the global economy, he’ll be safe in his bunker off the coast of California. (“I have guns, gold, potassium iodide, antibiotics, batteries, water, IDF gas masks, and a big piece of land in Big Sur that I can fly to,” he said.) But for the rest of us, it would be nice to hear Altman or any of his fellow AI boosters explain what they mean when they say “we’ll figure it out.”
Even AI researchers admit they still don’t understand exactly how the technology works. A report commissioned by the U.S. State Department called AI systems essentially black boxes that pose an “extinction-level threat” to humanity.
Even if researchers can sort out the technical issues and solve what’s known as the “alignment problem” — making sure AI models don’t become monster robots that destroy the world — Altman acknowledged that there will still be problems left for people and governments to solve.
At the DealBook Summit, Altman again put the onus of regulating the technology on some imaginary international organization made up of rational adults who don’t want to kill each other. He told Sorkin that even if “we can make this [superintelligence model] technically safe, which I think we will find a way to do, we have to have faith in our governments…there has to be global coordination…I think we’ll rise to the challenge, but it seems challenging.”
There are a lot of assumptions in this, and it reflects a myopic understanding of how policymaking and global coordination actually work: which is to say, slowly, inefficiently, and often not at all.
This kind of naivety seems instilled in Silicon Valley’s one-percent elite, who are keen to stuff AI into every device we use despite the technology’s flaws. That’s not to say it isn’t useful! AI is being used for all sorts of good things, like helping people with disabilities or the elderly, as my colleague Clare Duffy has reported. Some AI models are doing exciting things in biochemistry (frankly beyond my comprehension, but I trust the honest scientists who won a Nobel Prize for that work earlier this year).


Leonardo AI: A versatile image generator for creative enthusiasts

Leonardo can create detailed AI images, but it lacks the wow factor. Leonardo AI is no Leonardo da Vinci or DiCaprio, but it’s still an image generator that earns its place in the artistic Leonardo category. Originally designed to help create gaming assets, it’s now a full-fledged AI content creation service that offers AI video creation and editing in addition to image tools.
Overall, Leonardo is a good choice compared with many of its AI competitors. It’s on par with Adobe Firefly and much better than Google’s ImageFX or Canva. Leonardo follows prompts better than Midjourney, but its lack of extensive editing tools makes it hard to choose between the two. OpenAI’s Dall-E 3 is still CNET’s top-ranked choice, but you’ll need to pay $20 a month for ChatGPT Plus, while Leonardo has a comprehensive free plan. I’ve used Leonardo to generate more than 90 images, ranging from stock photos to sci-fi and fantasy renderings. Here’s the full breakdown:
How CNET tests AI image generators
CNET takes a hands-on approach to reviewing AI image generators. Our goal is to determine how each one compares to the competition and which applications it’s best suited for. To do this, we provide AI prompts based on real-world use cases, such as rendering in a specific style, combining elements into a single image, and handling long descriptions. Image generators are rated on a 10-point scale that considers factors like how well the image matches the prompt, the creativity of the result, and responsiveness. Learn more about how we test AI.
Leonardo’s images are attractive enough that you may be tempted to try its other AI creation tools, such as Canvas Editor and Live Generate. However, we recommend against them: these tools are less user-friendly and produce lower-quality content that’s blurry, off-center, or has strange quirks. Better image editing software is already available, and Meta AI’s “Imagine” feature is a more accurate live image generation tool.
Leonardo’s paid tier includes Alchemy Refiner, which promises “improvements and enhancements” to the things AI image generation struggles with, especially faces and hands. As a free user, I couldn’t test it myself, but even without it I was impressed by the clarity and accuracy of human hands and teeth compared with other AI generators.

How long does it take to receive an image?
Images are generated in 10-20 seconds, making Leonardo one of the fastest AI image generation tools. Generation time varies by model; the new Phoenix model, for example, takes longer. Even so, you won’t find yourself scrolling your phone or checking email while an image loads, as you might with other generators.
Leonardo is good, but it didn’t wow me.
For AI creators, Leonardo checks a lot of important boxes. It’s fast, has a free plan, and the images it creates look completely normal. However, there are a few reasons not to recommend it to everyone. The paid post-editing tools are cumbersome and will quickly drain your tokens. Important parts of the privacy policy are buried in the terms of service and leave a lot to be desired. And in the end, the results didn’t surprise me: they felt average. Of course, there’s nothing wrong with that; it’s a perfectly serviceable alternative to the current top competitors. For non-professional creators and AI creative enthusiasts, Leonardo is great for making usable (if not perfect) AI images quickly and easily.


Microsoft releases Windows 11 artificial intelligence roadmap: smart search, upgrades, and more

What’s next for Windows? Microsoft may have just released the Windows 11 2024 Update (24H2), but the company has already revealed its plans for the next generation of Windows apps — and there are some very interesting AI features coming before the holidays.
Microsoft has revealed that it is working on several AI features for Windows and Windows apps: improved Windows Search using natural-language descriptions, super resolution in Photos, generative fill and erase in Paint, and the debut of Recall. All of these features (except Recall) will appear as part of the Windows Insider program in October, with an expected launch in November.

All of these features will rely on the NPU inside Copilot+ PCs, which now include machines with Qualcomm Snapdragon X Elite processors as well as AMD’s Ryzen AI 300 and Intel’s Lunar Lake chips. Microsoft is also planning more Copilot features that will run in the cloud, including Copilot Voice and Copilot Vision, similar to innovations in rival AI services. The timing of the rollout will vary by platform, though, as Snapdragon X PCs have been shipping for a few months; Microsoft will bring support to AMD and Intel Copilot+ PCs in its own updates.

Microsoft has also shared more details about improvements to Recall, and the company now says the feature can be skipped when setting up a new PC or removed later. Windows Recall periodically takes screenshots, extracts data from them, and stores it in case you need it later. The feature has come under fire for violating user privacy and being insecure. Microsoft now says it stores the screenshots and extracted data in an encrypted area; security researchers had previously found the data stored unencrypted.

New AI features coming to Copilot+ PCs

Microsoft says it plans to improve search on PCs by supporting more natural language when searching for files. You may have seen this in apps like Microsoft Photos or Google Photos; for example, if you search for “beach,” the apps use artificial intelligence to identify beach scenes. Microsoft will bring the same technology to File Explorer, though it’s not yet clear which folders or file types it will apply to.
The improved Windows Search also seems more context-aware than before: Microsoft’s demo lists “BBQ party” as an example search term. “You no longer have to remember file names, settings locations, or even worry about spelling — just type what’s in your head to find it on your Copilot+ PC,” Microsoft says. However, it seems unlikely that you’ll be able to find a specific .ini file in your user folder as easily as you can find your aunt’s wedding photos.
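Microsoft hasn’t disclosed how its search works under the hood, but the general idea — matching a natural-language description against file content rather than file names — can be illustrated with a toy sketch. The bag-of-words vectors and the file index below are entirely hypothetical stand-ins; real systems like Windows Search use learned embeddings from an NPU-run model instead:

```python
# Minimal sketch of description-based file search (hypothetical).
# Files carry descriptive text tags; a query is matched against them
# by cosine similarity of bag-of-words vectors.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Turn text into a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, files: dict) -> list:
    """Rank files by how well their tags match a natural-language query."""
    qv = vectorize(query)
    return sorted(files, key=lambda f: cosine(qv, vectorize(files[f])),
                  reverse=True)

# Hypothetical file index: path -> descriptive tags
index = {
    "IMG_0001.jpg": "beach sunset family vacation",
    "IMG_0002.jpg": "bbq party backyard friends grill",
    "notes.txt": "meeting agenda project plan",
}
print(search("bbq party", index)[0])  # → IMG_0002.jpg
```

The toy version only matches exact words; the point of the AI-backed feature is that an embedding model can also match a query like “beach” against the pixels of a photo, with no tags at all.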
The improved search feature will start in File Explorer and then expand to Windows Search and Settings in the “coming months.” “Super Resolution” in Photos is probably my favorite potential application for a couple of reasons: (a) I have a lot of old photos taken with old, low-quality digital cameras; and (b) journalists often receive low-resolution photos that need to be enlarged before they can be published. The new “Super Resolution” feature will hopefully solve both problems.
Microsoft previously announced the Auto Super Resolution feature for gaming, but Photo Super Resolution seems more practical. Many websites and apps promise AI upscaling, and it’s unclear whether this new feature will surpass them — but Photo Super Resolution will be free. Microsoft says that using a Copilot+ PC’s AI TOPS, you’ll be able to increase resolution by eight times. Super Resolution will be part of the Photos app, which can already automatically adjust lighting and tones, remove backgrounds, add generative elements, and more.

Train your brain for creative work with Gen AI

There are countless articles on how to use generative artificial intelligence (gen AI) to improve work, automate repetitive tasks, summarize meetings and client interactions, and synthesize information. There are also vast virtual libraries filled with tips and guides that can help us achieve more effective or even superior output with gen AI tools. Many common digital tools already come with integrated AI co-pilots that automatically enhance and complete writing, coding, designing, creating, and whatever else you’re working on. But generative AI does more than just enhance or accelerate what we already do. With the right mindset shift, we can train our brains to creatively rethink how to use these tools to unlock entirely new value and achieve exponential results in an AI-first world.

Generative AI relies on natural language processing (NLP) to understand requests and generate relevant results. It is basically pattern recognition and pattern assembly: given instructions, it produces output that accomplishes the task at hand. This approach fits with our brain’s default mode — pattern recognition and the pursuit of efficiency — which favors short, direct prompts for immediate, predictable results.
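The phrase “pattern recognition and pattern assembly” can be made concrete with a deliberately tiny toy: a bigram model that learns which word tends to follow which, then assembles output by greedily following the most common continuation. This is not how modern gen AI models work internally — they use learned representations at vastly larger scale — but it illustrates why default prompting yields default, predictable output: the model reproduces the most common pattern it has seen.

```python
# Toy "pattern recognition and assembly": a bigram next-word model
# (illustrative only; real generative models are far more complex).
from collections import defaultdict, Counter

def train(text: str) -> dict:
    """Recognize patterns: count which word follows each word."""
    model = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def complete(model, start: str, length: int = 4) -> str:
    """Assemble output by greedily following the most common pattern."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat sat"
model = train(corpus)
print(complete(model, "the"))  # → the cat sat on the
```

The greedy completion always picks the single most frequent continuation — a mechanical analogue of the “immediate, predictable results” the article describes, and a hint at why more exploratory prompting can surface less obvious patterns.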
If most people use AI in this way, no matter how powerful these tools are, we will inadvertently create a new status quo in the way we work and create. Training our brains to challenge our thinking, our assumptions about AI’s capabilities, and our expectations for predictable results starts with a mindset shift to recognize that AI is not just a tool, but a partner in innovation and exploration of unknown territory.
Rethinking Collaboration with AI for More Creative and Innovative Outcomes

Changing your mindset to collaborate with AI in a more creative and open way means being willing to explore unknown territory and having the ability to learn, unlearn, and experiment. Plus, it’s fun.
I often say that I maximize the potential of AI and achieve the best results when I put aside my cognitive biases. With a smile on my face, I ask myself, “WWAID?” or “What would AI do?” I acknowledge that the way I unconsciously use AI tools may default to predictable inputs and outputs. But by asking WWAID, I open myself up to new interactions and experiences that may yield unexpected results.
Tapping into AI’s creative and transformative potential, and training your brain for an AI-first world, requires us to shift our prompting approach to thinking of AI as a partner, not just a tool.
12 Exercises to Train Your Brain to Work More Creatively with AI
Here are a dozen ways to train our brains to achieve broader, more innovative outcomes with AI:
1. Practice “exploratory prompts” every day
Start each day with an open-ended prompt that pushes you to think boldly. Try asking yourself, “What trends or opportunities are there in my industry that I don’t see coming?” or “How can I completely redefine my approach to key challenges?”
2. Create prompts around “what if” and “how can we” questions
Instead of asking direct questions, ask about open-ended possibilities. For example, instead of asking “How can I be more efficient?”, try asking “If I could be more efficient in an unconventional way, what would that look like?”
3. Embrace ambiguity and curiosity in prompts
By training ourselves to prompt without a clear endpoint, AI can generate answers that may surprise us. Prompts like “What might I have overlooked in approaching X?” can open doors to insights we never considered.
4. Use prompts to explore rather than solve problems
Many prompts focus on solutions. Shifting to exploration can yield deeper insights. For example, “Let’s explore what the future of leadership would look like if AI had a seat at the board or C-suite — how would our jobs, roles, and corporate culture change?”
5. Chain prompts to develop ideas iteratively
Don’t stop at the first answer; ask follow-up questions that make the answer more complex and visionary. If the AI comes up with an idea, build on it with questions like “What will it look like in 5 years?” or “How could this approach change the way the company operates in the future?”
6. Think in metaphors or analogies
Training our brains to use metaphors or analogies in prompts can open up creative avenues. For example, instead of asking for a product
