The rise of AI-powered desktop assistants like Microsoft 365 Copilot, Apple Intelligence, and Google’s upcoming Project Jarvis has reshaped the way people work. While these tools promise massive productivity boosts, they also pose unique security challenges, leaving companies and users navigating concerns about data access, cyber-attacks, and trust in the systems they use every day.
Productivity Gains, But Security Lags Behind
Microsoft’s 365 Copilot has been out for over a year and is now broadly available. Apple Intelligence is entering general beta, catering to newer Macs, iPhones, and iPads. Meanwhile, Google’s Gemini, bolstered by Project Jarvis, is set to bring “agentic” AI features into the Chrome browser. It’s a tidal wave of AI advancements at everyone’s fingertips.
Companies are rushing to integrate these tools, but the path isn’t entirely smooth. According to a Gartner survey, 40% of companies delayed their Copilot rollouts by at least three months due to security concerns. Oversharing of data and insufficient access controls are persistent weak spots.
Jim Alkove, CEO of identity management platform Oleria, explains the problem bluntly. “The combination of model-based technologies with runtime security risks and access control issues multiplies the overall risk.”
And while firms are enthusiastic — 89% of respondents report improved productivity — security teams remain uneasy. Only 16% of companies have gone beyond pilot stages to fully implement desktop AI assistants, the Gartner data reveals.
The Black Box Problem: Who’s Watching AI?
For all the good desktop AI can do, its lack of transparency is a massive sticking point. AI systems operate like black boxes — companies can’t fully see what these systems do or understand their limits. It’s like having an overly curious assistant with unchecked access to your office files.
“With a human assistant, you can set boundaries, do background checks, and monitor their work,” says Alkove. “You don’t get those same options with AI. It can see everything it’s granted access to.”
This lack of granular control over data and tasks introduces risk. AI assistants don’t discriminate between what they need to see and what they can see. If you let an AI into your email, it can see every email — not just the ones you want it to work on.
The stakes only grow higher as these tools gain the ability to perform actions autonomously. Companies need controls to ensure AI assistants are only granted access when absolutely necessary, Alkove emphasizes.
Cybercriminals Eye Desktop AI for Exploits
The rise of desktop AI is also shifting how cybercriminals operate. Social engineering — the art of manipulating humans into compromising security — is evolving. Instead of tricking a person into doing something risky, attackers can now manipulate AI systems directly.
Earlier this year, prompt injection attacks highlighted just how vulnerable these AI tools can be. Security researcher Johann Rehberger demonstrated how email, Word documents, or websites could “inject” malicious commands into Microsoft 365 Copilot. The result? The AI assistant acted as a scammer, leaking personal information to potential attackers.
This style of attack shows that AI systems can become dangerous middlemen. Fraudsters no longer need to trick the user; they just need to trick the AI agent into misbehaving.
“AI gives attackers the ability to operate at a different level,” warns Ben Kilger, CEO of Zenity, a firm focused on securing AI systems. “Prompt injection attacks essentially bypass traditional human-focused security measures.”
Control, Visibility, and Limited Access Are Key
So how can businesses stay ahead of these risks?
The first step is better visibility. Companies need a clearer picture of how their AI assistants operate, what data they access, and what actions they’re authorized to perform.
- Granular access controls: Companies must limit AI access to specific tasks and specific information. For instance, an assistant might be allowed to draft emails but not access sensitive financial reports.
- Time-based permissions: Access to data or actions should expire when no longer needed.
- Real-time auditing: Businesses need tools to monitor AI behavior and flag suspicious activity quickly.
Jim Alkove stresses that giving AI assistants “big buckets of unrestricted access” is asking for trouble. AI tools must be equipped with guardrails that prevent them from overstepping their intended roles.
To its credit, Microsoft acknowledges these challenges. A spokesperson pointed to tools like Microsoft Purview, which allow IT administrators to set controls, manage permissions, and monitor how AI tools interact with sensitive data.
“AI makes existing security flaws more obvious,” Microsoft said in a statement, adding that businesses must proactively align AI use with company policies and risk tolerance.
Will AI Assistants Outpace Security Teams?
The demand for desktop AI systems will only grow. Gartner’s findings reveal that workers want these tools. A staggering 90% of surveyed employees said they would fight to keep their AI assistants if access was threatened.
However, this enthusiasm raises a critical question: Can security measures keep up? With Microsoft Copilot, Apple Intelligence, and Google Gemini already embedded into daily workflows, companies must adopt smarter security frameworks before risks spiral out of control.
Ben Kilger sums it up simply: “If companies don’t gain visibility and control over AI now, attackers will. It’s that straightforward.”
Until stronger protections emerge, businesses will need to tread carefully. Desktop AI might be the new frontier for productivity, but it’s also the next big playground for cybercriminals.