Analysts say the measures Apple has implemented to keep customer data from being stolen or misused by AI systems will have a significant impact on device security, especially as AI becomes more prevalent on consumer devices.
Apple emphasized customer privacy in the new AI initiatives it announced at its Worldwide Developers Conference a few weeks ago. It has built a vast infrastructure of proprietary hardware and software to support its AI suite.
Apple has complete control over its AI infrastructure, making it difficult for adversaries to break into the systems. Analysts say the company’s black-box approach also provides a blueprint for rival chipmakers and cloud providers to run AI inference securely on devices and servers.
“Apple can leverage the capabilities of a large language model without having any visibility into the data being processed, which is excellent from a customer privacy and corporate accountability standpoint,” says James Sanders, an analyst at TechInsights.
Apple's AI Approach
The AI back end includes new foundation models, servers, and Apple Silicon server chips. AI queries issued by Apple devices are encrypted on the device, decrypted inside Apple’s Private Cloud Compute, and verified as coming from the authorized user and device; the answers are sent back encrypted and can be read only by the authorized user. The data is never visible to Apple or any other company and is deleted once the query is complete.
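A minimal sketch of that round trip, in Python: the query is encrypted before it leaves the device, the server decrypts it only after verifying the request comes from an authorized device, and the plaintext is discarded once the answer is produced. The session-key setup and the run_model and device_attested names are illustrative assumptions, not Apple’s actual API.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def run_model(prompt: bytes) -> bytes:
    # Stand-in for the server-side model call.
    return b"answer to: " + prompt

def client_encrypt(query: bytes, key: bytes) -> tuple[bytes, bytes]:
    # Encrypt the query on-device so it travels as an opaque blob.
    nonce = os.urandom(12)
    return nonce, AESGCM(key).encrypt(nonce, query, None)

def server_respond(nonce: bytes, blob: bytes, key: bytes,
                   device_attested: bool) -> tuple[bytes, bytes]:
    # Decrypt only for a verified device, answer, re-encrypt, retain nothing.
    if not device_attested:
        raise PermissionError("request is not from an authorized device")
    prompt = AESGCM(key).decrypt(nonce, blob, None)
    reply_nonce = os.urandom(12)
    reply = AESGCM(key).encrypt(reply_nonce, run_model(prompt), None)
    del prompt  # plaintext is not retained once the query completes
    return reply_nonce, reply

key = AESGCM.generate_key(bit_length=256)
nonce, blob = client_encrypt(b"summarize my notes", key)
reply_nonce, reply = server_respond(nonce, blob, key, device_attested=True)
print(AESGCM(key).decrypt(reply_nonce, reply, None))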
Apple has built security features directly into its devices and server chips so that AI queries are protected from end to end. Data is kept safe on the device and in transit through features such as secure boot, file encryption, user authentication, and secure connections over the Internet via TLS (Transport Layer Security).
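The in-transit half of that protection is standard TLS. As a short illustration, the following Python snippet opens a connection with certificate and hostname verification enforced; the host is a placeholder.

import socket
import ssl

context = ssl.create_default_context()            # verifies certificates and hostnames
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version(), tls.cipher())        # e.g., the negotiated TLSv1.3 suite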
Apple is its own customer running on its own infrastructure, which is a big advantage, Sanders says, while rival cloud providers and chipmakers work with partners whose security, hardware, and software technologies all differ.
“The ways to do this are different for each cloud,” Sanders says. “There’s no one way to do it, and not having one way to do it adds complexity. And I think the difficulty of doing this at scale becomes even more difficult when you’re dealing with millions of client devices.”
Microsoft's Pluton Approach
But Apple’s main competitor, Microsoft, is already well on its way to providing full privacy for AI with security features built into its chips and its Azure cloud. Last month, the company announced a class of AI PCs called Copilot+ that require a Microsoft security processor called Pluton. The first of these AI PCs shipped this month with Qualcomm chips and Pluton turned on by default; Intel and AMD will also ship computers with Pluton.
Pluton ensures that data in secure enclaves is protected and available only to authorized users, and the chip is now ready to protect AI customer data, says David Weston, vice president of enterprise and OS security at Microsoft.
“We have a vision for AI mobility between Azure and the customer, and Pluton will be at the heart of that,” he says.
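The guarantee Weston describes follows a common seal/unseal pattern: a key sealed to a security chip is released only when the caller’s identity and boot state match the policy recorded at sealing time. The toy model below illustrates the idea in Python; it is a simplification, not Microsoft’s Pluton API.

import hashlib, hmac, os

class SecurityChipModel:
    # Toy stand-in for a security processor's sealed-key storage.
    def __init__(self):
        self._store = {}  # handle -> (policy digest, key)

    def seal(self, key: bytes, policy: bytes) -> bytes:
        handle = os.urandom(8)
        self._store[handle] = (hashlib.sha256(policy).digest(), key)
        return handle

    def unseal(self, handle: bytes, policy: bytes) -> bytes:
        expected, key = self._store[handle]
        if not hmac.compare_digest(expected, hashlib.sha256(policy).digest()):
            raise PermissionError("caller does not satisfy the sealing policy")
        return key

chip = SecurityChipModel()
handle = chip.seal(os.urandom(32), policy=b"user:alice;secure-boot:ok")
chip.unseal(handle, policy=b"user:alice;secure-boot:ok")  # authorized: key released
# chip.unseal(handle, policy=b"user:mallory") would raise PermissionError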
Google declined to comment on its chip-to-cloud security strategy.
Intel, AMD, and Nvidia are also building black boxes into their chips to keep AI data safe from hackers. Intel did not respond to requests for comment on its chip-to-cloud strategy, but in previous interviews the company has said that securing chips for AI is a priority.
Security Through Obscurity May Be Effective
But analysts say the chipmakers’ mass-market approach could leave more room for attackers to intercept data or compromise workflows.
Intel and AMD have a documented history of security vulnerabilities, including Spectre, Meltdown and their derivatives, says Dylan Patel, founder of chip consulting firm SemiAnalysis.
“Anyone can get their hands on Intel chips and try to find attack vectors, but that doesn’t apply to Apple chips and servers,” he says.
Apple, by contrast, is a relatively new chip designer and could take a fresh approach to chip design. Its closed ecosystem of chips and servers, Patel says, helps “provide security through obscurity.”
Microsoft has three confidential computing technologies in preview in its Azure cloud: AMD’s SEV-SNP, Intel’s TDX (Trust Domain Extensions), and confidential computing on Nvidia GPUs. Nvidia’s GPUs have become a target for hackers with the growing popularity of AI, and the company recently released patches for high-severity vulnerabilities.
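What these offerings share is remote attestation: before sending data to an enclave or confidential VM, a client verifies a signed report of the environment’s measurements. The sketch below captures the shape of that check in Python; the report format and key handling are simplified placeholders, not the actual SEV-SNP or TDX wire formats.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()  # stands in for the vendor's root key
report = b"measurement=abc123;debug=off"   # stands in for the enclave's report
signature = vendor_key.sign(report)

def safe_to_send(report: bytes, signature: bytes, vendor_pub) -> bool:
    # Release data only if the report verifies against the vendor key
    # and the measured environment satisfies our policy.
    try:
        vendor_pub.verify(signature, report)
    except InvalidSignature:
        return False
    return b"debug=off" in report          # example policy check

print(safe_to_send(report, signature, vendor_key.public_key()))  # True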
Intel and AMD rely on hardware and software partners to deliver their technologies, which creates a longer supply chain to secure, says Alex Matrosov, CEO of hardware security firm Binarly. That gives hackers more opportunities to poison or steal data used in AI, and it complicates patching vulnerabilities, since hardware and software vendors work on their own timelines, he says.
“The technology was not really built from a perspective of seamless integration to focus on actually solving the problem,” Matrosov says. “This introduced many layers of complexity.”
Intel and AMD chips were not originally designed for confidential computing, he adds, and firmware-based rootkits could intercept AI operations.
“The silicon stack has layers of legacy,” Matrosov says. “And then we want confidential computing. It’s not like it’s integrated.”