Samsung has tried hard. But sometimes your best isn’t enough, and right now it seems Samsung hasn’t quite nailed the AI in its phones. Apple has pulled off a neat trick on privacy and security at its premium rival’s expense. If it works, it could be the reason millions of Samsung users switch to iPhone.
It’s been clear for a year that the next generation of smartphone sales will be driven by innovative new AI capabilities, with marketing and sales teams rejoicing that the newer the device, the more AI tricks it can perform. This is fast becoming the biggest and most widespread reason to upgrade a smartphone in years.
But there’s a huge battle over privacy and security here, too — which in the simplest terms can be phrased as on-device versus in-the-cloud. At least until this week. Now Apple has completely changed the game — and no one saw it coming.
Google may have been quickest out of the traps on smartphone AI, outpacing its competitors by adding one capability after another to its apps and services. But the challenge for Google will always be its awkward reputation on security and privacy, especially at the premium end of the market, where Samsung is Apple’s only real Android competitor.
Samsung has been playing a smart game this year. Galaxy AI introduced us to the concept of “hybrid AI,” where sensitive data is processed on the device, reducing the privacy risks of sharing data with cloud models. It’s still early days, but for most of the year it looked like this could steal some of Apple’s thunder.
When the first leaks in the spring suggested that the iPhone’s AI would be on-device only, it looked like a straight-up contest between Apple and Samsung, notwithstanding Google’s increasing focus on its on-device Gemini “Nano” AI. Could Samsung’s more flexible approach beat Apple if the iPhone maker limited itself to on-device silicon? And what of Apple’s potential Faustian pacts with ChatGPT, Gemini, or both?
The problem for Apple, as many of us have commented, was how to field a competitive AI offering while preserving data privacy and security. That seemed a big challenge, especially if you believe Apple was scrambling to respond to unexpected advances from Google and Samsung.
But what we got at WWDC was a remarkable feat from Apple. If it succeeds as expected, it could redefine smartphone AI and set a bar for competitors that may be nearly impossible to clear: a closed ecosystem of hardware and cloud silicon, with a near-end-to-end-encryption philosophy applied to any AI queries or data leaving a user’s device, so that it is anonymized, protected, and open enough for outside researchers to independently verify Apple’s claims.
Samsung doesn’t have an answer to this. Its approach to hybrid AI suddenly seems primitive and disappointing. Apple offers the best of both worlds, willing to promise users that “your data will not be stored or made available to Apple,” even in the cloud, while offering the best and brightest generative AI that can’t be run on-device alone. Private Cloud Compute, at least in theory, redefines the field.
Ironically, this new AI architecture relies on exactly the kind of blanket, end-to-end control that the Justice Department and others are targeting when they allege Apple’s walled garden harms users. Apple needs to be able to verify software and hardware on the device and in the cloud, and it needs a tight interface between the two. This extends to the silicon itself, with its custom processors at both ends designed with this in mind.
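To make the idea concrete, here is a minimal sketch, in Swift, of how a device-side client might gate an AI request on verifying the cloud node it is talking to. Every type and function name here is hypothetical; this is not Apple’s API, just an illustration of the attest-then-encrypt pattern, under the assumption that nodes publish a signed measurement of their software that can be checked against a public log.

```swift
import Foundation
import CryptoKit

// Hypothetical record of a cloud node's identity: a public key bound to a
// signed measurement (hash) of the software image the node booted.
struct NodeAttestation {
    let publicKey: Curve25519.KeyAgreement.PublicKey
    let softwareMeasurement: Data   // hash of the node's OS/model image
    let signature: Data             // signed by the hardware root of trust
}

enum AttestationError: Error {
    case unknownMeasurement
}

// Hypothetical client: it releases data only to nodes running software whose
// measurement appears in a mirrored transparency log.
struct PrivateComputeClient {
    let trustedMeasurements: Set<Data>  // mirrored from a public transparency log

    func verify(_ attestation: NodeAttestation) throws {
        guard trustedMeasurements.contains(attestation.softwareMeasurement) else {
            throw AttestationError.unknownMeasurement
        }
        // A real client would also verify `attestation.signature` against the
        // vendor's hardware root of trust; elided here for brevity.
    }

    func send(_ query: Data, to node: NodeAttestation) throws -> Data {
        try verify(node)  // refuse to release data to unverified software

        // Encrypt to this node only: derive a one-off key from an ephemeral
        // key agreement so no other party can decrypt the request.
        let ephemeral = Curve25519.KeyAgreement.PrivateKey()
        let secret = try ephemeral.sharedSecretFromKeyAgreement(with: node.publicKey)
        let key = secret.hkdfDerivedSymmetricKey(using: SHA256.self,
                                                 salt: Data(),
                                                 sharedInfo: Data(),
                                                 outputByteCount: 32)
        let sealed = try ChaChaPoly.seal(query, using: key)
        return sealed.combined  // sent alongside ephemeral.publicKey.rawRepresentation
    }
}
```

The design point is that the encryption key only exists for a node that has already proved what software it is running, so “trust us” becomes “check the log.”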
As Johns Hopkins University cryptography expert Matthew Green explains, while Apple has always favored on-device processing, “the problem is that while modern phone ‘neural’ hardware is getting better, it’s not getting better fast enough… and that essentially requires servers.”
“For the first time ever, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, ensuring that personal user data sent to PCC isn’t accessible to anyone other than the user, not even to Apple,” Apple says. “With custom Apple silicon and a hardened operating system designed for privacy, we believe PCC is the most advanced security architecture ever deployed for cloud AI compute at scale.”
This approach isn’t easy, Green says. “Building trustworthy computers is actually the hardest problem in computer security. Frankly, it’s almost the only problem in computer security. But while it’s still a hard problem, we’ve made a lot of progress. Apple uses almost all of these techniques.”
That’s how important this is. Forget the image playgrounds and custom emojis, the colorful app icons and more flexible home screens. The game-changing update at WWDC was architectural, and it represents Apple’s biggest innovation in years. Every link in the new chain, hardware and software alike, is attested and verified by the others.
As MIT Technology Review explains, this new architecture “offers an implicit contrast to the likes of Alphabet, Amazon, or Meta, which collect and store vast amounts of personal data. Apple says any personal data passed to the cloud will be used only for the AI task at hand and will not be retained or made available to the company, even for error correction or quality control purposes, after the model completes the request… Apple says people can trust it to analyze incredibly sensitive data—photos, messages, and emails that contain intimate details about our lives—and provide automated services based on what it finds there, without storing the data online or exposing any of it to risk.”
“Processing AI securely and privately in the cloud presents a huge new challenge,” Apple says. “Powerful AI hardware in the data center can fulfill a user’s request with large, complex machine learning models — but it requires unencrypted access to the user’s request and accompanying personal data. That precludes the use of end-to-end encryption, so cloud AI applications have to date used traditional cloud security approaches.” This leaves cloud servers, and the data they hold, open to attack. Apple is careful to stress that its entire approach was designed around a red-team view of how a fully sophisticated adversary might attack it.
So this isn’t just a new approach to cloud privacy; it’s also about cloud security. “Our threat model for Private Cloud Compute involves an attacker with physical access to a compute node and a high level of sophistication — that is, an attacker with the resources and expertise to subvert some of the hardware security properties of the system and potentially extract data that is being actively processed by a compute node.” That means attackers at the high end of the private market, or at nation-state level.
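To illustrate what “stateless” means against that threat model, here is a loose sketch of the contract such a node would have to honor: decrypt only in memory, run the model, return an encrypted answer, and retain nothing. The names are illustrative, not Apple’s code, and in the real system the guarantee comes from the hardened OS and hardware rather than from application logic like this.

```swift
import Foundation

// Hypothetical request handler honoring PCC-style guarantees as described:
// stateless computation, no privileged access paths, nothing retained.
struct StatelessInferenceNode {
    let model: (Data) -> Data  // stands in for the on-node inference model

    func handle(_ encryptedRequest: Data,
                decrypt: (Data) throws -> Data,
                encrypt: (Data) throws -> Data) throws -> Data {
        // 1. Decrypt only into volatile memory; the node exposes no write
        //    path to durable storage for user data.
        var plaintext = try decrypt(encryptedRequest)
        defer { plaintext.resetBytes(in: 0..<plaintext.count) }  // best-effort wipe

        // 2. Run inference. Deliberately, no logger of user data is in scope.
        let answer = model(plaintext)

        // 3. Return the encrypted answer. Once this returns, the node holds
        //    no state tied to the user or the request.
        return try encrypt(answer)
    }
}
```

The point of the sketch is the shape of the guarantee: if nothing persists past the request and no privileged interface can reach the plaintext, then even Apple’s own operators have nothing to hand over.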
With awkward timing, Samsung pitched its latest hybrid AI update just ahead of Apple’s WWDC unveiling. “We believe our hybrid approach is the most practical and reliable solution to address all of these needs and puts Samsung ahead of the curve,” the company said. “We’re providing users with a balance between the instant responsiveness and added privacy assurance of on-device AI and the flexibility of cloud-based AI through open collaboration with industry-leading partners to deliver a variety of features they need in everyday life.”
But the stark reality for Samsung is that Apple has, in theory at least, moved far beyond this hybrid hardware-cloud balance, and the move may prove as compelling as its early leadership in end-to-end encryption. For enterprises deciding whether to trust generative AI with critical tasks, this represents a new model.
We’ll have to see how the transparency Apple has promised works in practice when data or queries leave a device for the cloud, and what detail is provided about the AI model being used. But, again in theory at least, this shifts the discussion away from specific LLMs and away from debating Gemini’s security credentials versus ChatGPT’s. Instead, Apple is doing the hard security and privacy work by building the framework itself. And clearly, if a user actively opts to use a different model for a given task or app, that can be explicitly flagged.
The Apple Intelligence use cases showcased at WWDC are just the ones we can imagine today; the reality is that AI should become so seamless on the device and across the cloud that it is ever less clear when and where it is being applied. “Private Cloud Compute continues Apple’s profound commitment to user privacy,” Apple says. “With sophisticated technologies to satisfy our requirements of stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency, we believe Private Cloud Compute is nothing short of the world-leading security architecture for cloud AI compute at scale.”
This has also handed Apple a new market lead and presented its competitors, Samsung above all, with a serious choice. Will the leading Android OEM stick with a proprietary on-device offering paired with a fundamentally open cloud, or will it look to close this new gap? Apple has surprised the market, and if it delivers as promised, it could redefine the space. “Your phone might seem to be in your pocket,” Green says, “but a part of it lives 2,000 miles away in a data center.”
The question now is which data center to trust, especially for Samsung users weighing up new AI-capable smartphones costing more than $1,000.