Apple’s Worldwide Developer Conference (WWDC) is just around the corner, and this edition is shaping up to be its most important in recent years. GenAI adoption in smartphones is increasing, and several of Apple’s competitors have gained a head start, so there is a lot of expectation riding on what Apple will unveil as its own GenAI strategy. Apple has the power to drive awareness of AI-mediated features in a way that few other players can, but its own AI capabilities are weak, so it has work to do.
Apple will highlight future GenAI use cases
Apple must make a strong statement about GenAI to stay relevant. GenAI is the subject of massive hype right now, even if real use cases are thin on the ground. Apple has the opportunity to recapture some of the initiative it has lost to others, but it’s not going to be easy.
Apple is expected to announce how it will use GenAI models, explaining how it will go beyond user-interface-level use cases and focus on application-level ones. We therefore expect Apple to create a foundation for developers to integrate LLMs into applications.
Siri is likely to get a significant update. Siri has lagged other voice assistants, a gap that has become more and more noticeable as GenAI momentum has gathered pace. Future scenarios for how GenAI, and AI more broadly, can deliver real value to consumers often focus on the evolving role of digital assistants built on software agents. In theory, these assistants will be capable of more human-like interactions, will learn from users’ behaviour, and will take actions on a user’s behalf. For example, users will be able to ask Siri to book hotels or flights without having to open individual applications. This type of intent-based or action-based approach is central to devices such as the Rabbit R1 and the failed Humane AI Pin, and to concepts such as Deutsche Telekom’s demo at MWC24. But there are a lot of questions about where data resides, how privacy is managed, the role of applications and more. We expect that Apple has been thinking about these aspects and will be ready to address them.
To achieve any of this, Apple will need to share information with third-party applications to allow for natural interactions via Siri. This will likely raise as many questions as it answers, but Apple is one of the few companies that just might be able to pull it off.
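To make the idea concrete, here is a minimal sketch of how a third-party app might expose an action that a smarter, LLM-backed Siri could invoke, using Apple’s existing App Intents framework. The intent name, parameters and booking logic are purely illustrative assumptions; Apple has not confirmed how (or whether) its GenAI layer will plug into this framework.

```swift
import AppIntents

// Hypothetical intent a travel app might expose so that an agent-like Siri
// could book a hotel without the user opening the app. App Intents is a real
// framework; "BookHotelIntent" and its parameters are illustrative only.
struct BookHotelIntent: AppIntent {
    static var title: LocalizedStringResource = "Book a Hotel"
    static var description = IntentDescription("Books a hotel room for a given city and dates.")

    @Parameter(title: "City")
    var city: String

    @Parameter(title: "Check-in Date")
    var checkIn: Date

    @Parameter(title: "Nights")
    var nights: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would call its own booking backend here.
        let confirmation = "Booked \(nights) night(s) in \(city) from \(checkIn.formatted(date: .abbreviated, time: .omitted))."
        return .result(dialog: IntentDialog(stringLiteral: confirmation))
    }
}
```

An assistant that understands user intent could map a request such as “find me a room in Lisbon next weekend” to an intent like this, which is exactly where the data-sharing and privacy questions above become acute.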
180 million iPhone users to experience Apple’s native on-device GenAI capabilities by 2025
One of the challenges of GenAI is how to market its benefits to end consumers. Leveraging GenAI capabilities, smartphones will slowly learn and align more closely with their users’ habits and needs. This is a difficult benefit to market: it is best understood only once a user has experienced it on a device over time. This is where Apple’s scale comes into play.
We expect the iPhone 16 Pro and Pro Max will be powered by Apple’s native on-device GenAI capabilities. In addition to on-device AI, Apple is also likely to offer cloud-based GenAI use cases, which can potentially be made available to the broader iPhone installed base.
Privacy to be a key narrative
The future of AI is likely to be hybrid, with some use cases running on device while others use cloud resources, depending on the compute requirements and the criticality of the use case. We would be surprised if Apple lets any third-party models run natively on iPhones. Apple may have to negotiate a way to preserve user data while benefiting from aspects of third-party models. Alternatively, use cases involving third-party models could run in the cloud, with Apple harvesting the results for use on device.
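As an illustration of that hybrid split, the sketch below shows the kind of routing decision a device might make between on-device and cloud inference. The task structure, token budget and privacy rule are assumptions made for this example, not a published Apple API or policy.

```swift
import Foundation

// Illustrative routing logic for a hybrid on-device / cloud GenAI setup.
enum InferenceTarget {
    case onDevice // small local model: private or latency-sensitive tasks
    case cloud    // larger remote model: heavy, non-sensitive requests
}

struct GenAITask {
    let prompt: String
    let estimatedTokens: Int
    let touchesPersonalData: Bool
}

func route(_ task: GenAITask, onDeviceTokenBudget: Int = 2_000) -> InferenceTarget {
    // Keep anything involving personal data on device, in line with a
    // privacy-first posture; only large, non-sensitive jobs go to the cloud.
    if task.touchesPersonalData { return .onDevice }
    return task.estimatedTokens <= onDeviceTokenBudget ? .onDevice : .cloud
}

// Example: a long summarisation request with no personal data would be sent to the cloud.
let summarise = GenAITask(prompt: "Summarise this 40-page report",
                          estimatedTokens: 12_000,
                          touchesPersonalData: false)
print(route(summarise)) // cloud
```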
Either way, privacy will be a primary consideration given Apple’s strong narrative of “what happens on your iPhone, stays on your iPhone”, which has resonated with Apple’s customer base.