If you have upgraded to a newer iPhone model recently, you have probably noticed Apple Intelligence appearing in some of your most-used apps like Messages, Mail, and Notes. Apple Intelligence, also abbreviated to AI, arrived in Apple’s ecosystem in October 2024. It represents Apple’s effort to compete with other leading AI developers like Google, OpenAI, and Anthropic in building the best AI tools.
Apple marketing executives have branded Apple Intelligence as “AI for the rest of us.” The platform is designed to leverage the strengths of generative AI, such as text and image generation, to improve upon existing features. Like other platforms such as ChatGPT and Google Gemini, Apple Intelligence is built on large models trained on vast datasets. These systems use deep learning to form connections across text, images, video, and music.
The text offering, powered by a large language model, presents itself as Writing Tools. This feature is available across various Apple apps including Mail, Messages, Pages, and Notifications. It can be used to provide summaries of long text, proofread, and even write messages for you based on content and tone prompts.
Image generation has also been integrated in a similar fashion, though a bit less seamlessly. Users can prompt Apple Intelligence to generate custom emojis, known as Genmojis, in a distinctive Apple house style. Image Playground is a standalone image generation app that uses prompts to create visual content for use in Messages, Keynote, or sharing on social media.
Apple Intelligence also marks a long-awaited update for Siri. The smart assistant was an early pioneer of voice control but has been largely neglected in recent years. Siri is now integrated more deeply into Apple’s operating systems. Instead of the familiar icon, users will see a glowing light around the edge of their iPhone screen when Siri is active.
More importantly, the new Siri works across apps. This means you can ask Siri to edit a photo and then insert it directly into a text message, creating a frictionless experience the assistant previously lacked. Onscreen awareness allows Siri to use the context of the content you are currently engaged with to provide an appropriate answer.
Leading up to WWDC 2025, many expected Apple to introduce an even more advanced version of Siri, but that announcement has been delayed. According to Apple SVP of Software Engineering Craig Federighi, more time is needed to reach the company’s high-quality bar for the features that will make Siri even more personal.
This yet-to-be-released, more personalized version of Siri is intended to understand personal context, such as your relationships, communications, and routine. However, a Bloomberg report indicated that the in-development version is too error-ridden to ship, hence the delay.
At WWDC 2025, Apple unveiled a new AI feature called Visual Intelligence, which helps you perform an image search for things you see as you browse. Apple also introduced a Live Translation feature that can translate conversations in real time within Messages, FaceTime, and Phone apps. Visual Intelligence and Live Translation are expected to be available later in 2025 with the public launch of iOS 26.
After months of speculation, Apple Intelligence took center stage at WWDC 2024. The platform was announced following a wave of generative AI news from companies like Google and OpenAI, which had caused concern that Apple had missed the boat on the latest tech trend. Contrary to that speculation, Apple had a team working on what proved to be a very Apple approach to artificial intelligence.
Apple Intelligence is not a standalone feature but is instead integrated into existing offerings. While there is a branding element to the name, the underlying large language model-driven technology operates behind the scenes. For the consumer, it mostly presents itself as new features for existing apps.
More was revealed during Apple’s iPhone 16 event in September 2024. The event showcased several AI-powered features coming to Apple devices, including translation on the Apple Watch Series 10, visual search on iPhones, and various tweaks to Siri’s capabilities. The first wave of Apple Intelligence arrived at the end of October as part of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1.
The features launched first in U.S. English, with Australian, Canadian, New Zealand, South African, and U.K. English localizations added later. Support for Chinese, English for India, English for Singapore, French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese is scheduled to arrive in 2025.
That first wave included integrated writing tools, image cleanup, article summaries, and typed input for the redesigned Siri experience. A second wave of features became available as part of iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, including Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration.
These offerings are free to use for owners of compatible hardware. This includes all iPhone 16 models, iPhone 15 Pro Max, iPhone 15 Pro, iPad Pro with M1 and later, iPad Air with M1 and later, iPad mini with A17 or later, MacBook Air with M1 and later, MacBook Pro with M1 and later, iMac with M1 and later, Mac mini with M1 and later, Mac Studio with M1 Max and later, and Mac Pro with M2 Ultra.
Notably, among iPhone 15 models, only the Pro versions are compatible, due to limitations of the standard model’s chipset. The entire iPhone 16 line, by contrast, is capable of running Apple Intelligence.
When you ask other AI platforms a question, your query is typically sent to external servers, requiring an internet connection. Apple has instead taken a small-model, bespoke approach to training: because the company compiled specific datasets for tasks like composing an email, many tasks are less resource-intensive and can be performed directly on the device.
However, more complex queries utilize the new Private Cloud Compute offering. Apple operates remote servers running on Apple Silicon, which it claims offers the same level of privacy as its consumer devices. Whether an action is performed locally or via the cloud is invisible to the user unless the device is offline, at which point remote queries will generate an error.
A lot of noise was made about Apple’s pending partnership with OpenAI ahead of the launch of Apple Intelligence. Ultimately, the deal was less about powering Apple Intelligence and more about offering an alternative platform for tasks it is not built for, acknowledging the limitations of a small-model system.
Apple Intelligence is free, and access to ChatGPT is also free. However, users with paid ChatGPT accounts will have access to premium features, including unlimited queries. ChatGPT integration debuted on iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. It has two primary roles: supplementing Siri’s knowledge base and adding to the existing Writing Tools options.
With the service enabled, certain questions will prompt Siri to ask for approval to access ChatGPT. Queries about recipes or travel planning may surface this option. Users can also directly prompt Siri to ask ChatGPT. Compose is the other primary ChatGPT feature available through Apple Intelligence, accessible in any app that supports Writing Tools. It adds the ability to write content based on a prompt, joining existing tools like Style and Summary.
Apple has confirmed plans to partner with additional generative AI services and has all but stated that Google Gemini is next on the list.
At WWDC 2025, Apple announced the Foundation Models framework, which allows developers to tap into its AI models while offline. This enables developers to build AI features into their third-party apps that leverage Apple’s existing systems. For example, an app like Kahoot could create a personalized quiz from your notes for studying. Because it uses on-device models, this happens without cloud API costs. Apple expressed excitement about how developers can build on Apple Intelligence to create smart, offline, and privacy-protecting experiences.
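To make the idea concrete, here is a rough sketch of the kind of on-device call the framework enables, such as the hypothetical study-quiz example above. The type and method names follow Apple’s announced Foundation Models API, but treat this as an illustrative sketch under those assumptions rather than a definitive implementation:

```swift
import FoundationModels

// Sketch: generate a short study quiz from a user's notes using the
// on-device model — no network request or cloud API cost involved.
// Assumes the Foundation Models framework (iOS 26 / macOS 26 SDKs);
// the function name and prompt wording here are illustrative.
func makeQuiz(from notes: String) async throws -> String {
    // A session can carry standing instructions that shape every response.
    let session = LanguageModelSession(
        instructions: "You are a study assistant. Write three short quiz questions."
    )
    // The prompt and the user's notes are processed entirely on device.
    let response = try await session.respond(
        to: "Create quiz questions based on these notes:\n\(notes)"
    )
    return response.content
}
```

Because the model runs locally, a call like this keeps the user’s notes on the device and works offline, which is the privacy-protecting experience Apple is encouraging developers to build.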
Apple is expected to unveil a new-and-improved Siri experience in 2026, which would already put it behind competitors. To speed up development, Apple may have to partner with an outside company to power the new Siri, and it has reportedly been in advanced talks with Google, its primary smartphone hardware competitor, for this purpose.

