Apple Intelligence is Apple’s artificial intelligence platform. The company says it “draws on your personal context while setting a brand-new standard for privacy in AI.” It was introduced during the WWDC 2024 keynote, and it’s a central part of Apple’s iPhone, iPad, and Mac devices, starting with iOS 18, iPadOS 18, and macOS Sequoia.
Release date
Apple Intelligence has been available for Apple devices since October 28, 2024, with iOS 18.1, iPadOS 18.1, and macOS 15.1. So far, the company has limited these features to beta releases. Apple teased more functions coming at a later date during the iPhone 16 event.
Features
These are some of the Apple Intelligence features we’ll see on iPhone, iPad, and Mac:
- Writing Tools: Users can rewrite, proofread, and summarize text nearly everywhere they write, including Mail, Notes, Pages, and third-party apps;
- Image Playground: Users can create playful images in seconds, choosing from Animation, Illustration, or Sketch. This app is built right into apps like Messages and is also available in a dedicated app;
- Memories in Photos: Users can create stories they want to see just by typing a description. Apple Intelligence will pick out the best photos and videos based on the description, craft a storyline with chapters based on themes identified from the photos, and arrange them into a movie with its own narrative arc;
- Clean Up tool: This Photos app feature can identify and remove distracting objects in the background of a photo without accidentally altering the subject;
- Siri: Users can type to Siri and switch between text and voice to communicate in whatever way feels right for the moment;
- ChatGPT integration: When you feel Apple Intelligence isn’t enough, you can allow ChatGPT to access Writing Tools and other features for a better response.
iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 start the Apple Intelligence revolution
In July, Apple released the first betas of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. Exclusive to iPhone 15 Pro and M1 (or newer) devices, these operating system updates arrived by the end of October with these functions:
- Writing Tools: Proofread your text, rewrite different versions until the tone and wording are right, and summarize the selected text with a tap.
- Improved Siri: With a new design, Siri can maintain context between requests. Even if you stumble over words or shift what you’re saying mid-sentence, Siri can understand what you actually want.
- Priority notifications: They appear at the top of the stack, letting you know what to pay attention to at a glance. Notifications are summarized, so you can scan them faster.
- Priority messages in Mail: Elevate time-sensitive messages to the top of your inbox, like an invitation that has a deadline today or a check-in reminder for your flight this afternoon.
- Record and transcribe calls in the Notes app: Just hit record in the Notes or Phone apps to capture audio recordings and transcripts. Apple Intelligence generates summaries of your transcripts, so you can get to the most important information at a glance.
- Reduce interruptions: With iOS 18.1 beta 1, an all-new Focus Mode understands the content of your notifications and shows you the ones that might need immediate attention, like a text about picking up your child from daycare later today.
- Smart Reply in Mail: Quickly draft an email response with all the right details. Apple Intelligence can identify the question you were asked in an email and offer relevant selections to include in your response.
- Clean Up: This Photos app feature can identify and remove distracting objects in the background of a photo without accidentally altering the subject.
- Summarization: Apple Intelligence can now summarize more than just Messages and Mail notifications.
iOS 18.2 beta features
iOS 18.2 is expected to be released in December 2024. These are some of the newest Apple Intelligence features:
- Genmoji support: Create custom emoji by typing a prompt like “monkey with pink hat” to make an entirely new emoji — here’s how to use Genmoji on your iPhone
- Image Playground: Users can create playful images in seconds, choosing from Animation, Illustration, or Sketch. This app is built right into apps like Messages and is also available in a dedicated app
- ChatGPT integration: When you feel Apple Intelligence isn’t enough, you can allow ChatGPT to access Writing Tools and other features for a better response
- Visual Intelligence: It helps users learn about objects and places faster than ever. Users can click and hold Camera Control to pull up the hours or ratings for a restaurant they pass, add an event from a flyer to their calendar, quickly identify a dog by breed, and more
- Image Wand: “Rough sketches can be turned into delightful images, and users can even select empty space to create an image using context from the surrounding area” in the Notes app
- Create Images expansion: Another Apple Intelligence feature available with iOS 18.2 beta 2 is the ability to create an image when you highlight text in the Notes app.
Here’s how to use Apple Intelligence.
Apple Intelligence schedule
After the iOS 18.1 release, Apple will continue to work on Apple Intelligence features. This is what we expect for each major iOS 18 launch:
- iOS 18.2: Expected to launch later this year, it should get Image Playground, Genmoji, and ChatGPT integration;
- iOS 18.3: Expected to launch early next year, it should bring Siri upgrades;
- iOS 18.4: Expected to launch in March, it should finally bring a revamped Siri that can control apps, find information from them, and more.
For iOS 19, Apple will keep improving Apple Intelligence features. However, the most impressive features might take longer to launch:
As Mark Gurman explains in his latest Power On newsletter, the first version of Apple Intelligence will make “cosmetic” changes. It will have a new user interface and support for typing questions to Siri instead of speaking them.
More interesting are Siri’s new abilities to access user data for contextual information about the user and control third-party apps via the new App Intents software. However, Gurman points out that Siri isn’t getting a ChatGPT-like model for these tasks:
“These upcoming upgrades will make Siri easier to use on a day-to-day basis, but it’s not the brain transplant that the service really needs. Siri is still based on an outdated infrastructure — AI models that have been overtaken by the technology used by ChatGPT and Google’s Gemini. Siri hasn’t yet been rebuilt for the generative AI age, even if Apple is trying to create the impression that it has.”
iPhone 16 to get exclusive Apple Intelligence features
While we first thought all Apple Intelligence features would be available on every compatible Apple device, the company revealed that some will be exclusive to iPhone 16 users thanks to the new Camera Control.
Cupertino explains that later this year, Camera Control will unlock Visual Intelligence to help users learn about objects and places faster than ever before. Users can click and hold Camera Control to pull up the hours or ratings for a restaurant they pass, add an event from a flyer to their calendar, quickly identify a dog by breed, and more.
Camera Control will also serve as a gateway into third-party tools with specific domain expertise, like when users want to search on Google to find where they can buy an item, or to benefit from ChatGPT’s problem-solving skills.
Apple Intelligence set to expand in 2025 to more languages
During the iPhone 16 event, Apple announced it would expand Apple Intelligence to more languages in 2025.
According to the company, Apple Intelligence will first launch in U.S. English and will quickly expand to include localized English in Australia, Canada, New Zealand, South Africa, and the U.K. in December.
In 2025, Apple will add the following languages:
- Chinese
- French
- Japanese
- Spanish
Tim Cook explains Apple and OpenAI’s ChatGPT partnership
Rumors were true, and Apple has partnered with OpenAI. According to the company, the two systems work together seamlessly, but core differences separate them.
With Apple Intelligence, the company ensures that all data stays private through Private Cloud Compute, while OpenAI’s ChatGPT usually collects user data. In an interview with YouTuber Marques Brownlee, Apple CEO Tim Cook explained the core difference between Apple Intelligence and the ChatGPT partnership.
“There’s Private Cloud Computing, and there’s the arrangement with OpenAI,” says Tim Cook. “These two things are different. So, if you look at Private Cloud Compute, we’re utilizing the same basic architecture as the silicon that’s in the iPhone 15. We’re using the same software, and so we believe that we’ve done it in such a way that it’s as safe and secure and private in the Private Cloud Compute as in the device.”
That means Apple won’t collect users’ data, build a profile of the user, or sell this data elsewhere. Cupertino aims to take the iPhone’s on-device processing to the next level with the security people are used to on their iPhones.
Tim Cook continues: “So we really, we really worked on this on a lot and put a lot of work behind that arrow to be sure that if you’re working on something that requires world knowledge, so you’re out of the domain of personal context and so forth, then you may want to go and use one of the large language models that are on the market, and we will be selected what we feel is the best one with OpenAI and ChatGPT.”
That said, all personal requests related to Apple’s built-in apps, such as Messages, Mail, Calendar, and more, will use the company’s own intelligence. In contrast, “world knowledge” requests can go to OpenAI’s ChatGPT and, later, to other large language models.
New LLMs can join the party later
While Apple will first integrate with OpenAI, the company plans to work with other LLMs as well. For example, Cupertino is reportedly in talks with Google to license Gemini.
A report also claims Apple will use Baidu for its generative AI functions in China. Baidu’s Ernie Bot is a ChatGPT rival and one of the more than 40 AI models from China that local regulators have approved. A partnership with Apple would be a big win for Baidu, considering the growing competition in the region.
Apple Intelligence compatible devices
During the WWDC 2024 keynote, Apple announced which devices will be compatible with Apple Intelligence:
- iPhone 15 Pro models or newer
- M1 iPad models or newer (such as the M4 iPad Pro) and the new iPad mini (A17 Pro)
- Apple Silicon Macs running macOS Sequoia
Apple papers suggest where its AI efforts are at
AI model for instruction-based image editing
In February, Apple researchers published an AI model for instruction-based image editing. According to their paper, instruction-based image editing improves the controllability and flexibility of image manipulation via natural commands without elaborate descriptions or regional masks. The study shows “promising capabilities in cross-modal understanding and visual-aware response generation via LM” as the researchers investigated how MLLMs facilitate edit instructions and MLLM-guided image editing.
This image editing AI model made by Apple can produce concise and clear instructions for the editing process, create Photoshop-style modifications, optimize photo quality, and edit specific elements of a picture, such as faces, eyes, hair, clothes, and accessories.
MM1: Apple’s AI model
In March, Apple researchers published a paper highlighting how they’re training a new large language model (LLM).
Called MM1, this LLM can integrate text and visual information simultaneously. The paper offers an interesting look at the importance of various architectural components and data choices. The researchers say they were able to “demonstrate that for large-scale multimodal pre-training using a careful mix of image-caption, interleaved image-text, and text-only data is crucial for achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results.”
In addition, they showed that “the image encoder together with image resolution and the image token count has a substantial impact, while the vision-language connector design is of comparatively negligible importance.”
Apple’s MM1 AI model uses a family of multimodal models with up to 30 billion parameters, consisting of both dense models and mixture-of-experts (MoE) variants, that are state-of-the-art in pre-training metrics and achieve competitive performance after supervised fine-tuning on a range of established multimodal benchmarks.
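The mixture-of-experts (MoE) design mentioned above activates only a few expert subnetworks per input instead of the full parameter set. Here is a minimal, illustrative sketch of a top-k MoE layer in Python with NumPy; the sizes, routing, and weights are toy assumptions for explanation, not Apple’s actual MM1 architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: a router scores every expert for a given
# input, only the top-k experts run, and the layer output is the
# softmax-weighted sum of their outputs.
NUM_EXPERTS, TOP_K, DIM = 4, 2, 8

router_w = rng.normal(size=(DIM, NUM_EXPERTS))       # router weights
expert_w = rng.normal(size=(NUM_EXPERTS, DIM, DIM))  # one weight matrix per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass for a single input vector x of shape (DIM,)."""
    logits = x @ router_w                      # score each expert
    top = np.argsort(logits)[-TOP_K:]          # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                   # softmax over selected experts only
    # Weighted combination of the selected experts' outputs; the other
    # experts' parameters are never touched for this input.
    return sum(w * (x @ expert_w[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=DIM))
print(y.shape)  # → (8,)
```

Scaled up, this routing is what lets an MoE variant of a model with tens of billions of parameters keep per-token compute closer to that of a much smaller dense model.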
ReALM could be better than OpenAI’s GPT-4
Apple researchers have published a paper about a new AI model. According to the company, ReALM is a language model that can understand and successfully handle contexts of different kinds. With that, users can ask about something on the screen or running in the background, and the language model can still understand the context and give the proper answer.
This is the third paper regarding AI that Apple has published in the past few months. These studies only tease the upcoming AI features of iOS 18, macOS 15, and Apple’s newest operating systems. In the paper, Apple researchers say, “Reference resolution is an important problem, one that is essential to understand and successfully handle context of different kinds.”
One example is a user asking for pharmacies near them. After a list is presented (something Siri can already do), the user could ask, “Call the one on Rainbow Rd.,” “Call the bottom one,” or “Call this number (present onscreen).” Siri can’t perform this follow-up today, but ReALM could understand the context by analyzing on-device data and complete the query.
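To make the task concrete, reference resolution means matching a follow-up request against a textual encoding of what’s on screen. The Python sketch below is a hypothetical, rule-based toy built around the pharmacy example above; ReALM itself is a fine-tuned language model, so this illustrates only the problem, not Apple’s method:

```python
# Toy reference resolver: matches a user's follow-up request against a
# textual list of on-screen entities, the kind of context ReALM encodes.
# Hypothetical rules for illustration only — ReALM is a fine-tuned LLM.

def resolve_reference(request: str, onscreen_items: list[str]) -> str:
    req = request.lower()
    # Positional references resolve by list order.
    if "bottom" in req or "last" in req:
        return onscreen_items[-1]
    if "top" in req or "first" in req:
        return onscreen_items[0]
    # Otherwise pick the item sharing the most words with the request.
    def overlap(item: str) -> int:
        return len(set(item.lower().split()) & set(req.split()))
    return max(onscreen_items, key=overlap)

pharmacies = [
    "CVS Pharmacy, 12 Main St.",
    "Walgreens, 48 Rainbow Rd.",
    "Rite Aid, 7 Ocean Ave.",
]
print(resolve_reference("Call the one on Rainbow Rd.", pharmacies))
# → Walgreens, 48 Rainbow Rd.
print(resolve_reference("Call the bottom one", pharmacies))
# → Rite Aid, 7 Ocean Ave.
```

The hard part, and what a model like ReALM targets, is handling phrasings these brittle rules would miss while still running on-device.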
Ferret LLM
This paper explains how a multimodal large language model can understand the user interfaces of mobile displays. The researchers say MLLMs have advanced but still “fall short in their ability to comprehend and interact effectively with user interface (UI) screens.”
This assistant is still far from being released. But once Apple masters it, it could be integrated alongside the ReALM model.
BGR will update this guide as we learn more about Apple’s AI efforts.