Apple’s Greatest AI Challenge: Ensuring Ethical and Reliable Behavior

Will Knight

Apple has a history of succeeding despite being late to market: the iPhone, the Apple Watch, and AirPods, to name a few. Now the company hopes the same approach will work with generative artificial intelligence, announcing today an Apple Intelligence initiative that bakes the technology into just about every device and application Apple offers.

Apple unveiled its long-awaited AI strategy at the company’s Worldwide Developers Conference (WWDC) today. “This is a moment we’ve been working towards for a long time,” said Apple CEO Tim Cook at the event. “We’re tremendously excited about the power of generative models.”

That may be so, but Apple also seems to understand that generative AI must be handled with care since the technology is notoriously data hungry and error prone. The company showed Apple Intelligence infusing its apps with new capabilities including a more capable Siri voice assistant, a version of Mail that generates complex email responses, and Safari summarizing web information. The trick will be doing those things while minimizing hallucinations, potentially offensive content, and other classic pitfalls of generative AI—while also protecting user privacy.

Apple Intelligence will access private user data to tailor its models to individual preferences, behaviors, and schedules, while seeking to minimize the privacy compromises involved. Craig Federighi, Apple’s senior vice president of software engineering, stressed the importance of privacy-centric intelligence at a post-WWDC briefing, saying, “To really serve you, intelligence must revolve around you, and this requires profound considerations of privacy.”

Unlike AI platforms such as ChatGPT that operate primarily in the cloud, Apple says Apple Intelligence will mainly rely on AI models that run locally on its devices. The company has devised methods to determine when a query should be escalated to a more capable cloud-based model, and it has introduced a feature called Private Cloud Compute to secure personal data that must be sent off-device.

In a blog post, Apple explains that Private Cloud Compute is designed to prevent query data from being retained by any model or stored on its servers, so that neither developers nor Apple can access sensitive information. The system runs on new server hardware built around Apple silicon and uses end-to-end encryption to guard against unauthorized access to data.

“I think it solves a necessary, profound challenge,” Federighi said. “Cloud computing typically comes with some real compromises when it comes to privacy assurances. Even if a company makes some promise, ‘We’re not going to do anything with your data,’ you have no way to verify that.”

Keeping that data private shouldn’t compromise Apple Intelligence’s capabilities, said John Giannandrea, senior vice president of machine learning and AI strategy, at the same briefing. In another blog post, Apple revealed that it has developed its own AI models using a framework called AXLearn, which it made open source in 2023. It said that it has employed several techniques to reduce the latency and boost the efficiency of its models.

Giannandrea mentioned that Apple’s focus on reducing hallucinations in its models is achieved by utilizing curated data. He explained, “We have put considerable energy into training these models very carefully. So we’re pretty confident that we’re applying this technology responsibly.”

This cautious approach extends across Apple’s products. If it works as intended, it should make Apple’s AI features less likely to produce or surface inappropriate content. Apple’s blog post notes that testers found its models more helpful and less problematic than competing on-device models from companies like OpenAI, Microsoft, and Google. Federighi put it more colorfully: “We’re not taking this teenager and sort of telling him to go fly an airplane.”

Apple’s new integration with OpenAI keeps ChatGPT at a cautious distance. Siri, along with a new feature dubbed Writing Tools, will hand off only the more complex queries to ChatGPT, and only with user consent. “We’ll ask you before you go to ChatGPT,” Federighi said, adding that users will retain full privacy control and transparency whenever they leave Apple’s secure environment to use the external model.

Collaborating with OpenAI was once an unlikely scenario for Apple, given the startup’s swift ascent spurred by its advanced chatbots, as well as its history of controversies including legal disputes, board upheavals, and aggressive promotion of an occasionally unreliable technology. Federighi hinted at a potential future inclusion of Google’s advanced Gemini model into Apple’s AI suite, though no definitive plans were disclosed.

Despite criticism for its slower pace in adopting generative AI technologies compared to rivals, Apple remains a player in the AI field with its own significant research outputs, including proprietary multimodal models that operate directly on devices.

Apple appeared to pioneer AI-driven personal computing when it debuted Siri in 2011. The assistant was a frontrunner in using then-recent AI advancements to enhance speech recognition and convert a variety of voice commands into actionable tasks on the iPhone.

Competitors such as Amazon, Google, and Microsoft introduced voice assistants of their own, but their usefulness was limited by the complexity and ambiguity of language. Advances in large language models, like those behind ChatGPT, have significantly improved machines’ ability to process spoken language. Apple and other companies now hope to use that progress to make personal assistants better at understanding complex instructions, holding more natural conversations, and even generating code on the fly.

“They came through with a commitment to personal, private, and context-aware AI,” stated Tom Gruber, an AI entrepreneur and co-founder of the company that developed Siri, which was acquired by Apple in 2010. Gruber expressed satisfaction with demonstrations focusing on these features.

Some believe that Apple’s recent updates aim to keep pace with competitors without taking significant risks. “What Apple excels at is introducing new capabilities and presenting new ways of doing things,” mentioned David Yoffie, a professor at Harvard Business School. However, he noted that the latest announcements appeared to be about catching up.

Yoffie highlighted Apple’s emphasis on data privacy and security, a critical issue given public concern about sharing data with services like ChatGPT. “Generative AI serves as a complement for the iPhone,” he explained, adding that the presentation was crucial for Apple to show it was not falling behind rivals in the Android ecosystem.

Nonetheless, generative AI inherently bears elements of unpredictability. While Apple Intelligence may have functioned as intended during trials, predicting every outcome once released to millions of iOS and macOS users is impossible. To fulfill the expectations set at WWDC, Apple must integrate a capability into its AI that remains unprecedented: ensuring consistent and reliable behavior.
