Revolutionary Health Screening Unveiled: From Restroom Entry to Instant Diagnosis - Insights

Brian

Revolutionizing AI: Apple’s Progress and Areas Needing Improvement - Insights

Tim Cook presenting AI at WWDC 2024 (Image: Jason Hiner/ZDNET)

During WWDC 2024, Apple introduced the Apple Intelligence platform, which brings generative artificial intelligence (AI) and machine learning to the forefront. This platform utilizes large language and generative models to handle text, images, and in-app actions.

This initiative integrates advanced AI capabilities across the Apple ecosystem to transform device interaction. However, current iPhone and iPad users might need to upgrade their devices to take full advantage of these benefits.

Also: Everything Apple announced at WWDC 2024, including iOS 18, Siri, AI, and more

In a previous article, I recommended several key steps for Apple to stay competitive in the AI race. Let’s see how Apple’s announcements measure up to these recommendations and where there is room for improvement.

Disclaimer: This post includes affiliate links

If you click on a link and make a purchase, I may receive a commission at no extra cost to you.

What Apple Intelligence will bring to the company’s operating system platforms

AI on the device and in the cloud

Apple Intelligence brings powerful generative models to iPhone, iPad, and Mac. To preserve security and privacy, on-device capabilities require at least an A17 Pro chip, which on the phone side limits them to the iPhone 15 Pro and Pro Max. Similarly, only iPads with M-series chips (like the latest iPad Air and iPad Pro) and Macs running Apple Silicon will be compatible. Many users with older devices or non-Pro models will miss out on these advanced features.


For more demanding tasks, Apple introduced Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed for private AI processing. PCC extends the industry-leading security and privacy of Apple devices into the cloud, ensuring that personal user data sent to PCC isn’t accessible to anyone other than the user – not even Apple. Built with custom Apple Silicon and a hardened operating system designed for privacy, PCC represents a generational leap in cloud AI compute security.

In terms of AI infrastructure, Apple also introduced its Foundation Models, including a ~3 billion parameter on-device language model and a larger server-based model running on Apple Silicon servers within the company’s data centers. These models are fine-tuned for specialized tasks and optimized for speed and efficiency.

Also: Every iPhone model that will get Apple’s iOS 18 (and which ones won’t)

Room for improvement: Apple fell short of AI infrastructure leadership by not announcing AI-accelerated server appliances at the edge, which would let less capable devices, such as the base iPhone 15 and earlier iOS 18-supported models, use Apple Intelligence’s more advanced features. The hybrid model of on-device processing plus PCC is a step in the right direction, but Apple made no mention of AI-accelerated edge network devices that could enhance performance and reduce latency. Apple is typically not transparent about how it deploys resources in its data centers, so it may plan to field such appliances at the edge without disclosing specifics. And while the short list of Responsible AI Principles the company has documented is a good start, an AI ethical disclosure statement along the lines of what Adobe has published would further bolster trust and transparency.

Embracing third-party AI providers

Apple has dipped its toes into ChatGPT integration, indicating a willingness to integrate third-party services and partner with multiple AI providers. During the keynote, Apple said it would partner with additional providers to support third-party large language models (LLMs) beyond OpenAI ChatGPT (free, Plus, and presumably Enterprise), but it did not name those models. Potential candidates include Microsoft Copilot, Google Gemini, Meta Llama 3, Amazon Titan, and Hugging Face models, among many others.

Also: How to install iOS 18 developer beta (and which models support it)

Room for improvement: While Apple’s intention to be LLM-agnostic is a positive sign for the company’s AI strategy, I had hoped for a broader embrace of AI integration with third-party platforms, particularly in health, finance, and education. That shift will depend on developers embracing the new SiriKit, App Intents, Core ML, Create ML, and other APIs. Deeper integration with specialized AI providers could significantly enhance Apple Intelligence’s functionality and versatility.

Smart notifications and writing tools

Smart notifications in Apple’s operating systems will leverage on-device LLMs to sift through the noise and ensure that only the most important alerts make it through. This is part of the new Reduce Interruptions Focus, which shows users key details for each notification. System-wide writing tools can write, proofread, and summarize text for users, from short messages to long blog posts, with the Rewrite feature providing multiple versions of text based on the intended audience.

Also: You can finally schedule messages on the iPhone. Here’s what to know

Room for improvement: Building on the Reduce Interruptions Focus, further development in proactive assistance features that anticipate user needs based on past behavior and context would be beneficial.

AI image generation and Genmoji

Apple has opened up a world of creative possibilities by integrating the Image Playground API into all apps. Users can create AI-generated images in three styles: Sketch, Animation, and Realism. Imagine creating and sharing these images directly within Messages or Pages – it’s a game-changer. In Notes, a new Image Wand tool can generate images based on the current page content. Genmoji allows users to create custom emojis, adding a personalized touch to communications.

Room for improvement: Providing more granular controls and customization options for the generated images and Genmojis, such as fine-tuning styles and attributes, could cater to more specific user preferences. Additionally, implementing features that suggest image enhancements or emoji creations based on user activity and context could further streamline the creative process.

Enhanced Siri and task automation

Siri, the voice assistant we’ve come to know and tolerate, is finally getting a much-needed upgrade. With advanced natural language processing (NLP), Siri can understand users even if they stumble over their words, and it can maintain conversational context, making interactions more seamless and intuitive. You can now type requests to Siri, a feature bound to be a hit in noisy environments. Siri’s new look, a glowing light that wraps around the screen edges when activated, adds a modern touch.

Siri’s improved contextual awareness allows it to handle tasks like finding specific photos, playing podcasts, and retrieving shared files based on user commands. The assistant can pull driver’s license information from a photo and input it into a form. In Photos, the AI can use NLP to search for specific photos or video clips and remove distracting objects with the new Clean Up tool.

Also: Here’s how Apple’s keeping your cloud-processed AI data safe

The new Reduce Interruptions feature ensures that only the most important notifications get through based on your activity. On the iPad, handwriting optimization (Smart Script) and mathematical interpretation capabilities make it easier to write equations with the Apple Pencil and have them solved by the Calculator app. In Notes, the Image Wand transforms rough sketches into polished images, and you can record and transcribe audio with text summaries generated by Apple Intelligence. A clean-up tool removes unwanted objects in Photos, and Search in Videos helps find specific snippets.

Apple Intelligence also performs actions within apps on behalf of the user. It can open Photos and show images of specific groups based on a request. In Mail, priority messages are highlighted with summaries for quick insight. Notes users can record, transcribe, and summarize audio, creating summary transcripts of calls with automatic notifications to participants.

Room for improvement: While Apple has made significant progress, future updates could further enhance Siri’s capabilities, automate more complex tasks, and provide deeper personalization across the Apple ecosystem.

AI capabilities across Apple products

Lastly, enhancing AI capabilities across all Apple products, including Siri, Apple Music, Apple News, Health, Fitness+, TV, and HomeKit, was a major recommendation. While Apple’s AI features are integrated across devices, the specific enhancements for services like Apple Music and HomeKit were limited, at least as addressed in the WWDC keynote.

Also: What is Apple Intelligence? How the iPhone’s on-device and cloud-based AI works

Room for improvement: We also haven’t heard anything about HomePod or Apple TV gaining Apple Intelligence, although neither product has the computational power to run on-device generative AI. Similarly, there were no mentions of new AI capabilities in watchOS. While these devices might tap some of the cloud capabilities of Apple Intelligence, this was not brought up in the keynote. Additionally, with its M2 chip, the Vision Pro is powerful enough to handle Apple Intelligence’s on-device features, yet the keynote did not discuss what would be coming to that device specifically.

The developer story

At WWDC 2024, Apple is doubling down on empowering developers with the tools and APIs they need to unlock Apple Intelligence’s full potential. An extensive lineup of developer sessions highlights Apple’s commitment to fostering a vibrant AI development ecosystem.

These sessions will offer deep dives into optimizing and implementing machine-learning models on iOS, iPadOS, and macOS. The goal is to equip developers with the knowledge to harness Apple’s advanced AI capabilities.

One of the standout features is, of course, the enhanced Siri. Developers will learn how to integrate their apps with SiriKit, using its improved NLP to create more seamless and intuitive user interactions. App Intents will also be a key focus, allowing developers to bring their app’s core features directly to users through Siri and other system services.
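To make the App Intents pattern described above concrete, here is a minimal Swift sketch. The intent name, parameter, and behavior are hypothetical placeholders, but the `AppIntent` protocol, `@Parameter` property wrapper, and `perform()` requirement come from Apple’s App Intents framework.

```swift
import AppIntents

// Hypothetical intent exposing an app feature to Siri and Shortcuts.
struct SummarizeNotesIntent: AppIntent {
    static var title: LocalizedStringResource = "Summarize My Notes"

    // A parameter Siri can ask for, or infer from conversational context.
    @Parameter(title: "Notebook Name")
    var notebook: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // App-specific logic would run here; this placeholder just echoes.
        return .result(dialog: "Summarizing notes in \(notebook)…")
    }
}
```

Because the system discovers intents declared this way, the same feature becomes reachable from Siri, Spotlight, and the Shortcuts app without separate glue code for each entry point.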

Also: Apple coders, rejoice! Your programming tools just got a big, free AI boost

With Apple Silicon leading the charge, sessions will offer guidance on optimizing machine learning and AI models specifically for these powerful chips, including deploying models with Core ML and supporting real-time ML inference on the CPU. Updates to Create ML will also be covered, with a focus on training models more efficiently and effectively.
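The Core ML deployment step mentioned above typically boils down to a few lines of Swift. In this sketch, `MyClassifier` is a hypothetical stand-in for the class Xcode generates from a converted model; `MLModelConfiguration` and its `computeUnits` option are real Core ML API.

```swift
import CoreML

// Choose where inference runs: .cpuOnly gives predictable real-time
// latency, while .cpuAndNeuralEngine also uses the Neural Engine.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine

// "MyClassifier" is a placeholder for a generated Core ML model class.
let model = try MyClassifier(configuration: config)
// let prediction = try model.prediction(input: someInput)
```

Pinning compute units explicitly is the usual way to trade peak throughput for consistent latency on-device.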

Another major highlight will be Apple’s new writing tools, which can proofread, summarize, and rewrite text. Developers will be shown how to incorporate these tools into their apps, offering users advanced text manipulation features.

The creative potential of Genmoji will also be explored, with sessions on how to generate custom emojis to enhance user engagement and personalization.

Apple is pushing the boundaries of performance with sessions on accelerating machine-learning tasks using Metal, Apple’s graphics framework. Developers will also discover new capabilities within Swift and the Vision framework, crucial for integrating advanced image recognition features.
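As a rough sketch of the Metal-based acceleration path those sessions cover, the skeleton below acquires the GPU and builds a compute pipeline. The kernel name `matmul_kernel` is hypothetical (it would be a compute function compiled into the app’s Metal library); the device, queue, and pipeline calls are standard Metal API.

```swift
import Metal

// Acquire the system GPU and a queue for submitting work.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not available on this device")
}

// "matmul_kernel" is a placeholder compute function assumed to exist
// in the app's default Metal shader library.
let library = device.makeDefaultLibrary()
if let function = library?.makeFunction(name: "matmul_kernel"),
   let pipeline = try? device.makeComputePipelineState(function: function) {
    // From here, encode the kernel into a command buffer and
    // dispatch threadgroups sized to the workload.
    _ = (queue, pipeline)
}
```

Frameworks like Metal Performance Shaders layer optimized ML primitives on top of this same device/queue/pipeline model.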

Finally, the new Translation API will be unveiled. It will help developers build apps that seamlessly translate text and speech, making applications more inclusive and accessible.

Also: Apple unveils an on-device AI image generator for iPhone, iPad, and Mac

By equipping developers with these resources, Apple is ensuring that the potential of Apple Intelligence can be fully realized across its ecosystem, driving innovation and enhancing user experiences.

Did Apple go far enough with AI improvements?

Despite the exciting announcements, there are still some gaps. Apple introduced new APIs and enhancements, and the upcoming developer sessions will provide the necessary tools, frameworks, and training. However, there was a missed opportunity for broader third-party integration, especially in key areas such as health and finance. After developers kick the tires on Apple Intelligence this fall, these integrations may arrive later, after the iOS 18 release.

While enhancements across Apple services like Apple Music, News, Health, Fitness+, and HomeKit were implied, they were not extensively covered. We expect these details to emerge with later iOS 18 betas.

Apple’s WWDC 2024 announcements align with several key recommendations but fall short in broader third-party integration, proactive assistance, and ethical AI practices. However, the extensive developer sessions planned for the conference suggest that Apple is serious about equipping developers with the tools and knowledge they need to use these new AI capabilities.

Addressing the remaining gaps could enhance Apple’s competitive position in the AI race, providing a more robust and user-centric AI ecosystem. By continuing to innovate and improve in these areas, Apple can set new benchmarks and lead the future of AI-driven technology.


  • Title: Revolutionary Health Screening Unveiled: From Restroom Entry to Instant Diagnosis - Insights
  • Author: Brian
  • Created at : 2024-12-22 18:34:19
  • Updated at : 2024-12-27 16:20:57
  • Link: https://tech-savvy.techidaily.com/revolutionary-health-screening-unveiled-from-restroom-entry-to-instant-diagnosis-insights/
  • License: This work is licensed under CC BY-NC-SA 4.0.