WWDC 2025: Apple Intelligence Models Opened to Developers, Live Translation Feature Unveiled

Apple on Monday made several Artificial Intelligence (AI) announcements, branded as Apple Intelligence, at the Worldwide Developers Conference (WWDC) 2025. During the keynote, the company recapped its existing AI features and unveiled new ones that are now available for testing and will roll out to users later this year. These new features include Live Translation, Workout Buddy on Apple Watch, ChatGPT integration in Visual Intelligence, and updates to Genmoji, Image Playground, and the AI capabilities in Shortcuts.

Apple brings the Foundation Models framework to developers

Craig Federighi, Apple's Senior Vice President (SVP) of Software Engineering, announced that the tech giant is now opening access to its on-device foundation models to third-party app developers. These are the same AI models that power many Apple Intelligence features. Through the Foundation Models framework, developers can use Apple's proprietary models to build entirely new apps or to add new features to their existing apps.

Apple highlighted that since these are on-device models, the AI capabilities will work even when a device is offline. Notably, this also ensures that user data never leaves the device. Developers, meanwhile, will not have to pay any application programming interface (API) costs for cloud inference. The framework natively supports Swift, letting developers call the AI models with just a few lines of code. It also supports guided generation, tool calling, and more.
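For illustration, here is a minimal Swift sketch of what calling an on-device model might look like. The session type, the respond call, and the @Generable guided-generation macro reflect the Foundation Models framework as Apple presented it at WWDC 2025, but the exact names and signatures shown here should be treated as assumptions rather than final API.

import FoundationModels

// Guided generation: ask the on-device model to fill a typed
// Swift structure instead of returning free-form text.
// (@Generable and @Guide come from the Foundation Models
// framework shown at WWDC 2025; exact signatures are assumptions.)
@Generable
struct TripIdea {
    @Guide(description: "A short, catchy title for the trip")
    var title: String

    @Guide(description: "Three suggested activities")
    var activities: [String]
}

func suggestTrip(for city: String) async throws -> TripIdea {
    // A session wraps the on-device language model. Because the
    // model runs locally, this works offline, involves no API
    // cost, and the prompt never leaves the device.
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest a weekend trip idea for \(city).",
        generating: TripIdea.self
    )
    return response.content
}

Guided generation is the notable design choice here: rather than parsing free-form model output, developers describe the shape of the result and the framework constrains the model to produce it.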

New Apple Intelligence Features

Federighi acknowledged that Siri will not be getting the advanced AI features promised at last year's WWDC just yet, and said Apple will share more information about them in the coming year. However, the Cupertino-based tech giant is planning to ship several other Apple Intelligence features this year.

Live Translation

The biggest new arrival is Live Translation. The AI-powered feature is being integrated into the Messages, FaceTime, and Phone apps to let users easily communicate with people who speak a different language. It runs entirely on-device, which means conversations never leave users' devices.

In the Messages app, Live Translation will translate messages automatically. Users will see an option to translate their messages as they type, and can then send them to friends and colleagues in the language those recipients speak and understand. Similarly, when a user receives a message in a different language, the feature will instantly translate it.

On FaceTime calls, the feature will automatically add live captions in the user's language so they can follow along. During phone calls, Live Translation will translate what a person says in real time and speak it aloud.

Visual Intelligence

Apart from Live Translation, Apple is also updating Visual Intelligence. iPhone users can now ask ChatGPT questions about what their device's camera is looking at. The OpenAI chatbot will recognise what the user is viewing and use that context to answer questions. It can also search apps such as Google and Etsy to find similar images and products. Additionally, users can search for a product online simply by pointing their camera at it.

Apple says Visual Intelligence can also recognise when a user is looking at an event and automatically suggest adding it to their calendar.

In addition, by pressing the same buttons used to take a screenshot, users can now conveniently share the image and ask questions about it.

Workout Buddy

The Apple Watch is also getting an AI feature. Dubbed Workout Buddy, the new workout experience draws on a user's workout data and fitness history to generate personalised motivational insights while they exercise. The feature collects and analyses data such as heart rate, pace, distance, personal fitness milestones, and more.

The company's new text-to-speech (TTS) model then turns these insights into voice output. Apple says the voices were created using data from its Fitness+ trainers to deliver the right energy, style, and tone for a workout.

Workout Buddy will be available on Apple Watch with Bluetooth headphones, and it also requires an Apple Intelligence-supported iPhone nearby. The feature will first be available in English for select workout types, such as outdoor and indoor running and walking, outdoor cycling, high-intensity interval training (HIIT), and functional and traditional strength training.

Genmoji and Image Playground

Genmoji and Image Playground are also being updated this year. In Genmoji, users will now be able to mix emoji together and add a text prompt to create new variations. Users can also change expressions and individual attributes (such as hairstyle) when creating images inspired by family and friends using Genmoji and Image Playground.

Image Playground is also being integrated with ChatGPT to offer new image styles. Users will be able to tap Any Style and describe what they have in mind. The description is then sent to ChatGPT, which creates the image. Users must consent before this data is shared with the OpenAI chatbot.
