NVIDIA introduced production microservices for the NVIDIA Avatar Cloud Engine (ACE) that allow developers of games, tools and middleware to integrate generative AI models into the digital avatars in their games and applications. The new ACE microservices let developers build interactive avatars using AI models such as NVIDIA Omniverse Audio2Face (A2F), which creates expressive facial animations from audio sources, and NVIDIA Riva automatic speech recognition (ASR), for building customizable multilingual speech and translation applications using generative AI.

Developers embracing ACE include Charisma.AI, Convai, Inworld, miHoYo, NetEase Games, Ourpalm, Tencent, Ubisoft and UneeQ. Top game and interactive avatar developers are pioneering ways ACE and generative AI technologies can be used to transform interactions between players and non-playable characters (NPCs) in games and applications.

NPCs have historically been designed with predetermined responses and facial animations. This limited player interactions, which tended to be transactional, short-lived and, as a result, skipped by a majority of players.

To showcase how ACE can transform NPC interactions, NVIDIA worked with Convai to expand the NVIDIA Kairos demo, which debuted at Computex, with a number of new features and the inclusion of ACE microservices. In the latest version of Kairos, Riva ASR and A2F are used extensively, improving NPC interactivity.

Convai's new framework allows NPCs to converse among themselves and gives them awareness of objects, enabling them to pick up and deliver items to desired areas. Furthermore, NPCs gain the ability to lead players to objectives and traverse worlds. The Audio2Face and Riva automatic speech recognition microservices are available now.

Interactive avatar developers can incorporate the models individually into their development pipelines.
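As an illustration of what incorporating one of these models might look like, below is a minimal sketch that transcribes a recorded player utterance with a Riva ASR microservice via NVIDIA's Python client (the nvidia-riva-client package). The server address, audio file name and configuration values are assumptions for the example, not details from the announcement.

```python
# Minimal sketch: offline transcription against a Riva ASR microservice.
# Assumes `pip install nvidia-riva-client` and a Riva server reachable at
# localhost:50051; the endpoint and audio file are hypothetical placeholders.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")  # hypothetical endpoint
asr_service = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,          # must match the input audio
    language_code="en-US",            # Riva ASR supports multiple languages
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

# Read the raw audio and request a single offline (batch) transcription.
with open("player_utterance.wav", "rb") as f:  # hypothetical file
    audio_bytes = f.read()

response = asr_service.offline_recognize(audio_bytes, config)
for result in response.results:
    print(result.alternatives[0].transcript)
```

In a game pipeline, a transcript produced this way could be passed to a dialogue model to generate the NPC's reply, with the resulting speech audio then driving facial animation through A2F.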