The Greta platform simulates virtual agents able to communicate verbally and nonverbally with human users and/or other virtual agents. Given a set of intentions and emotions to be communicated, the platform instantiates them into sequences of synchronized nonverbal behaviors. It can be used to compute these multimodal behaviors when the virtual agent acts as a speaker or as a listener.
The Greta system allows a virtual or physical (e.g. robotic) embodied conversational agent (ECA) to communicate with a human user (Ochs et al., 2013; Niewiadomski et al., 2011). The global architecture of the system is depicted in Figure 1. It is a SAIBA-compliant architecture (SAIBA is a common framework for the autonomous generation of multimodal communicative behavior in embodied conversational agents (Kopp et al., 2006)). Its three main components are: (1) an Intent Planner that produces the communicative intentions and handles the emotional state of the agent; (2) a Behavior Planner that transforms the communicative intentions it receives into multimodal signals; and (3) a Behavior Realizer that produces the movements and rotations for the joints of the ECA.
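The three-stage SAIBA pipeline above can be sketched as a minimal data flow: intentions go in, multimodal signals come out of the Behavior Planner, and per-joint keyframes come out of the Behavior Realizer. This is an illustrative sketch only; the class names (`Intention`, `Signal`), the example modalities, and the one-second timing are assumptions, not the actual Greta API or its FML/BML representations.

```python
from dataclasses import dataclass


@dataclass
class Intention:
    """A communicative intention produced by the Intent Planner,
    e.g. a greeting coloured by an emotional state. (Hypothetical type.)"""
    name: str
    emotion: str = "neutral"


@dataclass
class Signal:
    """A multimodal signal produced by the Behavior Planner:
    a modality plus a value and timing. (Hypothetical type.)"""
    modality: str
    value: str
    start: float
    end: float


def plan_behavior(intentions):
    """Behavior Planner sketch: map each intention to synchronized
    multimodal signals (here, one facial and one gestural signal,
    arbitrarily one second long each)."""
    signals = []
    t = 0.0
    for intent in intentions:
        signals.append(Signal("face", intent.emotion, t, t + 1.0))
        signals.append(Signal("gesture", intent.name, t, t + 1.0))
        t += 1.0
    return signals


def realize_behavior(signals):
    """Behavior Realizer sketch: turn each signal into a keyframe
    for one joint of the ECA (the joint mapping is invented)."""
    keyframes = []
    for s in signals:
        joint = "head" if s.modality == "face" else "r_wrist"
        keyframes.append({"joint": joint, "time": s.start, "pose": s.value})
    return keyframes


# Full pipeline: Intent Planner output -> Behavior Planner -> Behavior Realizer
intents = [Intention("greet", emotion="joy")]
frames = realize_behavior(plan_behavior(intents))
print(frames)
```

The point of the sketch is the strict staging: each component consumes only the representation emitted by the previous one, which is what makes the SAIBA components interchangeable.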
Greta is embedded within the SEMAINE and ARIA-VALUSPA platforms, which can be downloaded from:
The databases are available upon request, after contacting the author and signing an EULA (End User License Agreement).