Biography
Mehdi Khamassi is a research director employed by the Centre National de la Recherche Scientifique (CNRS) and working at the Institute of Intelligent Systems and Robotics (ISIR), on the campus of Sorbonne Université, Paris, France. He has a double background in Computer Science (Engineering diploma in 2003 from Ecole Nationale Supérieure d’Informatique pour l’Industrie et l’Entreprise, Evry, with a specialization in Artificial Intelligence and Statistical Modeling) and Cognitive Sciences (Cogmaster in 2003 from Université Pierre et Marie Curie (UPMC), Paris). He then pursued a PhD in Cognitive Neuroscience at UPMC and Collège de France between 2003 and 2007, and was recruited by the CNRS as a permanent researcher in 2010. He has been a co-organizer of the Symposium on Biology of Decision-Making (SBDM) conference since 2012. Since 2015, he has also been a member of the pedagogical council of the CogMaster program at Ecole Normale Supérieure (PSL) / EHESS / Université Paris Cité, where he formerly served as co-director of studies and is now co-responsible for the modeling major. Since 2024, he has been co-director of the master’s program in cognitive science of Sorbonne Université and Université Paris Cité. He is editor-in-chief of Intellectica and serves as an editor for ReScience X and Neurons, Behavior, Data analysis and Theory. His main research topics include decision-making and reinforcement learning in robots and humans, the role of social and non-social rewards in learning, and the ethical questions raised by autonomous decision-making in machines. His main methods are computational modeling, the design of new neuroscience experiments to test model predictions, analysis of experimental data, the design of AI algorithms for robots, and behavioral experimentation with humans, non-human animals and robots.
Research activities
My work is at the interface between Cognitive Science (understanding the human mind), Neuroscience (understanding how the brain works), Artificial Intelligence (designing algorithms enabling an agent to make sense of its perception, to act and to learn), and Robotics (designing bio-inspired robots that can interact more naturally with humans, especially for healthcare applications).
The goal of my research is twofold: (1) To better understand how decision-making and reinforcement learning processes are organized in the mammalian brain: What are the underlying neural mechanisms in the prefrontal cortex, basal ganglia, hippocampus, and dopamine system? How do they enable humans to adapt so flexibly to new situations? Why and how are they impaired in some neurodegenerative diseases or psychiatric conditions? (2) To take inspiration from biology to improve the decision-making flexibility and autonomy of current robots. Among our current healthcare applications, we use small social robots as assistive tools in therapies for children with autism: the robot is playful and interactive, which helps engage the child in the therapy and mediates and encourages their interactions with other children.
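To give a concrete, if highly simplified, idea of the kind of computational model involved, the sketch below implements a standard model-free reinforcement learning rule (tabular Q-learning with softmax action selection) on a simple multi-armed bandit task; the reward prediction error it computes is the quantity classically compared to phasic dopamine signals. This is only an illustrative Python example, not one of our actual models, and the parameter values (alpha, beta, reward probabilities) are arbitrary.

    import random
    import math

    def softmax(q_values, beta):
        """Softmax (Boltzmann) action selection: higher beta means greedier choices."""
        exps = [math.exp(beta * q) for q in q_values]
        total = sum(exps)
        probs = [e / total for e in exps]
        r = random.random()
        cumulative = 0.0
        for action, p in enumerate(probs):
            cumulative += p
            if r < cumulative:
                return action
        return len(probs) - 1

    def q_learning_bandit(reward_probs, n_trials=1000, alpha=0.1, beta=3.0):
        """Tabular Q-learning on a multi-armed bandit.
        reward_probs: probability of a unit reward for each action.
        The prediction error delta is the quantity often related to
        phasic dopamine activity in model-free accounts of learning."""
        q = [0.0] * len(reward_probs)
        for _ in range(n_trials):
            action = softmax(q, beta)
            reward = 1.0 if random.random() < reward_probs[action] else 0.0
            delta = reward - q[action]   # reward prediction error
            q[action] += alpha * delta   # incremental value update
        return q

    # After learning, the action with the highest reward probability
    # should have acquired the highest learned value.
    print(q_learning_bandit([0.2, 0.8, 0.5]))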
One of our central research questions is whether similar learning mechanisms and similar reward-processing principles apply to both social and non-social contexts. This is key, on the one hand, to better understand what is special about the social dimension of learning mechanisms in the brain, and, on the other hand, to establish more adaptive and efficient human-robot interactions.
Keywords: reinforcement learning; decision-making; set-shifting; self-evaluation; structure learning; navigation; prefrontal cortex; basal ganglia; dopamine; hippocampus; machine learning; computational neuroscience; autonomous robotics; social robotics; autism; cognitive architectures; artificial intelligence.