
Keynote Lectures

Socially Intelligent Robotics
Vanessa Evers, University of Twente, Netherlands

Efficient Reasoning with Rules and Ontologies
João Leite, Universidade Nova de Lisboa, Portugal

Learning Tasks in Robotics: Problems and Solutions
Nuno Lau, Universidade de Aveiro, Portugal

Ethical Embodied Decision Making
Francesca Rossi, IBM, United States and University of Padova, Italy


Socially Intelligent Robotics

Vanessa Evers
University of Twente

Brief Bio
Vanessa Evers is a full Professor of Human Media Interaction at the University of Twente.
Her research focuses on Human Interaction with Autonomous Agents such as robots or machine learning systems and cultural aspects of Human Computer Interaction. She specifically likes to take theories on human behaviour from social psychology and see if similar processes occur when we interact with technology. She is best known for her work on social robotics such as the FROG robot (fun robotic outdoor guide), that can interpret human behaviour automatically and respond to people in a socially acceptable way.
She received an M.Sc. in Information Systems from the University of Amsterdam and a Ph.D. from the Open University, UK. During her Master's studies she spent two years at the Institute of Management Information Studies of the University of New South Wales, Sydney. After her Ph.D. she worked for the Boston Consulting Group in London and later became an assistant professor at the University of Amsterdam's Institute of Informatics. She was a visiting researcher at Stanford University (2005-2007). She has published over 80 peer-reviewed publications, many of them in high-quality journals and conferences in human-computer interaction and human-robot interaction. She serves on the Program Committees of HRI, CHI, CSCW and ACM Multimedia.

The classic image in the psychology of Human-Robot Interaction is that of a person who is focused and eager to learn how to work with or control a robot. The roboticist's job is then primarily to avoid errors in detection, manipulation, navigation, decision making, planning and so on, in order to optimize human-robot collaboration.

In this talk I will argue that social norms embedded in people, robots and the context in which the robots are used make this approach obsolete. Specifically, I will address the following questions:
  • How do people understand robot behaviours?
  • What do we know about people and robots collaborating?
  • Can a robot understand human social behaviours?
  • How does knowledge about human social relationships necessitate a change in our thinking about how humans should be modelled?
  • How can the design of robots and their behaviour improve acceptance of robots in everyday environments such as our homes, airports, museums, schools, roads, and hospitals? 

Through examples of practical deployment of robots, I will explore the fundamentally social relationship people have with autonomous robots and offer essential rules for effective human-robot collaboration.



Efficient Reasoning with Rules and Ontologies

João Leite
Universidade Nova de Lisboa

Brief Bio

João Leite is Associate Professor at the Computer Science Department of the Universidade Nova de Lisboa, Portugal, and a member of the Nova Laboratory for Computer Science and Informatics (NOVA LINCS). João's main research interests include Knowledge Representation and Non-Monotonic Reasoning, Multi-Agent Systems, the Semantic Web, and Argumentation for the Social Web. He has authored one book, edited several books and journal special issues, co-authored more than 100 papers, and presented more than 10 courses and tutorials at Conferences and Summer Schools. He was Conference Chair of JELIA-2004, Program Committee Co-Chair of JELIA-2014, and Co-Chair of several editions of the CLIMA, LADS and DALT workshops. He regularly serves on the Program Committees of major international conferences (IJCAI, AAAI, KR, AAMAS, ECAI, ICLP, ...).

Ontology languages based on Description Logics and reasoning rules based on Logic Programming are both well-known knowledge representation and reasoning formalisms, each with its own distinct benefits and features, which are largely orthogonal to each other. Both appear in the Semantic Web stack as distinct standards – OWL and RIF – and over the last decade considerable research effort has gone into providing a framework that integrates the two, which would lay the grounds for modern real-world applications that need to integrate, and efficiently reason with, hybrid knowledge bases written in these distinct formalisms. This has proved quite challenging, not only because of the semantic mismatch between the two formalisms, but also because of complexity issues. Additionally, such applications require mechanisms for keeping these hybrid knowledge bases up to date by incorporating new and possibly conflicting information, which poses yet another significant challenge, mostly because of the fundamental mismatch between the belief revision methods employed in Description Logics and in Logic Programming, which makes them seem irreconcilable.
In this talk I will overview recent developments that provide the formal foundation for the efficient integration of rules and ontologies, present NoHR (http://nohr.di.fct.unl.pt/) – a tool that automates query answering over knowledge bases composed of an ontology written in OWL 2 EL or QL together with a set of reasoning rules – and discuss some recent promising results towards the development of automated update mechanisms for these hybrid knowledge bases.
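To make the two halves of a hybrid knowledge base concrete, here is a toy sketch (not NoHR's actual algorithm, and with all predicate names invented for illustration): a monotonic ontology part, a taxonomy closed under subclass axioms, combined with a rule that uses default negation, which no pure Description Logic can express.

```python
# Toy hybrid knowledge base: ontology-style subclass axioms plus a
# nonmonotonic rule with default negation. Illustrative only.

subclass = {"Penguin": "Bird", "Bird": "Animal"}   # ontology part (TBox-like)
facts = {("Penguin", "tweety")}                    # assertions (ABox-like)

def classes_of(individual):
    """Close the individual's classes under the subclass hierarchy
    (the monotonic, ontology side of the reasoning)."""
    result = {cls for cls, ind in facts if ind == individual}
    changed = True
    while changed:
        changed = False
        for cls in list(result):
            sup = subclass.get(cls)
            if sup and sup not in result:
                result.add(sup)
                changed = True
    return result

def flies(individual):
    """Rule part with default negation:
       flies(X) <- Bird(X), not Penguin(X)."""
    cs = classes_of(individual)
    return "Bird" in cs and "Penguin" not in cs

print(sorted(classes_of("tweety")))   # ['Animal', 'Bird', 'Penguin']
print(flies("tweety"))                # False: the default is blocked
```

The interesting interaction is exactly the one the abstract describes: the rule's conclusion depends on what the ontology derives, while the `not Penguin(X)` condition behaves nonmonotonically, so adding ontology facts can retract rule conclusions.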



Learning Tasks in Robotics: Problems and Solutions

Nuno Lau
Universidade de Aveiro

Brief Bio

Nuno Lau is Assistant Professor at Aveiro University, Portugal, and Researcher at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), where he leads the Intelligent Robotics and Systems group (IRIS). He received his Electrical Engineering degree from Oporto University in 1993, a DEA degree in Biomedical Engineering from Claude Bernard University, France, in 1994, and his PhD from Aveiro University in 2003. His research interests are focused on Intelligent Robotics, Artificial Intelligence, Multi-Agent Systems and Simulation.
Nuno Lau has participated in more than 15 international and national research projects, serving as general or local coordinator in about half of them. He has won more than 50 scientific awards in robotic competitions, conferences (best papers) and education. He has lectured PhD- and MSc-level courses on Intelligent Robotics, Distributed Artificial Intelligence, Computer Architecture, Programming, etc. Nuno Lau is the author of more than 180 publications in international conferences and journals. He is currently the President of the Portuguese Robotics Society.

Machine Learning and optimization techniques are now widely used in many scientific disciplines. Joining Robotics and Artificial Intelligence provides the framework to actually perform challenging tasks in real, non-structured environments. Using Machine Learning in Robotics is very challenging, as robots may be quite expensive and fragile, and the time and effort needed to collect data is, in general, quite high. Considering these premises, we have developed several techniques that, through the use of simulators and adapted learning/optimization algorithms that use data very efficiently, make Learning in Robotics an effective alternative to hand-coded approaches.
This talk will present some of these techniques, namely those related to the Q-Batch update rule, model-based learning, black-box optimization in several contexts, and adapted interfaces. Although the focus will be on applying these techniques to develop skills and interaction for robotic agents, they can also be used in other types of agents.
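The data-efficiency concern above can be illustrated with plain tabular batch Q-learning: transitions are collected once (the expensive step on a real robot) and then replayed many times. This sketch is a generic textbook update, not the Q-Batch rule discussed in the talk, and the toy chain environment is invented for the example.

```python
# Generic tabular batch Q-learning on a toy 4-state chain.
# NOT the Q-Batch update rule; shown only to illustrate reusing
# a fixed batch of experience instead of fresh robot rollouts.
import random

N_STATES, ACTIONS = 4, (0, 1)   # action 0: move left, 1: move right
GAMMA, ALPHA = 0.9, 0.5

def step(s, a):
    """Deterministic chain: reward 1 only on the last state."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0

# Collect a batch of transitions once...
random.seed(0)
batch = []
for _ in range(200):
    s = random.randrange(N_STATES)
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    batch.append((s, a, r, s2))

# ...then sweep over it repeatedly, squeezing more out of the same data.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(50):
    for s, a, r, s2 in batch:
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print(policy)   # greedy policy moves right toward the rewarding state
```

On a physical robot the batch would come from logged runs or a simulator rather than random sampling; the replay loop is what lets a small, costly dataset support many learning updates.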



Ethical Embodied Decision Making

Francesca Rossi
IBM, United States and University of Padova

Brief Bio
Francesca Rossi is a research scientist at the IBM T.J. Watson Research Center and a professor of computer science at the University of Padova, Italy, currently on leave.
Her research interests focus on artificial intelligence; they include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behaviour of AI systems, in particular in decision support systems for group decision making. She has published over 170 scientific articles in journals, conference proceedings, and book chapters, and has co-authored a book. She has edited 17 volumes, including conference proceedings, collections of contributions, and special issues of journals, as well as the Handbook of Constraint Programming.
She is an AAAI and a EurAI fellow, and was a 2015 Radcliffe fellow. She has been president of IJCAI and an executive councillor of AAAI. She is Associate Editor in Chief of JAIR and a member of the editorial boards of Constraints, Artificial Intelligence, AMAI, and KAIS. She co-chairs the AAAI committee on AI and ethics and is a member of the scientific advisory board of the Future of Life Institute. She serves on the executive committee of the IEEE global initiative on ethical considerations in the development of autonomous and intelligent systems, and she belongs to the World Economic Forum Council on AI and Robotics.
She has given several media interviews about the future of AI and AI ethics (including to the Wall Street Journal, the Washington Post, Motherboard, Science, The Economist, CNBC, Eurovision, Corriere della Sera, and Repubblica) and she has delivered three TEDx talks on these topics.

Decision making is a ubiquitous task in our lives. However, most of the time humans are not very good at it, because of cognitive biases and difficulties in handling data. Intelligent decision support systems are intended to help us in this respect.
To build an effective human-machine symbiotic system with the capability to make optimal decisions, we advocate for an embodied environment where humans are immersed. Moreover, we need humans to trust such systems.
To build the right level of trust, we need to be sure that they act in a morally acceptable way. Therefore, we need to be able to embed ethical principles (as well as social norms, professional codes, etc.) into these systems.
Existing preference modelling and reasoning frameworks can be a starting point, since they define priorities over actions, just like an ethical theory does. However, much work is still needed to understand how to mix preferences (which are at the core of decision making) and morality, both at the individual level and in a social context.
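One simple (and certainly not the only) way to combine the two orderings mentioned above is lexicographic: the ethical code acts as a hard filter over actions, and ordinary preferences rank whatever remains. The sketch below is a toy illustration with invented names, not a framework from the talk.

```python
# Toy sketch: ethical acceptability as a hard filter, preferences
# as a ranking over the remaining actions. All names are invented.

def decide(actions, ethically_permitted, preference_score):
    """Pick the most preferred action among the ethically permitted ones."""
    permitted = [a for a in actions if ethically_permitted(a)]
    if not permitted:
        return None            # no morally acceptable option exists
    return max(permitted, key=preference_score)

# Example: the agent prefers the fast route, but the ethical code
# forbids actions that put pedestrians at risk.
actions = ["fast_route", "slow_route"]
permitted = lambda a: a != "fast_route"        # fast route deemed risky
score = {"fast_route": 10, "slow_route": 3}.get
print(decide(actions, permitted, score))       # slow_route
```

The open research question the abstract points to is exactly where this sketch is too crude: real ethical theories are themselves graded orderings rather than hard filters, so merging them with preferences is not a simple two-stage pipeline.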