Keynote Speakers



Been Kim - Google Brain

Interpretability for everyone

In this talk, I will share some of my reflections on the progress made in the field of interpretable machine learning. We will consider where we are going as a field and what we need to be aware of to make progress. With that perspective, I will then discuss some of my work on 1) sanity-checking popular methods and 2) developing more layperson-friendly interpretability methods.
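To give a flavour of what "sanity checking" an interpretability method can mean, here is a minimal, purely illustrative sketch: it compares gradient-based attributions from a model against attributions from the same architecture with randomized weights. The model, data, and saliency method are assumptions for illustration, not the speaker's actual procedure.

```python
# Illustrative sanity check: if attributions barely change when the model's
# weights are randomized, the explanation may not reflect what was learned.
import torch
import torch.nn as nn

def gradient_saliency(model, x):
    """Return |d output / d input| as a simple attribution map."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad.abs().detach()

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(1, 10)

# In a real check this model would be trained; here it is a stand-in.
saliency_model = gradient_saliency(model, x)

# Same architecture, randomized weights.
randomized = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
for p in randomized.parameters():
    nn.init.normal_(p)
saliency_random = gradient_saliency(randomized, x)

# A method that passes this check should give noticeably different maps.
similarity = torch.nn.functional.cosine_similarity(
    saliency_model.flatten(), saliency_random.flatten(), dim=0)
print(f"cosine similarity between attribution maps: {similarity.item():.3f}")
```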



Doina Precup - McGill University & DeepMind Montreal

Building Knowledge For AI Agents With Reinforcement Learning

Reinforcement learning allows autonomous agents to learn how to act in a stochastic, unknown environment with which they can interact. Deep reinforcement learning, in particular, has achieved great success in well-defined application domains, such as Go or chess, in which an agent has to learn how to act and there is a clear success criterion. In this talk, I will focus on the potential role of reinforcement learning as a tool for building knowledge representations in AI agents whose goal is to perform continual learning. I will examine a key concept in reinforcement learning, the value function, and discuss its generalization to support various forms of predictive knowledge. I will also discuss the role of temporally extended actions, and their associated predictive models, in learning procedural knowledge. To tame the complexity of the exploration needed to build knowledge, reinforcement learning agents can use the concepts of intents (i.e., intended consequences of actions) and affordances (which capture knowledge about the states in which an action is applicable). Finally, I will discuss the challenge of how to evaluate reinforcement learning agents whose goal is not just to control their environment, but also to build knowledge about their world.
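As a small illustration of the idea that a value function can encode predictive knowledge, here is a minimal tabular TD(0) sketch that learns to predict the discounted sum of an arbitrary cumulant (not necessarily the task reward) along the agent's experience. The toy environment, cumulant, and hyper-parameters are assumptions for illustration, not the speaker's formulation.

```python
# Tabular TD(0) learning a prediction V(s) of a discounted cumulant.
import random

n_states = 5
values = [0.0] * n_states      # V(s): predicted discounted cumulant
alpha, gamma = 0.1, 0.9

def step(state):
    """Toy random walk; the 'cumulant' is 1 when the last state is reached."""
    next_state = min(n_states - 1, max(0, state + random.choice([-1, 1])))
    cumulant = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, cumulant

state = 0
for _ in range(10_000):
    next_state, cumulant = step(state)
    # TD(0) update: V(s) <- V(s) + alpha * (c + gamma * V(s') - V(s))
    td_error = cumulant + gamma * values[next_state] - values[state]
    values[state] += alpha * td_error
    state = 0 if next_state == n_states - 1 else next_state

print([round(v, 2) for v in values])
```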



Gemma Galdon-Clavell - Eticas Research & Consulting

Algorithmic Auditing: how to open the black box of ML

As algorithms proliferate, so do concerns over how they work and their impacts. In the last few years, laws and regulations have been drafted calling for AI to be transparent, accountable, and explainable. However, it is still unclear how these principles can be translated into practice, and the standards that will guide the algorithms of tomorrow are still in the making. In this presentation, Gemma Galdon, founder of Eticas Consulting, will share the experience of auditing algorithms as a way to open the black box of AI processes. Drawing on this specific tool and the Eticas team's experience in auditing algorithms, she will present the many sources of bias and inefficiency in ML systems.
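As one small, hypothetical example of the kind of check an algorithmic audit might run, the sketch below compares a system's positive-decision rates across two groups (a disparate-impact style measure). The data, group labels, and the 0.8 threshold are illustrative assumptions, not Eticas' actual auditing protocol.

```python
# Compare positive-decision rates across groups for a hypothetical system.
decisions = [  # (group, model_decision) pairs
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def positive_rate(group):
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = positive_rate("A"), positive_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:  # the common "four-fifths" rule of thumb
    print("warning: decision rates differ substantially across groups")
```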



Max Welling - University of Amsterdam

Amortized and Neural Augmented Inference

Amortized inference is the process of learning to perform inference from many related tasks. The Variational Autoencoder (VAE) employs an amortized inference network, otherwise known as the encoder. Amortization is a powerful concept that also applies to learning itself (learning to learn, or meta-learning) and to optimization (learning to optimize with reinforcement learning). In this talk we will develop hybrid amortized methods that combine classical learning, inference, and optimization algorithms with learned neural networks. The learned neural network augments or corrects the classical solution, or, conversely, the neural network is fed useful information computed by a classical method. We further extend these ideas to amortized causal inference, where we learn from data to discover causal relations. Neural augmentation is applied to problems in MRI image reconstruction, LDPC decoding, and MIMO detection.
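To make the notion of an amortized inference network concrete, here is a minimal sketch of a VAE-style encoder: rather than optimizing a separate approximate posterior per data point, a single network maps any observation x to the parameters of q(z | x) and is trained jointly with the decoder. The architecture and sizes below are illustrative assumptions.

```python
# Minimal amortized inference network q(z | x) as used in a VAE encoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an observation x to the mean and log-variance of q(z | x)."""
    def __init__(self, x_dim=784, z_dim=20, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.log_var = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

encoder = Encoder()
x = torch.rand(8, 784)                      # a batch of observations
mu, log_var = encoder(x)                    # posterior parameters, amortized
z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterized sample
print(z.shape)  # torch.Size([8, 20])
```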



Stephan Günnemann - Technical University of Munich

Can you trust your GNN? -- Certifiable Robustness of Machine Learning Models for Graphs

Graph neural networks have achieved impressive results in various graph learning tasks, and they have found their way into many applications such as molecular property prediction, cancer classification, fraud detection, or knowledge graph reasoning. Despite their proliferation, studies of their robustness properties are still very limited -- yet, in the domains where graph learning methods are often used, the data is rarely perfect and adversaries (e.g., on the web) are common. Specifically, in safety-critical environments and decision-making contexts involving humans, it is crucial to ensure the GNNs' reliability. In my talk, I will shed light on the aspect of robustness for state-of-the-art graph-based learning techniques. I will highlight the unique challenges and opportunities that come along with the graph setting, and I will showcase these methods' vulnerabilities. Based on these insights, I will discuss different principles allowing us to certify robustness, giving us provable guarantees about a GNN's behavior, and ways to improve their reliability.
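As a toy illustration of the question a robustness certificate answers, the sketch below exhaustively checks whether a target node's prediction from a tiny graph model stays the same under every single-edge flip. Exhaustive enumeration only works on tiny graphs; certification methods aim to provide such guarantees without enumerating all perturbations. The model, graph, and features are assumptions for illustration, not the speaker's methods.

```python
# Brute-force stability check of one node's prediction under single-edge flips.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feat, n_classes = 6, 4, 2
features = rng.normal(size=(n_nodes, n_feat))
weights = rng.normal(size=(n_feat, n_classes))   # stand-in for a trained layer
adj = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]:
    adj[i, j] = adj[j, i] = 1.0

def predict(adjacency, node):
    """One round of mean-neighbourhood aggregation, then a linear classifier."""
    deg = adjacency.sum(axis=1, keepdims=True) + 1.0   # +1 for the self-loop
    agg = (adjacency + np.eye(n_nodes)) @ features / deg
    return int(np.argmax(agg[node] @ weights))

target = 0
clean_pred = predict(adj, target)

robust = True
for i, j in itertools.combinations(range(n_nodes), 2):
    perturbed = adj.copy()
    perturbed[i, j] = perturbed[j, i] = 1.0 - perturbed[i, j]   # flip one edge
    if predict(perturbed, target) != clean_pred:
        robust = False
        break

print(f"prediction {clean_pred} stable under all single-edge flips: {robust}")
```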