PhD Position in Explainable AI

Deep Learning methods for perception and control are highly effective and now widely used for many tasks. However, in many application areas we would like to be able to validate their decision-making process, and their lack of human interpretability is a problem in this regard. This PhD project will therefore study explainable models and decision making in deep learning. Potential approaches could include distilling deep models into interpretable approximations such as decision trees, or learning modular deep networks where individual modules have interpretable functionality.

Eligibility: UK residents. Start Date: September 2019 or ASAP thereafter.

Informal Enquiries: t dot hospedales at ed.ac.uk, lsevilla at exseed dot ed dot ac dot uk

PhD Position in Reliable Deep Learning

Deep Learning methods for perception and control are highly effective for many tasks. However, in many applications it is unavoidable that practical deployments will involve data with different statistics from the training data, or even adversarial attacks, and performance degrades significantly as a result. This project will investigate how to guarantee a deep network's performance in these kinds of situations. Potential approaches could include meta-learning for robustness and adaptation to domain shift, or theoretical analysis of neural network robustness to adversarial attack through learning theory. Potential application domains include deep learning for computer vision as well as deep reinforcement learning for robot control.

Eligibility: EU/UK residents. Start Date: September 2019 or ASAP thereafter.

Informal Enquiries: t dot hospedales at ed.ac.uk, lsevilla at exseed dot ed dot ac dot uk
Interns, Undergraduates

Unfortunately we do not usually have capacity for undergraduate interns.