Deep learning
We study effective and efficient methods for end-to-end learning with minimal human supervision.
Differentiable machine learning (DML), known in recent years as deep learning, achieves its goals through compositional neural networks, iterative estimation, and differentiable programming. Our sub-areas include representation learning, deep reinforcement learning, generative models, modelling of graphs and relations, the design of memory and attention mechanisms, continual learning, learning to reason, and knowledge-based inference.
We apply DML to solve important real-world problems where data is abundant. The problem space spans the life sciences (health, drug design and genomics), the physical sciences (molecules and materials) and digital domains (computer vision, NLP, cybersecurity, software and recommender systems).
Differentiable machine learning
DML is learning through multiple steps of data transformation in a compositional fashion. It enables end-to-end learning from raw data to the tasks of interest.
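As a minimal illustration of this idea, here is a sketch (assuming PyTorch; the sizes and data are placeholders, not drawn from our projects) in which several differentiable transformations are composed and all of their parameters are trained jointly by backpropagating a single task loss:

```python
# Compositional, end-to-end differentiable learning: raw inputs flow through
# a stack of transformations, and every step is trained by gradient descent.
import torch
import torch.nn as nn

model = nn.Sequential(          # composition of differentiable steps
    nn.Linear(784, 256),        # step 1: project raw input
    nn.ReLU(),
    nn.Linear(256, 64),         # step 2: intermediate representation
    nn.ReLU(),
    nn.Linear(64, 10),          # step 3: task head (e.g. 10-class logits)
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)                 # dummy batch of raw data
y = torch.randint(0, 10, (32,))          # dummy labels
for _ in range(100):                     # iterative estimation
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                      # gradients flow through every step
    optimiser.step()
```

We target the following areas: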
Unsupervised learning
Learning without explicit labels is a hallmark of human intelligence. Machine learning thus far has relied heavily on large quantities of labels for its training signal. Leveraging unlabelled data, either through existing datasets or through self-exploration, will be critical to the next generation of DML. We investigate the following sub-areas:
- Representation learning: Learning starts with representation. Raw data may lie on hidden manifolds and contain noise, and thus may not be directly suitable for the tasks at hand. A goal of representation learning is to discover latent factors in the data that are invariant to small changes and insensitive to noise. Learning in the later stages then becomes easier (e.g. through better linearity and pre-conditioning) and the final performance improves (e.g. due to noise reduction and promoted invariance); see the sketch after this list.
- Generative models: The ability to model high-dimensional worlds and to imagine the future is fundamental to AI. Deep generative models offer great promise here. We investigate fundamental issues including stability, generalisation and catastrophic forgetting in Generative Adversarial Networks, as well as disentanglement in Variational Auto-Encoders. We also design new generative models for complex domains, including spatio-temporal and discrete data.
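To make both bullets concrete, here is a minimal Variational Auto-Encoder sketch (assuming PyTorch; the architecture sizes are illustrative assumptions): the encoder learns a noise-tolerant latent representation, and the decoder generates data from it.

```python
# A minimal VAE: an encoder maps data to latent factors, a decoder
# reconstructs (and generates) data from them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 128)
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction='sum')                 # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to unit Gaussian prior
    return recon + kl

# Generation: decode a draw from the prior.
x_new = VAE().dec(torch.randn(1, 16))
```

The KL term is what pushes the latent space towards a simple prior, which is also where the disentanglement questions mentioned above arise.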
Prior engineering
Successes in machine learning depend critically on having good priors. In deep learning, the strongest prior thus far has been the neural architecture, built on a small set of operators (signal filtering, convolution, recurrence, gating, memory and attention). It is therefore important to do proper architecture engineering, i.e., designing a neural network that best fits the problem at hand and, at the same time, enables faster learning. In particular, we derive modular networks for regular data such as matrices and tensors, as well as irregular data such as graphs, sets, temporal data with irregular timing, multiscale compositions and rankings.

We draw architectural inspiration from cognitive neuroscience, including the columnar structure of the neocortex for distributed processing, the thalamus for information routing, working memory for short tasks, and episodic memory for integrating information over time.

We expect these priors to play important roles in deep reinforcement learning, a field of study in which an agent perceives the world, acts on it, interacts with others, builds a theory of mind, imagines the future and receives feedback. Equipped with deep nets for perception, memory, statistical relational learning, and reasoning, we aim to bring reinforcement learning to a new level. The ultimate long-term prior is a unified cognitive architecture that can guide learning and reasoning across time scales.
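To ground one of these operators, here is a sketch of scaled dot-product attention (assuming PyTorch; the shapes are illustrative), the small differentiable building block behind content-based information routing in memory and attention mechanisms:

```python
# Scaled dot-product attention: queries score keys, and the resulting
# soft weights decide where to read information from.
import math
import torch

def attention(q, k, v):
    # q: (batch, n_q, d); k, v: (batch, n_kv, d)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)   # soft, differentiable addressing
    return weights @ v                        # content-based information routing
```

Because the addressing is a softmax rather than a hard lookup, where the network reads from is itself learned by gradient descent.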
Real-world applications of DML
We want to use DML to make the world a better place. Here are a few things we’re passionate about:
Accelerating life sciences
Healthcare: Modern hospitals and medical centres have collected huge amounts of clinical data on hundreds of millions of patients over the past decades. How to make the best use of this data to improve clinical services, however, remains a major open question. This research aims at characterising the data using statistical models and applying state-of-the-art deep learning techniques for representation, clustering and prediction at both the patient and cohort levels. The long-term goals include acquiring and reasoning about established medical knowledge; holding a meaningful dialogue with patients; recommending personalised courses of medical action; and supporting doctors and hospital managers in improving their precision and efficiency.
Drug design: We aim to characterise the drug-like chemical space, compute bioactivity, predict binding to proteins and, ultimately, generate new drug candidates with a given set of desirable properties.
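As one concrete baseline (a sketch, assuming RDKit and scikit-learn; the SMILES strings and activity labels below are placeholders, not real assay data), bioactivity prediction can start from Morgan fingerprints fed to a standard classifier:

```python
# Bioactivity prediction baseline: molecular fingerprints + random forest.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurise(smiles):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)  # radius-2 Morgan
    arr = np.zeros((2048,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]  # placeholder molecules
labels = [0, 0, 1]                                     # placeholder activity labels
X = np.stack([featurise(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.predict_proba(X)[:, 1])   # predicted activity probabilities
```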
Genomics: We aim to leverage the growing databases to unlock one of the biggest mysteries of nature – our genomes. We want to map genotype to phenotype, answer genomic queries over a given sequence (DNA, mtDNA, RNA, etc.), predict drug-protein interactions, and learn to generate DNA.
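A minimal sketch of the sequence-modelling side (assuming PyTorch; the architecture sizes and the binary output are illustrative assumptions): one-hot encode a DNA sequence and score it with a small 1-D convolutional network:

```python
# DNA sequence -> one-hot channels -> 1-D conv "motif detectors" -> prediction.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq):
    x = torch.zeros(len(BASES), len(seq))
    for i, b in enumerate(seq):
        x[BASES.index(b), i] = 1.0
    return x

model = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=8),   # motif detectors over the sequence
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),           # pool over all positions
    nn.Flatten(),
    nn.Linear(32, 2),                  # e.g. binds / does not bind
)
x = one_hot("ACGTACGTACGTACGTACGT").unsqueeze(0)  # shape (1, 4, L)
print(model(x))
```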
Accelerating physical sciences
Our current focus within the physical sciences is molecular and materials science. We use DML to characterise chemical space and to replace expensive physical computation and experiments: computing quantum chemical properties, predicting molecular properties and interactions, predicting chemical reactions, understanding the structure and characteristics of materials, searching for new alloys, and generating molecules and crystals. Part of this work accelerates new product development by finding the best configurations for producing new products.
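Molecular property prediction is often framed as learning on graphs. Below is a sketch of one round of neural message passing over a molecular graph (plain PyTorch, no graph library; the feature sizes and toy adjacency matrix are illustrative assumptions):

```python
# Message passing on a molecular graph: atoms exchange information along
# bonds, then a readout predicts a molecule-level property.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, h, adj):
        # h: (n_atoms, dim) atom states; adj: (n_atoms, n_atoms) bonds
        m = adj @ self.msg(h)        # sum messages from bonded neighbours
        return self.upd(m, h)        # update each atom's state

dim = 16
h = torch.randn(5, dim)                        # 5 atoms with random features
adj = torch.tensor([[0, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 1, 1],
                    [0, 0, 1, 0, 0],
                    [0, 0, 1, 0, 0]], dtype=torch.float)
layer = MessagePassingLayer(dim)
h = layer(layer(h, adj), adj)                  # two rounds of message passing
readout = nn.Linear(dim, 1)
print(readout(h.sum(dim=0)))                   # graph-level property prediction
```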
Cracking problems in digital domains
This spans several sub-areas:
- Computer vision, in which we want to understand and reason about videos, talk about movies, read children's stories, monitor streets and detect unusual baby movements
- Natural language processing, in which we build agents that read text, answer multi-turn questions, hold long conversations, share human values and talk about health. We also apply NLP to scientific texts and the emergence of ideas: learning to read the scientific literature, extract machine-computable knowledge, model the emergence and dynamics of ideas over time, and support scientific reasoning and question answering
- Smart homes, in which we model human activities and sensing systems/IoT within a home to assist and empower the tenants in their everyday lives. An important AI goal is to build a situated conversational agent that can hold meaningful conversations with tenants. A high-impact setting is aged-care homes.
- Automated software engineering, in which we want to build a machine that can read code, detect defects, fix bugs, synthesise programs, translate between programming languages, automate the programming process, and understand developers in order to support team management
- Cybersecurity through anomaly detection, in which we detect unusual behaviours without supervision (see the sketch after this list)
- Recommender systems, in which we automate the delivery of personalised content to the end-user
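For the cybersecurity bullet above, one common unsupervised recipe is to flag events by reconstruction error: an autoencoder is fit to normal behaviour only, and events it reconstructs poorly are marked as unusual. A minimal sketch (assuming PyTorch; the feature sizes, stand-in data and 3-sigma threshold are illustrative assumptions):

```python
# Unsupervised anomaly detection via autoencoder reconstruction error.
import torch
import torch.nn as nn

ae = nn.Sequential(
    nn.Linear(20, 8), nn.ReLU(),   # compress behaviour features
    nn.Linear(8, 20),              # reconstruct them
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
normal = torch.randn(512, 20)                  # stand-in for normal events
for _ in range(200):
    opt.zero_grad()
    loss = ((ae(normal) - normal) ** 2).mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    events = torch.randn(64, 20)               # stand-in for new events
    errors = ((ae(events) - events) ** 2).mean(dim=1)
    threshold = errors.mean() + 3 * errors.std()   # simple 3-sigma rule
    print((errors > threshold).nonzero())          # indices of flagged anomalies
```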