Bayesian optimisation

Traditional global optimisers are sample-hungry, making them ill-suited to expensive functions, where each observation costs time, resources, or both.

Hyper-parameter tuning of ML models

Bayesian optimisation is the go-to method for global optimisation of expensive black-box functions. A case in point is the hyper-parameter tuning problem for a large neural network whose training can take hours, making each query of the network’s performance on a validation set a costly exercise. Hence, Bayesian optimisation has become the de facto standard for hyper-parameter tuning, especially for large and complex machine learning models with many hyper-parameters.
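
To make the idea concrete, here is a minimal sketch (not our production tooling) of the core loop: a Gaussian-process surrogate with a fixed RBF kernel and the Expected Improvement acquisition, applied to a one-dimensional toy objective that stands in for an expensive validation-loss query. All function names and kernel settings here are illustrative assumptions; real use would tune the kernel hyper-parameters and optimise the acquisition rather than scan a grid.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def objective(x):
    # Stand-in for an expensive black box (e.g. validation loss vs. one hyper-parameter).
    return np.sin(3.0 * x) + 0.5 * x

def rbf(a, b, length=0.3):
    # Squared-exponential kernel with a fixed, illustrative length-scale.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-6):
    # Standard GP regression equations via a Cholesky factorisation.
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # RBF prior variance is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best, xi=0.01):
    # EI for minimisation: expected amount by which a point beats the incumbent.
    ei = np.zeros_like(mu)
    for i, (m, s) in enumerate(zip(mu, sigma)):
        if s > 1e-9:
            z = (best - m - xi) / s
            ei[i] = (best - m - xi) * norm_cdf(z) + s * norm_pdf(z)
    return ei

# Sequential optimisation loop over a candidate grid.
grid = np.linspace(-2.0, 2.0, 201)
x_obs = np.array([-1.5, 0.0, 1.5])
y_obs = objective(x_obs)
for _ in range(10):
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print(y_obs.min())  # best value found; the true minimum of this toy objective is about -1.28
```

Each iteration spends one expensive evaluation where the surrogate predicts the largest expected gain, which is why the approach needs far fewer observations than a sample-hungry global optimiser.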

Experimental optimisation

Whilst hyper-parameter tuning provides a familiar example for a machine learning scientist, the far-reaching impact of Bayesian optimisation lies in its potential to accelerate the process of scientific discovery. Experimental exploration of a new system often serves as a vehicle of scientific discovery. We believe that Bayesian optimisation can provide a sample-efficient and mathematically sound approach to interrogate complex, unknown systems.

The future promise

We believe the time is not far off when Bayesian optimisation will mark its achievements by accelerating the discovery of a new alloy with outstanding mechanical properties, a new synthetic fabric with remarkable characteristics, or a new optimised stem-cell printing process enabling large-scale adoption of regenerative medicine.

Be a part of this journey

In this project, we take an ambitious pursuit to make Bayesian optimisation a scalable, flexible and favourite tool for scientific experimentation. A large part of this pursuit is currently being funded by the Australian Research Council through its Laureate Fellowship to Alfred Deakin Professor Svetha Venkatesh, Co-Director of A²I².

Researchers and collaborators across the world are welcome to join us in this endeavour.

Our key focus areas in Bayesian optimisation include, but are not limited to:

  1. High dimensional optimisation: High dimensional Bayesian optimisation is challenging because Gaussian-process-based modelling loses sensitivity in high dimensions when distance-based kernels are used. Additionally, optimising the acquisition function becomes difficult in a high dimensional space. Current work shows promise, but more needs to be done to make Bayesian optimisation scale to hundreds of dimensions with minimal assumptions.
  2. Multi-objective optimisation: Most experiments have multiple objectives, and it may not be easy to define an ‘exchange rate’ between objectives that reduces the problem to a single objective. Pareto-frontier discovery is an alternative, but it is difficult to scale to many-objective problems. The challenges are both computational and statistical: the number of observations needed to cover the Pareto frontier grows exponentially with the dimension of the output space. More research is needed to address these issues.
  3. Using prior knowledge: When scientists embark on a study, some fundamental behaviours of the new system are generally already well known. The ability to incorporate such information, whether available as existing mathematical relations or as natural language descriptions, will be crucial for wider adoption of Bayesian optimisation by the scientific community. Current work can incorporate simple trend and shape information about the objective function, but more needs to be done to widen the spectrum of usable prior information.
  4. Reasoning: The next leap in Bayesian optimisation will come when it can be used to reason about the system under interrogation in a way that a human can understand. This will open up new areas of knowledge distillation and create avenues for rich human-AI cooperation in experimental science.
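
The multi-objective challenge above hinges on the notion of the Pareto frontier: the set of solutions not dominated by any other. A minimal non-dominated filter (assuming every objective is minimised) illustrates the idea; multi-objective Bayesian optimisation builds acquisition functions, such as expected hypervolume improvement, on top of this notion. The function name and the example data are illustrative.

```python
import numpy as np

def pareto_front(points):
    """Return a boolean mask of non-dominated rows (all objectives minimised)."""
    points = np.asarray(points)
    mask = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        # Row j dominates row i if j is <= i in every objective and < in at least one.
        dominates_i = np.all(points <= points[i], axis=1) & np.any(points < points[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

# Five candidate experiments scored on two objectives (both to be minimised).
objs = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0], [2.5, 2.5]])
print(objs[pareto_front(objs)])  # the last two rows are dominated by [2, 2]
```

Because each new point on the frontier trades one objective against another, covering the frontier in a high-dimensional output space needs exponentially many observations, which is exactly the scaling difficulty described in item 2.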

Selected research projects

Product and process optimisation for advanced material synthesis

Short polymer fibres are an interesting class of new materials with the potential to create textiles with exotic properties. We are fortunate to partner with Dr Alessandra Sutti and her team from Deakin University’s Institute of Frontier Materials (IFM) to work on the computational aspects of producing this class of materials. Through our portfolio of Bayesian optimisation algorithms we have helped the team navigate large and complex design spaces efficiently. Notable projects that have been successfully concluded include:

a) rapid fibre synthesis with both quantitative and qualitative measures

b) multi-objective anti-pilling treatment optimisation for wool

c) functional optimisation for a recirculation-based yield-improvement process

Several other projects are currently underway.

Product and process optimisation for new alloy development

Alloy design is a time-consuming process, as even a minuscule change in the mixture formula or a slight adjustment to the heat-treatment process can result in different mechanical properties. Our suite of Bayesian optimisation algorithms has been successfully used to find new alloy compositions, and optimised production processes, that demonstrate outstanding mechanical properties. Our partners are using the power of Bayesian optimisation to pursue new research questions and design alloys with exotic properties, especially in relation to high-entropy alloys and the 3D printing process. Our partners include Professor Matthew Barnett, Director IFM; Professor Paul Sanders, Michigan Tech University; A/Prof Dan Fabijanic, IFM; and Dr Thomas Dorin, IFM.