With the increasing capability of machine learning, data-driven algorithms now make many decisions that were traditionally made by humans.
Algorithms are already being used to short-list résumés, grant paroles and discharge patients from hospitals. It is therefore important to ensure that the decisions these algorithms make are congruent with the expectations of their human users. We have embarked on this research area, which we call Algorithmic Assurance.
Through our research in Algorithmic Assurance, we are working to:
- Develop a computational framework flexible enough to address various aspects of the assurance problem
- Efficiently verify whether a given algorithm conforms to user expectations or a given gold standard
- Reveal the scenario space where an algorithm is guaranteed to be assurable
- Fine-tune an algorithm with actively acquired additional data when it is not assurable in its current form
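As a minimal illustration of the conformance-checking step, the sketch below compares a model's decisions against a gold standard on sampled inputs and applies an assurance tolerance. The threshold functions, sample pool and tolerance value are all hypothetical placeholders, not part of our framework:

```python
import random

def agreement_rate(model, gold_standard, inputs):
    """Fraction of sampled inputs on which the model matches the gold standard."""
    matches = sum(model(x) == gold_standard(x) for x in inputs)
    return matches / len(inputs)

# Toy setting: the "gold standard" labels inputs by a threshold at 0;
# the model uses a slightly shifted threshold, so it disagrees near the boundary.
gold = lambda x: x >= 0.0
model = lambda x: x >= 0.15

random.seed(0)
pool = [random.uniform(-1, 1) for _ in range(1000)]  # sampled scenario space
rate = agreement_rate(model, gold, pool)
print(f"agreement: {rate:.2%}")

# Assurance decision against a stipulated (hypothetical) tolerance.
TOLERANCE = 0.95
print("assurable" if rate >= TOLERANCE else "not assurable: fine-tune with more data")
```

In practice the gold standard (e.g. human experts or an expensive simulator) is costly to query, which is why the framework aims to make this check sample-efficient and, when the model falls short, to acquire the additional labels actively rather than at random.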
Current projects include:
- Assuring algorithms against their stipulated use-case. Partner: Defence Science and Technology (DST), Australia.
- Assuring ML models against expensive simulators. Partner: Institute of Frontier Materials (IFM), Deakin University, Australia.