Prashant Singh

Key publications

P. Singh, F. Wrede, and A. Hellander, “Scalable machine learning-assisted model exploration and inference using sciope,” Bioinformatics, 2020.

M. Åkesson, P. Singh, F. Wrede, and A. Hellander, “Convolutional neural networks as summary statistics for approximate Bayesian computation,” IEEE/ACM Transactions on Computational Biology and Bioinformatics (accepted), 2021.

P. Singh, J. Van Der Herten, D. Deschrijver, I. Couckuyt, and T. Dhaene, “A sequential sampling strategy for adaptive classification of computationally expensive data,” Structural and Multidisciplinary Optimization, vol. 55, iss. 4, pp. 1425–1438, 2017.

P. Singh, I. Couckuyt, F. Ferranti, and T. Dhaene, “A constrained multi-objective surrogate-based optimization algorithm,” in 2014 IEEE Congress on Evolutionary Computation (CEC), 2014, pp. 3080–3087.


A. Coulier, P. Singh, M. Sturrock, and A. Hellander, “A pipeline for systematic comparison of model levels and parameter inference settings applied to negative feedback gene regulation,” bioRxiv, 2021.


Data as a research component holds great promise, particularly when analytical treatment is impractical or insufficient. Our research group explores the fundamentals of data-driven science, focusing on developing scalable and data-efficient methods especially in the field of computational biology.

Our research is highly multidisciplinary, and lies at the intersection of computational biology, machine learning, scientific computing and statistics. Examples of some high-level questions we consider include:

  • How do we obtain informative data? This theme draws upon methods from statistical sampling and considers the classic trade-off between exploration of unknown data patterns and exploitation of known interesting data patterns. An example problem in this context is the identification of potentially interesting patterns in very large datasets.
  • How do we extrapolate interesting patterns from data? Predictive machine learning models have delivered substantial progress in this area. Aspects we explore include the design of optimal training sets for surrogate ML models, scalable Bayesian modeling, and adaptive sampling methods for data-efficient, iterative surrogate model refinement.
  • How do we efficiently answer what-if scenarios? Such scenarios can often be cast as inverse problems, where one would like to, e.g., find the system parameters that give rise to an output behavior of interest. Such problems can be solved either as optimization problems or as parameter inference problems. The two approaches are very different, and each involves substantial challenges. Our recent work has focused on parameter inference of stochastic biochemical reaction networks.
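The parameter inference theme above can be illustrated with a minimal likelihood-free sketch using ABC (approximate Bayesian computation) rejection sampling. Everything here is a hypothetical stand-in for illustration only: the toy exponential "simulator", the mean summary statistic, the uniform prior, and the tolerance are not the group's actual models or pipeline, which target stochastic biochemical reaction networks.

```python
import random
import statistics

def simulate(rate, n=50, seed=None):
    # Hypothetical stochastic "model": n exponential waiting times
    # with the given rate parameter.
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n)]

def summary(data):
    # Summary statistic: the sample mean of the simulated output.
    return statistics.mean(data)

def abc_rejection(observed, prior, n_samples=2000, epsilon=0.05, seed=1):
    # Likelihood-free rejection sampling: draw parameters from the prior,
    # simulate, and keep parameters whose simulated summary lies within
    # epsilon of the observed summary.
    rng = random.Random(seed)
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_samples):
        theta = prior(rng)
        s_sim = summary(simulate(theta, seed=rng.randrange(10**9)))
        if abs(s_sim - s_obs) < epsilon:
            accepted.append(theta)
    return accepted

# "Observed" data generated with true rate 2.0; uniform prior on (0.5, 4.0).
observed = simulate(2.0, n=50, seed=42)
posterior = abc_rejection(observed, prior=lambda rng: rng.uniform(0.5, 4.0))
```

The accepted samples form an approximate posterior over the rate parameter, concentrated near the true value. In practice, the choice of summary statistic is critical, which is where learned summaries (e.g., convolutional neural networks, as in the group's publications) come in.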

We strive to make our research publicly available in the form of open-source software libraries and tools. Recent examples include Sciope, a Python toolbox for scalable inference, optimization and parameter exploration, and the Stochastic Simulation Service (StochSS) toolbox for modeling, simulation, inference and analysis of biochemical models. StochSS is also available as a free-to-use web service.



Last updated: 2022-11-30

Content Responsible: David Gotthold