Talks and presentations

Deep learning-guided adaptive sampling with uncertainty rewards enhances exploration in molecular dynamics simulations

March 28, 2023

Talk, American Chemical Society Spring Meeting 2023, Indianapolis, Indiana

Abstract: Methods development for the rapid exploration of the conformational ensemble of biological molecules remains an active area of research due to the difficulty of sampling rare state transitions in Molecular Dynamics (MD) simulations. In recent years, an increasing number of studies have exploited Machine Learning (ML) models to guide and analyze MD trajectories. Notably, diverse ML models have been developed to approximate optimal biasing potentials that force rare state transitions. In parallel, Deep Neural Network (DNN) models have been proposed to extract the kinetic properties of a simulated system. In this work, we show that the latter type of model can accelerate the exploration of a thermodynamic ensemble without introducing biasing forces. For this purpose, we combined a VAMPNet (a DNN model that learns transformations maximizing the VAMP-2 score of a set of trajectories) with different adaptive sampling approaches. In brief, we propose an iterative procedure in which a reward function selects restarting conformations from the latent space learned by a VAMPNet, and the network is then refined by training on the newly collected data. Since DNNs do not, in general, produce isometric transformations, we expected reward functions based on distance metrics in the latent space to underperform uncertainty-based rewards, and we validated this hypothesis. We support our observations with empirical results on typical test systems, which show up to a 100% improvement in exploration.
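
The abstract does not pin down a specific uncertainty estimator, so the following minimal sketch assumes one common choice: disagreement across an ensemble of independently trained VAMPNet-style encoders. Frames whose latent-space projections vary most across ensemble members receive the highest reward and are reseeded as new starting conformations. The function names, array shapes, and ensemble-variance reward are illustrative assumptions, not the implementation presented in the talk.

```python
import numpy as np


def uncertainty_rewards(latent_ensemble):
    """Reward each frame by the disagreement of an ensemble of encoders.

    latent_ensemble: array of shape (n_models, n_frames, n_latent_dims)
    holding the latent-space projection of every sampled frame under each
    independently trained VAMPNet-style encoder (hypothetical setup).
    Returns one reward per frame; higher means more uncertain.
    """
    # Variance across ensemble members, summed over latent dimensions:
    # frames on which the encoders disagree most get the largest reward.
    return np.var(latent_ensemble, axis=0).sum(axis=-1)


def select_restart_frames(latent_ensemble, n_restarts):
    """Pick the n_restarts most uncertain frames as seeds for the next round."""
    rewards = uncertainty_rewards(latent_ensemble)
    return np.argsort(rewards)[-n_restarts:]


# Toy usage: 5 ensemble members, 1000 frames, 2 latent dimensions.
rng = np.random.default_rng(0)
latents = rng.normal(size=(5, 1000, 2))
print("restart frame indices:", select_restart_frames(latents, n_restarts=10))
```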

Multi-agent reinforcement learning based adaptive sampling of conformational free energy landscapes of proteins

August 21, 2022

Talk, American Chemical Society Fall Meeting 2022, Chicago, Illinois

Abstract: Molecular Dynamics (MD) simulations have become a crucial tool in chemistry, biology, condensed matter physics, and materials science. Rare state transitions tend to be hard to sample even under massively parallel simulation schemes. To address this issue, several algorithms inspired by reinforcement learning (RL) have emerged to promote exploration along the slow collective variables (CVs) of complex systems. However, most of these algorithms are not well suited to leveraging the information gained by sampling a system from different initial states (e.g., a ligand-receptor system started from bound and unbound poses). To fill this gap, we propose an algorithm inspired by multi-agent RL that extends two closely related techniques (REAP and TSLC) to situations where sampling can be accelerated by learning from different regions of the CV landscape. Essentially, the algorithm works by remembering which agent discovered each conformation and sharing this information with the others at the action-space discretization step. In this way, each agent only senses rewards from regions of the landscape that it discovered. The consequences are threefold: (i) agents learn which CVs carry more weight in the reward function using only relevant data, (ii) they deprioritize redundant actions, and (iii) agents that obtain higher rewards are assigned more actions. The conformations deemed most rewarding are finally selected as starting points for new simulations. We compare our algorithm with baseline versions of LeastCounts-, REAP-, TSLC-, and AdaptiveBandit-based adaptive sampling to show and rationalize the gain in performance.
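
As a rough illustration of the reward-sharing idea, the sketch below pairs the information-sharing step with a simplified REAP-style reward (a weighted, standardized deviation of candidate states from each agent's own data) and uses k-means as the action-space discretization. Crediting a cluster to the agent whose frames dominate it, the fixed CV weights, and all names and shapes are assumptions made for the example; the per-agent optimization of CV weights described in the abstract is omitted. This is a sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans


def reap_style_reward(cluster_centers, agent_frames, cv_weights):
    """Weighted standardized deviation of candidate cluster centers from the
    mean of the CV values this agent has already visited (simplified REAP)."""
    mu = agent_frames.mean(axis=0)
    sigma = agent_frames.std(axis=0) + 1e-8
    return (cv_weights * np.abs(cluster_centers - mu) / sigma).sum(axis=1)


def allocate_actions(cv_frames, agent_of_frame, cv_weights_per_agent,
                     n_clusters=50, n_new_sims=20, seed=0):
    """Discretize the CV landscape, credit each cluster to one agent, score
    clusters with that agent's reward only, and split the simulation budget
    in proportion to the agents' total rewards."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(cv_frames)
    labels = km.labels_
    n_agents = len(cv_weights_per_agent)

    # Credit each cluster to the agent whose frames dominate it -- a simple
    # stand-in for "remembering which agent discovered each conformation".
    cluster_owner = np.array([
        np.bincount(agent_of_frame[labels == c], minlength=n_agents).argmax()
        for c in range(n_clusters)
    ])

    # Each agent only senses rewards from the clusters credited to it.
    rewards = np.zeros(n_clusters)
    for a in range(n_agents):
        own = cluster_owner == a
        if own.any():
            rewards[own] = reap_style_reward(km.cluster_centers_[own],
                                             cv_frames[agent_of_frame == a],
                                             cv_weights_per_agent[a])

    # Agents that accumulate higher rewards are assigned more new simulations
    # (rounding means the budget may be off by one or two).
    per_agent = np.array([rewards[cluster_owner == a].sum() for a in range(n_agents)])
    budget = np.round(n_new_sims * per_agent / per_agent.sum()).astype(int)
    return km.cluster_centers_, rewards, cluster_owner, budget


# Toy usage: 3 agents, 2 CVs, 3000 previously sampled frames.
rng = np.random.default_rng(1)
frames = rng.normal(size=(3000, 2))
owners = rng.integers(0, 3, size=3000)
weights = np.ones((3, 2)) / 2  # fixed CV weights; per-agent weight learning omitted
centers, rewards, cluster_owner, budget = allocate_actions(frames, owners, weights)
print("new simulations per agent:", budget)
```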