Posts by Collection

portfolio

publications

Intriguing Role of Water in Plant Hormone Perception

Published in bioRxiv, 2021

Abstract Plant hormones are small molecules that regulate plant growth, development, and responses to biotic and abiotic stresses, and they are specifically recognized by the binding sites of their receptors. In this work, we investigated how water displacement and reorganization at the binding sites of plant receptors affect the binding of eight classes of phytohormones (auxin, jasmonate, gibberellin, strigolactone, brassinosteroid, cytokinin, salicylic acid, and abscisic acid), using extensive molecular dynamics simulations and inhomogeneous solvation theory. Our findings demonstrate that the displacement of water molecules by phytohormones contributes to the free energy of binding via entropy gain and is associated with free energy barriers. Our results also show that the displacement of unfavorable water molecules in the binding site can be exploited in rational agrochemical design. Overall, this study uncovers the role of water molecules in plant hormone perception, opening new avenues for agrochemical design to target plant growth and development.

Recommended citation: Zhao, Chuankai, Diego Eduardo Kleiman, and Diwakar Shukla. "Intriguing Role of Water in Plant Hormone Perception." bioRxiv (2021). https://www.biorxiv.org/content/10.1101/2021.10.04.462894v1.full

Refining the RNA Force Field with Small-Angle X-ray Scattering of Helix–Junction–Helix RNA

Published in The Journal of Physical Chemistry Letters, 2022

Abstract The growing recognition of the functional and therapeutic roles played by RNA, together with the difficulty of gaining atomic-level insights from experiments, is paving the way for all-atom simulations of RNA. One of the main impediments to the use of all-atom simulations is the imbalance between the energy terms of RNA force fields. Through exhaustive sampling of an RNA helix–junction–helix (HJH) model using enhanced sampling, we critically assessed select AMBER force fields against small-angle X-ray scattering (SAXS) experiments. The tested AMBER99SB, DES-AMBER, and CUFIX force fields all show deviations from the measured profiles. We first identified the parameters leading to these inconsistencies. Then, to balance the forces governing RNA folding, we adopted strategies to refine the hydrogen-bonding, backbone, and base-stacking parameters. We validated the modified force field (HB-CUFIX) against SAXS data for the HJH model at different ionic strengths. Moreover, we tested a set of independent RNA systems to cross-validate the force field. Overall, HB-CUFIX demonstrates improved performance in studying the thermodynamic and structural properties of realistic RNA motifs.

Recommended citation: He, Weiwei, et al. "Refining the RNA Force Field with Small-Angle X-ray Scattering of Helix–Junction–Helix RNA." The Journal of Physical Chemistry Letters 13.15 (2022): 3400-3408. https://pubs.acs.org/doi/full/10.1021/acs.jpclett.2c00359

Multiagent Reinforcement Learning-Based Adaptive Sampling for Conformational Dynamics of Proteins

Published in Journal of Chemical Theory and Computation, 2022

Abstract Machine learning is increasingly applied to improve the efficiency and accuracy of molecular dynamics (MD) simulations. Although the growth of distributed computer clusters has allowed researchers to obtain higher amounts of data, unbiased MD simulations have difficulty sampling rare states, even under massively parallel adaptive sampling schemes. To address this issue, several algorithms inspired by reinforcement learning (RL) have arisen to promote exploration of the slow collective variables (CVs) of complex systems. Nonetheless, most of these algorithms are not well-suited to leverage the information gained by simultaneously sampling a system from different initial states (e.g., a protein in different conformations associated with distinct functional states). To fill this gap, we propose two algorithms inspired by multiagent RL that extend the functionality of closely related techniques (REAP and TSLC) to situations where the sampling can be accelerated by learning from different regions of the energy landscape through coordinated agents. Essentially, the algorithms work by remembering which agent discovered each conformation and sharing this information with others at the action-space discretization step. A stakes function is introduced to modulate how different agents sense rewards from discovered states of the system. The consequences are three-fold: (i) agents learn to prioritize CVs using only relevant data, (ii) redundant exploration is reduced, and (iii) agents that obtain higher stakes are assigned more actions. We compare our algorithm with other adaptive sampling techniques (least counts, REAP, TSLC, and AdaptiveBandit) to show and rationalize the gain in performance.

Recommended citation: Kleiman, Diego E., and Diwakar Shukla. "Multiagent Reinforcement Learning-Based Adaptive Sampling for Conformational Dynamics of Proteins." J. Chem. Theory Comput. (2022). https://pubs.acs.org/doi/10.1021/acs.jctc.2c00683
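The stakes mechanism described in the abstract can be illustrated with a minimal sketch. The linear stakes function below, the toy counts, and all names are illustrative assumptions rather than the paper's actual implementation: each agent's stake in a cluster of the shared action space is taken as the fraction of that cluster's conformations the agent discovered, and the stake scales how strongly the agent senses that cluster's reward.

```python
import numpy as np

def stakes(agent_counts):
    """Fraction of each cluster's discovered conformations credited to
    each agent (a hypothetical linear stakes function).  Entry [i, j]
    of the input is the number of conformations in cluster j that were
    discovered by agent i."""
    totals = agent_counts.sum(axis=0, keepdims=True)
    return agent_counts / np.maximum(totals, 1)

# Toy example: 2 agents, 3 clusters of the shared action space.
counts = np.array([[8, 2, 0],
                   [0, 3, 5]])
s = stakes(counts)                     # each column sums to 1
rewards = np.array([1.0, 0.5, 2.0])    # hypothetical cluster rewards
perceived = s * rewards                # per-agent modulated rewards
```

In this toy setting, agent 0 holds the full stake in cluster 0 and none in cluster 2, so it perceives that first cluster's full reward and ignores the last; this is one way the redundant exploration mentioned in the abstract can be reduced.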

Active Learning of the Conformational Ensemble of Proteins Using Maximum Entropy VAMPNets

Published in Journal of Chemical Theory and Computation, 2023

Abstract Rapid computational exploration of the free energy landscape of biological molecules remains an active area of research due to the difficulty of sampling rare state transitions in molecular dynamics (MD) simulations. In recent years, an increasing number of studies have exploited machine learning (ML) models to enhance and analyze MD simulations. Notably, unsupervised models that extract kinetic information from a set of parallel trajectories have been proposed including the variational approach for Markov processes (VAMP), VAMPNets, and time-lagged variational autoencoders (TVAE). In this work, we propose a combination of adaptive sampling with active learning of kinetic models to accelerate the discovery of the conformational landscape of biomolecules. In particular, we introduce and compare several techniques that combine kinetic models with two adaptive sampling regimes (least counts and multiagent reinforcement learning-based adaptive sampling) to enhance the exploration of conformational ensembles without introducing biasing forces. Moreover, inspired by the active learning approach of uncertainty-based sampling, we also present MaxEnt VAMPNet. This technique consists of restarting simulations from the microstates that maximize the Shannon entropy of a VAMPNet trained to perform the soft discretization of metastable states. By running simulations on two test systems, the WLALL pentapeptide and the villin headpiece subdomain, we empirically demonstrate that MaxEnt VAMPNet results in faster exploration of conformational landscapes compared with the baseline and other proposed methods.

Recommended citation: Kleiman, Diego E., and Diwakar Shukla. "Active Learning of the Conformational Ensemble of Proteins Using Maximum Entropy VAMPNets." J. Chem. Theory Comput. (2023). https://pubs.acs.org/doi/abs/10.1021/acs.jctc.3c00040
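The MaxEnt VAMPNet selection step lends itself to a short sketch. Assuming a trained VAMPNet whose softmax output gives each frame's soft assignment to metastable states (the probability array below is a toy stand-in for such output, not data from the paper), restart candidates are simply the frames of maximal Shannon entropy:

```python
import numpy as np

def maxent_restarts(soft_assignments, n_restarts):
    """Pick the frames whose soft state assignments (e.g. the softmax
    output of a trained VAMPNet) have maximal Shannon entropy, i.e.
    the frames the kinetic model is least certain about."""
    p = np.clip(soft_assignments, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(entropy)[::-1][:n_restarts]

# Toy example: 4 frames softly assigned to 3 metastable states.
probs = np.array([[0.98, 0.01, 0.01],   # confident -> low entropy
                  [0.34, 0.33, 0.33],   # uncertain -> high entropy
                  [0.70, 0.20, 0.10],
                  [0.50, 0.50, 0.00]])
picked = maxent_restarts(probs, n_restarts=2)
```

Restarting from such high-entropy microstates targets the boundaries between metastable states, which is where additional sampling most improves the kinetic model.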

talks

Multi-agent reinforcement learning-based adaptive sampling of conformational free energy landscapes of proteins

Published:

Abstract Molecular dynamics (MD) simulations have become a crucial tool in chemistry, biology, condensed matter physics, and materials science. Rare state transitions tend to be hard to sample even under massively parallel simulation schemes. To address this issue, several algorithms inspired by reinforcement learning (RL) have arisen to promote exploration of the slow collective variables (CVs) of complex systems. However, most of these algorithms are not well suited to leverage the information gained by sampling a system from different initial states (e.g., a ligand-receptor system starting from bound and unbound poses). To fill this gap, we propose an algorithm inspired by multi-agent RL that extends the functionality of two closely related techniques (REAP and TSLC) to situations where sampling can be accelerated by learning from different regions of the CV landscape. Essentially, the algorithm works by remembering which agent discovered each conformation and sharing this information with the others at the action-space discretization step. In this way, agents only sense rewards from regions of the landscape that they discovered. The consequences are threefold: (i) agents learn which CVs carry more weight in the reward function using only relevant data, (ii) they deprioritize redundant actions, and (iii) agents that obtain higher rewards are assigned more actions. The conformations deemed most rewarding are finally selected as starting points for new simulations. We compare our algorithm with baseline versions of least counts, REAP, TSLC, and AdaptiveBandit-based adaptive sampling to show and rationalize the gain in performance.

Deep learning-guided adaptive sampling with uncertainty rewards enhances exploration in molecular dynamics simulations

Published:

Abstract Methods development for the rapid exploration of the conformational ensemble of biological molecules remains an active area of research due to the difficulty of sampling rare state transitions in molecular dynamics (MD) simulations. In recent years, an increasing number of studies have exploited machine learning (ML) models to guide and analyze MD trajectories. Notably, diverse ML models have been developed to approximate optimal biasing potentials that force rare state transitions. On the other hand, deep neural network (DNN) models have been proposed to extract the kinetic properties of a simulated system. In this work, we show that the latter type of model can accelerate the exploration of a thermodynamic ensemble without introducing biasing forces. For this purpose, we combined a VAMPNet (a DNN model that learns transformations maximizing the VAMP-2 score of a set of trajectories) with different adaptive sampling approaches. In brief, we propose an iterative procedure in which a reward function selects restarting conformations from the latent space learned by a VAMPNet, which is then refined by training on the newly collected data. Since DNNs do not, in general, produce isometric transformations, we confirmed our expectation that reward functions based on distance metrics in the latent space tend to perform poorly compared with uncertainty-based rewards. We support our observations with empirical results on typical test systems, which show up to 100% improvement in exploration.
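The contrast drawn in this abstract, distance-based versus uncertainty-based rewards in a learned latent space, can be sketched as follows. Both reward functions below are illustrative simplifications under assumed inputs, not the implementations used in the work: the distance reward scores novelty by latent-space geometry, while the entropy reward scores the model's own uncertainty.

```python
import numpy as np

def distance_reward(latent, visited):
    # Novelty as distance to the nearest previously visited latent point.
    # Because a DNN encoder is generally not isometric, a large latent
    # distance need not correspond to a genuinely new conformation.
    d = np.linalg.norm(latent[:, None, :] - visited[None, :, :], axis=-1)
    return d.min(axis=1)

def entropy_reward(soft_assignments):
    # Shannon entropy of the model's soft state assignment; independent
    # of latent-space geometry.
    p = np.clip(soft_assignments, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

# Toy example: two candidate frames in a 2D latent space.
visited = np.array([[0.0, 0.0]])
latent = np.array([[3.0, 0.0],    # far away, but confidently assigned
                   [0.1, 0.1]])   # nearby, but the model is uncertain
probs = np.array([[0.97, 0.03],
                  [0.50, 0.50]])
d_r = distance_reward(latent, visited)   # prefers the distant frame
e_r = entropy_reward(probs)              # prefers the uncertain frame
```

The two rewards disagree on this toy pair: the geometrically distant frame is confidently assigned (likely not new), while the nearby frame is the one the model is uncertain about, which illustrates why distance metrics can mislead in non-isometric latent spaces.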

teaching

LAS 291/292: Global Perspectives for Intercultural Learning

Undergraduate course, University of Illinois at Urbana-Champaign, College of Liberal Arts and Sciences, 2023

Teaching Assistant.

Prepares students who are going abroad for a semester or academic year for their transition by:
a) examining expectations,
b) focusing on the purpose and value of the abroad experience,
c) preparing students culturally and logistically,
d) addressing issues of culture shock,
e) helping students articulate their experience for future personal and professional goals,
f) enhancing intercultural communication and global understanding, and
g) assisting with re-entry planning.
May be repeated in separate terms.