Interview | Open Access

Multiscale computational modeling offers key to understanding molecular logic underpinning development and disease

    Himanshu Kaul

    *Author for correspondence:

    E-mail Address: himanshu.kaul@leicester.ac.uk

    University of Leicester, School of Engineering, Leicester, LE1 7RH, UK

    Department of Respiratory Sciences, University of Leicester, Leicester, LE1 9HN, UK

    Published online: https://doi.org/10.2144/btn-2023-0039

    Please can you introduce yourself and your research background?

    I am Dr. Himanshu Kaul, and I am a Royal Academy of Engineering Research Fellow at the University of Leicester (UK). Currently, I have cross-appointments with the School of Engineering and the Department of Respiratory Sciences. I am a biomedical engineer by training. I did my bachelor's at Arizona State University (AZ, USA) and then moved to the UK to do my PhD at the University of Oxford (UK). I then undertook a postdoc with Professor Peter Zandstra first at the University of Toronto (Canada) and then at the University of British Columbia (Canada). I also had a stint at the University of Sheffield working as part of the AirPROM consortium [1].

    My research focuses on multiscale emergent bioengineering. This entails the development of in vitro and in silico approaches to understand how processes, mechanisms and phenomena integrate across scales, from one layer of decision-making, be it genomic, gene regulatory or network-oriented, to the cell and tissue levels. Essentially, I integrate in vitro, in vivo and in silico approaches so we can better understand why certain diseases emerge, what multiscale mechanisms mediate their emergence, and how we can use that knowledge to create better therapies for our patients. Over the course of my career, my focus has been on asthma, but I've also worked in stem cell bioengineering with Professor Peter Zandstra (University of British Columbia), investigating the emergence of germ-layer markers in pluripotent stem cell colonies.

    It's certainly been a rewarding journey and I have enjoyed learning about different cultures and meeting new people along the way.

    You recently published a study using virtual cells in a virtual microenvironment to recapitulate early development-like patterns in human pluripotent stem cell colonies. Can you give an overview of the computational model used in the study?

    The fundamental goal was to understand the earliest decision-making steps during human development. There are different ways you can approach this, for example, via gene regulatory networks (GRNs) or reaction-diffusion models. GRNs describe how molecular regulators, interacting with each other and with other intracellular components, control the functions of a cell. They consist of nodes, which represent genes and their regulators, and edges, which represent the relationships between the nodes, for example, whether gene A activates or represses gene B. Boolean GRN models represent each node as either OFF or ON. Despite this simplification, Boolean GRN models can reveal dynamic properties of the system. Reaction-diffusion models, by contrast, typically use differential equations to capture interactions between two diffusible molecules and how those interactions shape the spatial gradients of those molecules.
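
    To make the Boolean idea concrete, here is a purely illustrative two-gene example in Python, a hypothetical network rather than one from the study, in which gene A activates gene B and B represses A. Even this minimal rule set produces dynamic behavior.

```python
# Purely illustrative two-gene Boolean network (hypothetical, not from the study):
# gene A activates gene B, and B represses A. Each node is simply ON (True) or
# OFF (False), and all nodes are updated synchronously from the current state.

def grn_step(state):
    """One synchronous Boolean update of the toy network."""
    return {
        "A": not state["B"],   # A stays ON unless repressed by B
        "B": state["A"],       # B turns ON when activated by A
    }

state = {"A": True, "B": False}
for t in range(6):
    print(t, state)
    state = grn_step(state)
# Even this two-node simplification shows dynamics: the states cycle with
# period four, a Boolean analogue of an oscillation.
```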

    My approach was to create a multiscale framework [2] to study the phenomenon bottom-up: from the gene network layer to the tissue layer. So, I developed a gene-regulatory agent-based reaction-diffusion modeling environment (GARMEN). We started with a very simple GRN of human peri-gastrulation [3] to capture aspects of human gastrulation in a dish. These gene networks were embedded within agents [4], which represented human pluripotent stem cells (hPSCs): essentially virtual hPSCs. The GRN regulated cell fate, meaning it dictated whether the virtual cell changed its state (i.e., differentiated) or released a certain protein. The virtual cells were embedded within a dynamic environment, where the spatial and temporal signal gradients could change due to virtual cell activity. These virtual cells, therefore, mediated changes in the environmental signals over time and, as a result, themselves underwent changes, essentially differentiating based on the concentration of the local signal molecules. This enabled us to couple GRN activity with tissue-level outcomes.
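
    As a rough sketch of how such a coupling can be wired together, the loop below lets virtual cells secrete a diffusible signal, lets that signal diffuse and decay in the shared environment, and lets each cell's Boolean rule read its local concentration to decide whether to differentiate. This is an illustration under assumed names, rules and parameters, not GARMEN's actual code.

```python
# Minimal illustration (not GARMEN itself) of coupling a per-cell Boolean rule
# to a shared diffusible signal on a 1D grid. The secretion rate, threshold and
# the single "differentiated" state are assumptions made for brevity.
import numpy as np

N_CELLS, DT, D, DECAY = 50, 0.1, 0.5, 0.01
signal = np.zeros(N_CELLS)   # local signal concentration at each cell's position
states = [{"PLURI": True, "DIFF": False} for _ in range(N_CELLS)]

def grn_step(state, local_signal):
    """Toy Boolean rule: a high local signal switches the cell to a differentiated state."""
    diff = state["DIFF"] or local_signal > 1.0
    return {"PLURI": not diff, "DIFF": diff}

def diffusion_step(c):
    """Explicit Euler step of 1D diffusion with first-order decay (no-flux boundaries)."""
    lap = np.zeros_like(c)
    lap[1:-1] = c[:-2] - 2 * c[1:-1] + c[2:]
    lap[0], lap[-1] = c[1] - c[0], c[-2] - c[-1]
    return c + DT * (D * lap - DECAY * c)

for step in range(200):
    # 1) pluripotent virtual cells secrete signal into their local environment
    for i, s in enumerate(states):
        if s["PLURI"]:
            signal[i] += 0.01
    # 2) the environment evolves: the secreted signal diffuses and decays
    signal = diffusion_step(signal)
    # 3) each cell's rule reads its local concentration and updates cell fate
    states = [grn_step(s, signal[i]) for i, s in enumerate(states)]

print(sum(s["DIFF"] for s in states), "of", N_CELLS, "virtual cells differentiated")
```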

    In summary, the critical novelty of the study was the multiscale approach that allowed us to discretize GRNs in space. By embedding the GRNs in agents, we were able to link thousands of GRNs in space and understand the spatial and temporal changes between the different agents, how the GRN was acting within these virtual cells and how the different decision-making layers were evolving in sync with each other. This was not feasible previously.

    What was your motivation for developing GARMEN?

    When I was working at the University of Sheffield, I created a virtual asthma patient [5–7] to capture asthma pathogenesis. We were able to use this virtual patient to predict, with quite a bit of accuracy, the impact of three different asthma drugs at the patient level. However, when I used the model to make patient-specific predictions, even though I could capture patient trends, the predicted absolute values for responders and non-responders weren't exactly accurate. For example, if a patient was responding to therapy and their response effectiveness was 75%, the model was predicting 50%. And if there was a non-responder whose response to therapy was a 2% improvement, the model was predicting 6%. I felt that I was missing something that captured the uniqueness of the individual patients.

    Upon reflection, I realized what I was missing were gene regulatory networks capturing asthma pathobiology, and I hypothesized that embedding these GRNs within my virtual patients could potentially increase prediction accuracy. So, I started looking for labs and projects where I could build up the requisite knowledge and skill set, and create something that I could then take further. I realized pretty quickly that doing that with patient data was going to be an absolute nightmare because the data are so messy and creating gene networks is not easy. The alternative was to do something in vitro with a simpler model, which would be highly reproducible but also amenable to manipulation. In the study where we created GARMEN, the underlying in vitro platform was central to building a robust computational framework and conducting validation. The platform was quite simple, highly reproducible and high throughput, so we could generate as many data points as possible for statistics, which would not be possible with patients.

    What were the main challenges during this process and how did you overcome them?

    Let's start with the very low-level, technical ones. When you are solving a differential equation-based model, time is inbuilt. However, when you create agent-based models, which are typically not equation-based, there is no inbuilt time. The same goes for GRNs: if they are of the Boolean kind, there is no inbuilt time. To integrate these three modules, the gene, the cell and the microenvironment, we had to figure out how to align them in time, as I wanted to make predictions that were time-sensitive.

    To overcome this in my agent-based model, I introduced cell division, or proliferation. So, we would start with a non-confluent virtual hPSC colony, and the hPSCs would first proliferate until confluence. This also reflected the experimental initial condition in vitro. We could have started with a colony that was already confluent and ready to initiate patterning. But by adding proliferation, that is, a rate of change in cell number, I was able to introduce time into the agent-based model. From there it was easy to integrate the agent-based module with the reaction-diffusion layer such that both could progress temporally in sync. Rule-based GRN models also do not have implicit time, so the approach of embedding them in the time-sensitive agent-based model was key to linking all three modules in time. This was a significant achievement worth highlighting, especially as this aspect is not well explored in multiscale modeling within biomedical fields.
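
    A minimal sketch of this kind of time alignment is shown below; the time step, doubling time and update interval are placeholder values, not the study's. The agent layer acquires a clock by calibrating growth to a doubling time, and the reaction-diffusion solver and the rule-based GRN are then advanced a fixed number of steps, or once, per agent tick.

```python
# Illustrative sketch (assumed numbers, not the study's implementation) of aligning
# three modules in time: a PDE solver with an explicit time step, an agent layer
# whose clock comes from a proliferation (doubling) rate, and a rule-based GRN
# that is updated once per agent tick.

PDE_DT_H = 0.05           # reaction-diffusion solver step, in hours (assumed)
AGENT_DT_H = 1.0          # one agent/GRN update per simulated hour (assumed)
DOUBLING_TIME_H = 18.0    # doubling time used to calibrate agent-layer time (assumed)

PDE_STEPS_PER_AGENT_STEP = int(AGENT_DT_H / PDE_DT_H)   # 20 solver steps per agent tick

def simulate(hours):
    n_cells = 100.0
    for hour in range(hours):
        for _ in range(PDE_STEPS_PER_AGENT_STEP):
            pass   # advance the reaction-diffusion field by PDE_DT_H (omitted here)
        # agent layer: growth calibrated so the population doubles every DOUBLING_TIME_H
        n_cells *= 2 ** (AGENT_DT_H / DOUBLING_TIME_H)
        # GRN layer: one Boolean update per agent tick (rule-based, no intrinsic time)
    return n_cells

print(round(simulate(18)))   # ~200 cells: one doubling after 18 simulated hours
```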

    Another challenge was validating our methodology and data. We wanted to show high confidence in our predictions, and that meant determining which experiments to perform so that we had an almost one-to-one correlation between what we were seeing in silico versus in vitro, but then also teasing out at what level we could show validation. For example, to show validation in the study, we compared how certain germ-layer markers, such as mesodermal and endodermal markers, were expressed under different conditions. Following computational prediction, we explicitly showed that in the control condition they diverge, so the separation between the loci of those two markers should be statistically significant. Whereas in some of the other conditions, such as where we had dynamically silenced OCT4 in the hPSCs, the separation between the loci would not be statistically significant because the two layers would not have separated, consistent with model predictions.
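
    By way of illustration only, that comparison can be framed as a statistical test on the loci (radial positions) of two markers. The data below are synthetic placeholders and the choice of an unpaired t-test is an assumption for this sketch, not necessarily the test used in the study.

```python
# Hypothetical sketch: are the radial loci of two germ-layer markers statistically
# separated in a given condition? The numbers are synthetic; in the study, loci
# were measured from micropatterned colonies and their in silico counterparts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
conditions = {
    # control: mesodermal and endodermal loci assumed to sit at distinct radii
    "control": (rng.normal(0.65, 0.05, 30), rng.normal(0.80, 0.05, 30)),
    # perturbed (e.g., an OCT4-silenced analogue): loci assumed to overlap
    "perturbed": (rng.normal(0.72, 0.06, 30), rng.normal(0.73, 0.06, 30)),
}

for name, (meso_loci, endo_loci) in conditions.items():
    t, p = stats.ttest_ind(meso_loci, endo_loci)
    verdict = "loci diverge" if p < 0.05 else "no clear separation"
    print(f"{name}: p = {p:.3g} -> {verdict}")
```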

    More generally, and I think this is of importance to anyone who reads this and wants to do interdisciplinary work, it was challenging to integrate overlapping terminologies from three different mathematical models as well as from our in vitro approach. So, for example, the term "growth" can be interpreted in many different ways depending on the expert's perspective. As such, we wanted to preserve the meaning of the words in different contexts, but in a simple and coherent way to ensure we got our message across. This was a huge challenge, but I'm glad to have done it.

    Had you thought about the importance of defining terms before you started this multidisciplinary study?

    I wouldn't say this was the first time I experienced it, but it was certainly a bigger challenge given the three different mathematical paradigms and validation with in vitro data. Our team boasted experts specializing in different aspects of the project: the GRN, reaction-diffusion and agent-based modeling, and micropatterning. This forced the team to think about how we could present the message in a way that does not confound the terms we use across the three mathematical formulations, but at the same time is also accessible to a general audience. Full credit to Dr. Daniel Aguilar-Hidalgo [8] for leading from the front on this particular aspect.

    Your results were verified using technology developed at Professor Peter Zandstra's lab. Could you briefly describe the technology used and how the lab validated your results?

    To begin explaining this, I would start with a high-level overview of human gastrulation, which is the stage in human embryo development where the germ layers form. In the later stages of development, these layers will be responsible for the formation of all tissues and organs. The phenomenon is not very well understood because embryo research is limited to 14 days as that's when gastrulation commences. Therefore, the work on human embryos and human gastrulation is quite limited.

    To address this issue, Peter Zandstra's [9] and Ali Brivanlou's [10] labs (the latter at Rockefeller University) cultured hPSCs on extracellular matrix patterns of defined shape and size. When these cells were exposed to BMP4, they reorganized in a manner reminiscent of gastrulation. It's a good approximation of gastrulation; hence, we refer to it as peri-gastrulation. This technique has, in a short time, broadened our understanding of the steps that take cells from a pluripotent fate to these different germ-layer fates.

    Peter's lab created a robust in vitro platform that enabled high-content screening of these colonies and their ability to undergo peri-gastrulation [11]. The approach was incredibly simple, very robust and amenable to modification, so you could change the protein you were adding to initiate differentiation, or you could engineer cell lines and micropattern them on the system. You could even introduce synthetic gene circuits within the micropatterned cells [12]. All these attributes made this technology ideal for developing and validating the in silico multiscale framework.

    Whether we started differentiation with different proteins, or whether we silenced the activity of a particular gene, we could pretty much do a one-to-one correlation between the in vitro and in silico systems. For example, we used a WNT agonist rather than BMP4 to trigger differentiation in one of the perturbation experiments. In others, we used an engineered cell line in which OCT4 activity was silenced before and during differentiation. I am very proud of that work, and incredibly impressed by Dr. Ross Jones [13], who engineered the cell lines in a matter of a few weeks.

    Why are computational models so important for the understanding of disease development?

    Computational models allow you to have a large degree of control over the processes you want to study, especially how the components work with each other. This allows you to generate testable hypotheses. You can tightly control the parameters within a computational model, so you can vary one and observe the overall impact on the performance of the system as a whole, meaning you can really understand the impact of lower-level mechanisms at the systems level. Doing that in vivo or in vitro can be quite challenging because of the redundancies that are built into biological networks. Computational models are a way to simplify those to an extent, so you can generate meaningful hypotheses and test them in vitro or in vivo.

    Additionally, computer models save you time and resources. If you have a validated framework that approximates the outcome of interest with tolerable error, then it can help improve your study design and/or help investigate phenomena more systematically. This is particularly beneficial, for example, if you have limited resources and are trying to decide whether to invest in a particular molecule to develop a new drug before embarking on extremely costly activities such as animal or clinical testing.

    Responding to this question, however, I would like to add one caveat: all models are wrong, though some are useful. And I believe that's an important thing to remember. Ultimately, computational models are an aid to biology. They can help us make decisions faster. But at the end of the day, you'll have to go back to the lab to validate and confirm model predictions and ensure the error is tolerable.

    What are you hoping to do next in this area?

    My immediate goal is to create an in silico lung [14] that captures the physiology and pharmacology of the human lung. Via this multiscale framework, we can explore bottom-up, from the gene to the cell to the tissue to the organ level, how aberrant activities at the gene regulatory level scale up and lead to reduced lung function during asthma. The rationale is that understanding this multiscale disease biology will help us engineer or reverse engineer strategies that will silence or reverse these aberrant interactions.

    A more long-term aim is to use this in silico lung to create digital avatars of patients to accurately stratify patient response to therapy. Translating this technology into a pipeline will help clinicians tailor therapies to patients' profiles. I believe that, when functional, pharmaceutical companies will be able to use this approach for drug design and optimization by predicting which pharmacodynamic profiles will have the more desirable effect. However, this is a long-term objective that requires a lot of testing and experimentation.

    So, in summary, the overarching, long-term vision is to make personalized medicine available at clinical scale [15]. And key to that is packaging the pipeline in a way that ensures clinicians can easily operate it independently in a clinical setting.

    Financial & competing interests disclosure

    Financial support was received from the Royal Academy of Engineering (grant RF\201920\19\275) and Michael Smith Health Research BC (award 18427). The author has no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

    No writing assistance was utilized in the production of this manuscript.

    Open access

    This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/

    References

    • 1. European Lung Foundation. AirPROM. https://europeanlung.org/en/projects-and-campaigns/past-projects/airprom/
    • 2. Kaul H, Werschler N, Ross JD et al. Virtual cells in a virtual microenvironment recapitulate early development-like patterns in human pluripotent stem cell colonies. Stem Cell Rep. 18(1), 377–393 (2023).
    • 3. Tewary M, Ostblom J, Prochazka L et al. A stepwise model of reaction-diffusion and positional information governs self-organized human peri-gastrulation-like patterning. Development 144(23), 4298–4312 (2017).
    • 4. Kaul H, Ventikos Y. Investigating biocomplexity through the agent-based paradigm. Brief. Bioinformatics 16(1), 137–152 (2015).
    • 5. Saunders R, Kaul H, Berair R et al. DP2 antagonism reduces airway smooth muscle mass in asthma by decreasing eosinophilia and myofibroblast recruitment. Sci. Transl. Med. 11(479), eaao6451 (2019).
    • 6. Chachi L, Diver S, Kaul H et al. Computational modelling prediction and clinical validation of impact of benralizumab on airway smooth muscle mass in asthma. Eur. Respir. J. 54, 1900930 (2019).
    • 7. Saunders R, Kaul H, Berair R et al. Fevipiprant (QAW039) reduces airway smooth muscle mass in asthma via antagonism of the prostaglandin D2 receptor 2 (DP2). Am. J. Respir. Crit. Care Med. 207, A4677 (2023).
    • 8. ORCID: Daniel Aguilar-Hidalgo. https://orcid.org/0000-0003-1366-9574
    • 9. Tewary M, Ostblom J, Prochazka L et al. A stepwise model of reaction-diffusion and positional information governs self-organized human peri-gastrulation-like patterning. Development 144(23), 4298–4312 (2017).
    • 10. Warmflash A, Sorre B, Etoc F, Siggia ED, Brivanlou AH. A method to recapitulate early embryonic spatial patterning in human embryonic stem cells. Nat. Methods 11, 847–854 (2014).
    • 11. Tewary M, Dziedzicka D, Ostblom J et al. High-throughput micropatterning platform reveals nodal-dependent bisection of peri-gastrulation-associated versus preneurulation-associated fate patterning. PLoS Biol. 17(10), e3000081 (2019).
    • 12. Prochazka L, Michaels Y, Lau C et al. Synthetic gene circuits for cell state detection and protein tuning in human pluripotent stem cells. Mol. Syst. Biol. 18(11), e10886 (2022).
    • 13. ORCID: Ross Daniel Jones. https://orcid.org/0000-0003-0891-6761
    • 14. Laboratory for Multiscale Emergent Bioengineering. www.kaullab.com/
    • 15. Kaul H. Respiratory healthcare by design: Computational approaches bringing respiratory precision and personalised medicine closer to bedside. Morphologie 103(343), 194–202 (2019).