
Improbable Defence’s research efforts aim to enable a new generation of multi-domain, multi-use synthetic environments that can bolster the security and national resilience of the UK and our allies.
Over the coming weeks and months, we’ll be taking a closer look at some of the most intractable problems that both Improbable Defence and our partners face, from model composition and integration to calibration and validation, and digging into the original research we’re conducting in order to solve them.
Synthetic environments (SEs) are a means by which we can glean useful insight about the real world by building, adjusting and integrating models in a virtual one. But the complexity of the modern world is such that we can’t model everything all at once and hope to get the answers we want.
Instead, composing SEs that allow decision takers, policy makers and personnel to make decisions at the needed pace means modelling what matters. We must work in an agile way to rapidly identify key areas of focus, creating an SE that contains the elements of a system that really matter. If a decision maker wants to quickly find out the likely effects of flash flooding on transport critical national infrastructure (CNI), they’ll need to combine a traffic model and a weather model. Then they’ll need to home in on this system and its models, interrogate them, zoom out, adjust and repeat to get the data that’ll lead to more insightful decision making.
The natural response to this process, from both an engineering and a scientific perspective, is to break models down into smaller, separate components before using these to compose a synthetic environment. Better still, if a user can pull these models off the shelf and tweak them to fit, it becomes that much easier to rapidly compose the required SE.
This isn’t a strange concept; it’s a common way to establish trust and validity from the outset. Computer programmers build on programming languages and pull together libraries to create their tools. Engineers start with components and materials they know and understand, so they can work with a greater degree of confidence. Even businesses draw best-practice policy and advice from different sources to operate most effectively.
Model integration is all about ensuring that models, when broken down and recombined as in the above examples, talk to each other in a rational, expected way.
Making models interact to give users the desired results is a challenge
What we can’t do is build models in isolation and expect them to work together. Take the weather and traffic models I mentioned previously. Both need to interact to provide accurate and useful insight, while also respecting any confidentiality or regulatory obligations of the participating data providers. If the weather model produces rain, we’d expect people to drive more slowly due to wet road surfaces. Flooding may cut off some roads entirely, leading to congestion elsewhere. Extreme heat may cause more cars to break down, and so on.
But whether the modellers who produced these models thought about and understood the nuances of these interactions is an open question, and there’s no guarantee they built the models in a way that allows the weather to affect the traffic. The easiest approach is simply to run the models independently of each other, but this keeps them in silos and denies them the required interactions, as the sketch below illustrates.
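To make the difference concrete, here’s a minimal Python sketch of siloed versus coupled execution. The model interfaces are entirely hypothetical, invented for illustration rather than reflecting any real Improbable API:

```python
# A minimal, hypothetical sketch: neither class reflects a real
# Improbable API; they exist only to contrast siloed and coupled runs.

class WeatherModel:
    def step(self, t: int) -> dict:
        # Toy weather: a heavy storm between timesteps 6 and 9.
        rainfall = 10.0 if 6 <= t <= 9 else 0.0
        return {"rainfall_mm_per_h": rainfall}

class TrafficModel:
    def step(self, t: int, weather: dict | None = None) -> dict:
        speed = 100.0  # free-flow speed in km/h
        if weather is not None:
            rain = weather["rainfall_mm_per_h"]
            if rain > 8.0:
                speed *= 0.5  # flooding and closures halve average speed
            elif rain > 0.0:
                speed *= 0.8  # wet surfaces slow drivers by roughly 20%
        return {"mean_speed_kmh": speed}

weather, traffic = WeatherModel(), TrafficModel()

# Siloed: the traffic model never sees the rain, so its output is constant.
siloed = [traffic.step(t) for t in range(12)]

# Coupled: each timestep's weather state feeds the traffic model.
coupled = [traffic.step(t, weather.step(t)) for t in range(12)]
```

In the siloed run the traffic model reports the same speed at every timestep; in the coupled run the morning storm slows and then floods the network. That second behaviour is exactly the kind of interaction an integrated SE needs to surface.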
When we talk about model integration, what we’re instead asking is: how do we make models work together? How do we develop and then combine models in a way that is both computationally efficient (performance matters, whether measured as the time taken to reach a decision or as the cost of running the model) and scientifically valid? Model integration also introduces new challenges in calibration, validation and quantification of uncertainty, which must be addressed to ensure that any conclusions arising from the model aren’t misleading.
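As a toy illustration of the uncertainty point, and assuming nothing about how our actual tooling works, one simple approach is to propagate input uncertainty through a composed model by Monte Carlo sampling rather than reporting a single point estimate:

```python
import random

# A hypothetical composed model: rainfall forecast -> mean traffic speed.
def composed_model(rainfall_mm_per_h: float) -> float:
    if rainfall_mm_per_h > 8.0:
        return 50.0   # flooding
    if rainfall_mm_per_h > 0.0:
        return 80.0   # wet roads
    return 100.0      # free flow

# Propagate uncertainty in the forecast through the composition by
# sampling many plausible rainfall values instead of one best guess.
samples = [composed_model(max(0.0, random.gauss(7.0, 2.0)))
           for _ in range(10_000)]
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)) ** 0.5
print(f"mean speed {mean:.1f} km/h, std {std:.1f} km/h")
```

Even this crude spread tells a decision maker something a single number can’t: whether the forecast sits comfortably in one regime or teeters between wet roads and outright flooding.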
Modelling is complex and often messy, much like the real world
Decisions aren’t made in the simple, linear way many assume. Instead, the decision-making process often resembles a spiral, where answering part of the question leads to subsequent questions that refine or narrow down the answer. We have to account for this in modelling and simulation. The ideal is a model that isn’t just a snippet of code for calculating something, but one that also reflects the assumptions, considerations and understanding that went into its creation.
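One way to picture that ideal, as a sketch rather than a description of our platform, is a model object that carries its assumptions and validated input range alongside the code that does the calculating. The field names here are illustrative only:

```python
from dataclasses import dataclass, field
from typing import Callable

# A sketch of a model that carries its assumptions and validated input
# range alongside the code. These field names are illustrative only.

@dataclass
class DocumentedModel:
    name: str
    compute: Callable[[float], float]
    assumptions: list[str] = field(default_factory=list)
    valid_input_range: tuple[float, float] = (float("-inf"), float("inf"))

    def run(self, x: float) -> float:
        lo, hi = self.valid_input_range
        if not lo <= x <= hi:
            raise ValueError(
                f"{self.name}: input {x} is outside the validated "
                f"range [{lo}, {hi}]"
            )
        return self.compute(x)

traffic = DocumentedModel(
    name="traffic-speed",
    compute=lambda rain_mm_per_h: 100.0 * (0.8 if rain_mm_per_h > 0 else 1.0),
    assumptions=["free-flow speed is 100 km/h", "uniform driver behaviour"],
    valid_input_range=(0.0, 50.0),  # calibrated only up to 50 mm/h of rain
)
```

A model packaged this way can refuse inputs it was never calibrated for, and anyone composing it can read its assumptions instead of guessing at them.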
The research team at Improbable aims to capture this. We’re concerned with facilitating the rapid composition of synthetic environments to help decision takers, policy makers and personnel tackle challenges as quickly as they appear. This means pushing the scientific boundaries of how to rapidly and effectively integrate the most relevant existing models, regardless of who supplies them.
To this end, we’re developing a range of new approaches to rapidly integrate relevant models into an SE, so that they can run at the scale, speed and fidelity needed to more realistically reflect the complexity of the modern world.
This area of research means addressing the question: how can we make the process of composing models together easy, repeatable and effective? We want models to run fast once they’re composed. We want them to be easy to compose. And we want to make sure we’re not misleading users or breaking models by combining them incorrectly.
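A small sketch of what “not breaking models by combining them incorrectly” can mean in practice: check, at composition time, that the upstream model’s outputs match the variable names and units the downstream model expects. The port descriptions below are hypothetical, not a real standard:

```python
# A sketch of a compatibility check at composition time: before wiring
# model A's output into model B, verify that variable names and units
# line up. These port descriptions are hypothetical, not a real standard.

weather_ports = {"outputs": {"rainfall": "mm/h"}}
traffic_ports = {"inputs": {"rainfall": "mm/h"}}

def check_compatible(upstream: dict, downstream: dict) -> list[str]:
    problems = []
    for var, unit in downstream["inputs"].items():
        if var not in upstream["outputs"]:
            problems.append(f"missing upstream output '{var}'")
        elif upstream["outputs"][var] != unit:
            problems.append(
                f"unit mismatch on '{var}': upstream produces "
                f"{upstream['outputs'][var]}, downstream expects {unit}"
            )
    return problems

issues = check_compatible(weather_ports, traffic_ports)
if issues:
    raise RuntimeError("cannot compose models: " + "; ".join(issues))
```

Real model integration needs far richer contracts than this (coordinate systems, timestep semantics, confidentiality constraints), but even a simple check catches the silent unit mismatches that make composed results quietly wrong.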
Model integration is a big but worthwhile challenge
There’s no doubt model integration is a very hard thing to get right, but the value of doing so is huge for the global modelling and simulation community. That’s why we’re building our business strategy around making it work, and why we’re investing research effort in this area by putting our brightest minds on the problem.
But it’ll take more than just us. Looking ahead, we’ll be collaborating with experts from the academic community as part of the Myridian programme to build a suite of standardised, pre-written models that interact in different ways. With the help of academia we’ll explore different techniques, establish where the pros and cons lie for each, and stress-test various approaches to model integration that will not only provide a baseline for future experiments, but also challenge some of our assumptions and push our research forward.
In Part One, Accelerating synthetic environment creation to support national resilience and security, Rob Solly, Director of Research Partnerships, discusses the need to accelerate the process of creating and deploying synthetic environments and the importance of partnerships in achieving this.
Keep up to date with the latest R&D via our Research page. Want to find out more about our work and network of commercial and research partners? Contact RobSolly[@]improbable.io for more information.