Facilitators: Cassandra Toy (Microsoft) and Gowri Srinivasan (LANL)
Artificial Intelligence and Machine Learning are revolutionizing problem solving across many domains, including computational mechanics. We have already seen preliminary successes in applying machine learning techniques within the computational sciences, such as surrogate models for fast evaluation, improved accuracy in reduced-order modeling, and simplified workflows. The initial promise of machine learning in delivering better and faster solutions to physics problems offers hope for many future technical advances. We focus on three areas related to this broad theme:
- Interpretability: One of the main reasons ML has had limited traction in the scientific community is the lack of direct interpretability of its results. While certain methods such as Random Forests are amenable to preserving physical features, deep learning methods such as neural networks tend to obscure the reasoning behind a response. Being able to explain the outputs of a machine learning algorithm goes a long way toward lending credibility to these methodologies and is an active research area (a brief feature-importance sketch is given after this list).
Furthermore, with the increasing use of ML models in important decision-making processes in industry, the GDPR accountability principle defined in Article 5.2 states that a data controller “must be able to demonstrate that personal data are processed in a transparent manner in relation to the data subject.” Since certain classes of ML models are “black box” systems, building “interpretable” models has become a requirement under the GDPR.
- Multi-scale modeling: The modeling community has, in many cases, developed accurate first-principles physics models of processes at lower length and time scales that influence behavior at the higher scales of interest. However, it is not computationally feasible to incorporate all of the lower-length-scale physics directly into upscaled models. As a result, predictions from simulations often fail to match experimental data collected at the higher length scales. This is an area where machine learning methods offer the promise of several orders of magnitude of speedup by mimicking the lower-length-scale physics accurately. The use of ML in upscaling is currently being widely explored, with promising preliminary results in modeling physical processes (a surrogate-model sketch illustrating this idea follows the list).
- Quantum optimization and heterogeneous architectures: The hope of quantum optimization is to speed up computations that may not be feasible on a classical computer. Quantum computing systems derive their theoretical advantage from quantum principles and phenomena such as superposition of qubits, the tunneling exploited in quantum annealing, and entanglement for quantum information processing. This area has given rise to new types of computing hardware and tooling, such as heterogeneous quantum architectures that can execute algorithms containing both classical and quantum logic, D-Wave's quantum annealers, and quantum programming languages and frameworks such as Microsoft's Q#, Google's Cirq, and IBM's Qiskit (a small Cirq example appears below).
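As a concrete illustration of the interpretability point above, the following is a minimal Python sketch (using scikit-learn on purely synthetic data, not any application discussed here) of inspecting which inputs a Random Forest actually relies on, via impurity-based and permutation feature importances.

```python
# Minimal interpretability sketch: feature importances of a Random Forest.
# The data are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))  # 4 hypothetical input parameters
y = 3.0 * X[:, 0] + np.sin(4.0 * X[:, 1]) + 0.1 * rng.normal(size=500)  # only 2 matter

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances come directly from the fitted trees...
print("impurity-based importances:", model.feature_importances_)

# ...while permutation importance measures the drop in score when a feature is
# shuffled, often a more faithful indicator of what the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("permutation importances:  ", result.importances_mean)
```

Either importance measure should single out the first two inputs, which is the kind of physically checkable statement that helps build trust in a learned model.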
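For the multi-scale theme, the sketch below shows the surrogate-modeling pattern in its simplest form: a stand-in `fine_scale_model` function (an assumption for illustration, not a real solver) plays the role of an expensive lower-length-scale computation, a small neural network is fit to a batch of its outputs, and the surrogate is then queried in place of the solver.

```python
# Minimal surrogate-modeling sketch: replace an expensive fine-scale computation
# with a cheap learned regressor. `fine_scale_model` is a hypothetical stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fine_scale_model(params):
    # Placeholder for an expensive micro-scale computation (e.g., a homogenized property).
    return np.sin(params[:, 0]) * np.exp(-params[:, 1]) + 0.5 * params[:, 2] ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(size=(200, 3))   # sampled micro-scale inputs
y_train = fine_scale_model(X_train)    # expensive evaluations, done offline

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1)
surrogate.fit(X_train, y_train)

# Inside the coarse-scale simulation, each call to the surrogate replaces a full
# fine-scale solve, which is where the orders-of-magnitude speedup comes from.
X_query = rng.uniform(size=(5, 3))
print(surrogate.predict(X_query))
print(fine_scale_model(X_query))       # reference values for comparison
```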
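Finally, as a small taste of the quantum tooling mentioned above, the following Cirq snippet builds and simulates a two-qubit Bell-state circuit, exercising superposition and entanglement; it is a toy example, not a quantum-optimization workload.

```python
# Minimal Cirq sketch: a two-qubit Bell-state circuit illustrating
# superposition and entanglement.
import cirq

q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(q0),                    # put q0 into a superposition
    cirq.CNOT(q0, q1),             # entangle q0 and q1
    cirq.measure(q0, q1, key="m"),
)

result = cirq.Simulator().run(circuit, repetitions=100)
print(circuit)
print(result.histogram(key="m"))   # ideally only the 00 and 11 outcomes appear
```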