Applied Math

Joint Inverse Problems/Data Sciences/Applied Math Seminar at Colorado State University

Thursday 3:30-4:30PM, Weber 223

[Back to Department of Mathematics]
[Back to Organizer]

Zoom Meeting link  
Meeting ID: 952 3553 8484

Fall 2023

Aug 31   Oct 19   Oct 26   Nov 09   Nov 16   Dec 07

Spring 2024

Jan 25   Feb 08   Feb 29   Mar 28   Apr 04   Apr 25
 
 
 
Aug 31   Back to top

Selected Topics in Inverse Problems, Data Sciences, and Applied Math

Margaret Cheney, Michael Kirby, and Wolfgang Bangerth

 
 
 
Oct 19   Back to top

Olivier Pinaud

Department of Mathematics, Colorado State University

Title  Instantaneous Time Mirrors and Time Reversal

Abstract  Instantaneous time mirrors (ITMs) were recently introduced by M. Fink and collaborators as a new avenue for time reversal. Time reversal allows for the focusing of waves, whether acoustic, electromagnetic, or elastic, and has found many important applications in, for instance, medical imaging, non-destructive testing, and telecommunications. The main practical difficulty of standard time reversal is the recording/reversal process, which requires a rather complex apparatus. ITMs, by contrast, offer a simplified experimental alternative that does not require any measurements, provided there is some control over the medium of propagation.

In this talk we will review the basics of the time reversal of waves, as introduced in the nineties, and discuss ITMs and some of their properties. In addition to the experimental setup proposed by M. Fink et al., we will describe another physically realizable system based on surface plasmons.
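
The central idea, that the wave equation can be run backwards so that a spread-out wave refocuses onto its source, can be seen in a few lines of numerics. The toy script below is my own illustration, not material from the talk: it propagates a 1D pulse with a leapfrog scheme and then "mirrors" time simply by swapping the two most recent time levels; an actual ITM instead reverses the wave by abruptly perturbing the medium at a single instant, which this sketch does not model.

```python
# Toy illustration of time-reversal invariance of the 1D wave equation.
import numpy as np

nx, nt, c = 400, 300, 1.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / c                              # CFL-stable time step

pulse = np.exp(-((x - 0.5) / 0.02) ** 2)
u_prev = pulse.copy()                          # initial condition: a pulse at rest
u_curr = pulse.copy()

def step(u_curr, u_prev):
    """One leapfrog step of u_tt = c^2 u_xx with homogeneous Dirichlet ends."""
    u_next = np.zeros_like(u_curr)
    lap = u_curr[:-2] - 2.0 * u_curr[1:-1] + u_curr[2:]
    u_next[1:-1] = 2.0 * u_curr[1:-1] - u_prev[1:-1] + (c * dt / dx) ** 2 * lap
    return u_next, u_curr

for _ in range(nt):                            # forward propagation: the pulse splits and spreads
    u_curr, u_prev = step(u_curr, u_prev)

u_curr, u_prev = u_prev, u_curr                # the "mirror": swap the two most recent time levels

for _ in range(nt):                            # the same forward march now refocuses the pulse
    u_curr, u_prev = step(u_curr, u_prev)

print("refocusing error:", np.max(np.abs(u_curr - pulse)))
```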

 
 
 
Oct 26   Back to top

Stephen Dauphin

Sandia National Laboratories

Title  Synthetic Aperture Radar Exploitation Techniques

Abstract  We will discuss a variety of radar work going on at Sandia and introduce internship and job opportunities.

 
 
 
Nov 09   Back to top

Tobias Weinzierl

Department of Computer Science, Durham University, UK

Title  Who said using GPUs was straightforward?

Abstract  With OpenMP and oneAPI's SYCL, we have two programming languages on the table which promise that you can write scientific codes once, while they run on both CPUs and GPUs. We report on our efforts to migrate ExaHyPE, a hyperbolic PDE solver which we use for earthquake simulations and gravitational wave research, onto GPUs.

We focus on three major challenges: the orchestration of compute steps on the GPU, memory management on GPUs, and the realisation of multitasking for tasks that can run either on a CPU or on an accelerator. For us, none of these three areas seems to work out-of-the-box with current technologies. For all the stumbling blocks that we encounter, we present workarounds and solutions. Yet we can only sketch ideas for how to answer the overarching research questions: Do these languages provide the right abstraction level for the realisation of modern numerical codes, do we have to rephrase key ingredients of scientific codes in different ways to make them fit for GPUs, and can the technologies on the table allow us to write performance-portable code and not only platform-portable realisations?



 
 
 
Nov 16   Back to top

Graham Harper

Sandia National Laboratories

Title  Compressed Finite Element Methods and Linear Solvers for Extreme-Scale Computing

Abstract  To push finite element method applications to larger scales, many have considered approaches such as sum factorization to combat the dimension-based exponent on the cost for both evaluating and storing finite element function data. While this allows one to solve higher dimensional problems more efficiently, we instead focus on structure detection and pattern analysis across mesh cells to reduce redundancy in finite element basis evaluations. We show that preprocessing a mesh for redundancy by considering the de Rham complex increases the speed of a simulation while allowing for larger simulations. We perform mathematical analysis to understand the error incurred in the operator by performing compression. We additionally perform similar compression and analysis for domain-decomposition based linear solvers. We provide numerical results demonstrating the effectiveness of our approach, including a Trilinos-based finite element simulation of 1 billion elements on a single computer node and timings from simulations on Sandia's supercomputers. We conclude by presenting goals for future directions and potential intern projects.
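
The compression idea rests on the observation that, on many meshes, most cells are geometrically identical up to translation, so the per-cell Jacobian and basis-evaluation data is heavily redundant. The numpy sketch below is my own toy illustration of that kind of cross-cell redundancy detection, not the actual Trilinos implementation; the names signatures and cell_to_pattern are hypothetical.

```python
# Toy redundancy detection: deduplicate per-cell geometry on a quad mesh.
import numpy as np

rng = np.random.default_rng(0)

# A small 2D quad "mesh": mostly translated copies of the unit cell, plus a few
# perturbed cells near a hypothetical geometric feature.
n_cells = 10_000
unit = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
cells = np.empty((n_cells, 4, 2))
for c in range(n_cells):
    offset = np.array([c % 100, c // 100], dtype=float)
    jitter = rng.normal(scale=0.05, size=(4, 2)) if c < 50 else 0.0
    cells[c] = unit + offset + jitter

# Translation-invariant signature of each cell (vertex coordinates relative to
# the first vertex), rounded so that cells that agree up to floating-point
# noise hash to the same key.
signatures = np.round(cells - cells[:, :1, :], decimals=12)

pattern_of = {}
cell_to_pattern = np.empty(n_cells, dtype=int)
for c in range(n_cells):
    cell_to_pattern[c] = pattern_of.setdefault(signatures[c].tobytes(), len(pattern_of))

print(f"{n_cells} cells, {len(pattern_of)} distinct cell shapes")
# Jacobians and basis evaluations now need to be computed and stored only once
# per distinct shape; cell_to_pattern maps each cell back to its data.
```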



 
 
 
Dec 07   Back to top

Jeonghun Lee

Department of Mathematics, Baylor University

Title  Robust finite element methods for poroelasticity and its coupled equations

Abstract  Poroelasticity equations arise from many applications in geophysics and biomechanics, so their numerical simulation is of great current interest. In this talk I will present advanced finite element methods for poroelasticity and related problems.

In the first part, I introduce a parameter-robust discretization of poroelasticity and explain that efficient preconditioners can easily be obtained for the system by the operator preconditioning approach. In the second part, I present hybridizable discontinuous Galerkin (HDG) methods for problems in which Stokes/Navier-Stokes equations and porous/poroelastic equations are coupled across an interface. The HDG methods give numerical solutions that preserve many important physical quantities, such as the compressibility of the fluid and of the poroelastic matrix, and the fluid mass in the poroelastic domain.

The talk is based on joint works with K.-A. Mardal (University of Oslo), M. E. Rognes (Simula Research Laboratory), A. Cesmelioglu (Oakland University), S. Rhebergen (University of Waterloo), and other collaborators.



 
 
 
Jan 25   Back to top

Daniel McKenzie

Department of Applied Mathematics & Statistics, Colorado School of Mines

Title  Sparse Gradients in Derivative-Free Optimization

Abstract  Derivative-Free Optimization (DFO) is concerned with minimizing a function without using derivatives, and is used in applications where computing gradients is impossible, intractable, or expensive. While DFO has classically been applied to problems with 10^3 variables or fewer, emerging applications in machine learning require solving DFO problems with 10^5 variables or more. In this talk I'll survey some of my recent work on extending DFO to this high-dimensional regime, primarily by exploiting sparsity in gradients to construct good gradient approximations cheaply.
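
As a concrete instance of this idea, the sketch below is my own toy construction, not the speaker's method: if the gradient has only s nonzero entries, a few hundred random finite differences suffice to recover it by sparse recovery, here via orthogonal matching pursuit. The helpers omp and sparse_grad_estimate are hypothetical names.

```python
# Sparse-gradient estimation from random finite differences (toy sketch).
import numpy as np

def omp(A, y, s):
    """Greedily recover an s-sparse g with A @ g ~= y (orthogonal matching pursuit)."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    g = np.zeros(A.shape[1])
    g[support] = coef
    return g

def sparse_grad_estimate(f, x, s, m, h=1e-6):
    """Estimate an s-sparse gradient of f at x from m random finite differences."""
    A = np.random.randn(m, x.size)                          # random sampling directions
    fx = f(x)
    y = np.array([(f(x + h * a) - fx) / h for a in A])      # directional derivatives
    return omp(A, y, s)

# Example: a 10^4-dimensional function whose gradient has only 5 active coordinates.
d, s = 10_000, 5
f = lambda x: np.sum(x[:s] ** 2)                  # gradient is 2*x on the first s coordinates
x0 = np.random.randn(d)
g_true = np.concatenate([2 * x0[:s], np.zeros(d - s)])
g_hat = sparse_grad_estimate(f, x0, s=s, m=200)   # 201 function evaluations instead of d + 1
print("relative error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```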

 
 
 
Feb 08   Back to top

Stephen Becker

Department of Applied Mathematics, University of Colorado Boulder

Title  Randomization methods for big-data

Abstract  In this era of big data, we must adapt our algorithms to handle large datasets. One obvious issue is that the number of floating-point operations (flops) increases as the input size increases, but there are many less obvious issues as well, such as the increased communication cost of moving data between different levels of computer memory. Randomization is increasingly being used to alleviate some of these issues, as those familiar with random mini-batch sampling in machine learning are well aware. This talk goes into some specific examples of using randomization to improve algorithms. We focus on special classes of structured random dimensionality reduction, including the CountSketch, the TensorSketch, the Kronecker fast Johnson-Lindenstrauss sketch, and preconditioned sampling. These randomized techniques can then be applied, for example, to speeding up the classical Lloyd's algorithm for K-means and to computing tensor decompositions. If time permits, we will also show extensions to optimization, including a gradient-free method that uses random finite differences and a method for solving semi-definite programs in an optimal low-memory fashion.
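
The CountSketch named in the abstract is simple enough to show in a few lines. The snippet below is a minimal illustration under my own naming, not code from the talk: each row of a tall matrix is hashed to one of m buckets with a random sign, so the sketch is formed in a single pass over the data and roughly preserves the Gram matrix.

```python
# Minimal CountSketch demonstration with numpy.
import numpy as np

def countsketch(X, m, rng):
    """Apply an m x n CountSketch to the rows of X (shape n x d) in one pass."""
    n, d = X.shape
    buckets = rng.integers(0, m, size=n)         # random hash of each row to a bucket
    signs = rng.choice([-1.0, 1.0], size=n)      # random sign per row
    SX = np.zeros((m, d))
    np.add.at(SX, buckets, signs[:, None] * X)   # accumulate signed rows per bucket
    return SX

rng = np.random.default_rng(0)
n, d, m = 100_000, 20, 5_000
X = rng.standard_normal((n, d)) @ rng.standard_normal((d, d))  # correlated columns

SX = countsketch(X, m, rng)
G, G_sketch = X.T @ X, SX.T @ SX                 # Gram matrix vs. its sketched estimate
print("relative error:", np.linalg.norm(G_sketch - G) / np.linalg.norm(G))
```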



 
 
 
Feb 29   Back to top

Shuhao Cao

School of Science and Engineering, University of Missouri-Kansas City

Title  Structure-conforming Operator Learning via Transformers

Abstract  GPT, Stable Diffusion, AlphaFold 2, etc.: all of these state-of-the-art deep learning models use a neural architecture called the "Transformer". Since the emergence of the "Attention Is All You Need" paper by Google, the Transformer has become the ubiquitous architecture in deep learning. At the Transformer's heart and soul is the "attention mechanism". In this talk, we shall dissect the attention mechanism through the lens of Galerkin methods. We shall give a specific example to try to answer a fundamental but critical question: whether and how one can benefit from the theoretical structure of a mathematical problem to develop task-oriented and structure-conforming deep neural networks. An attention-based deep direct sampling method is proposed for solving Electrical Impedance Tomography (EIT), a class of boundary value inverse problems. Progress within different communities will be briefly surveyed to address some open problems on the mathematical properties of the attention mechanism in Transformers. This is joint work with Ruchi Guo (UC Irvine) and Long Chen (UC Irvine).
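
For readers unfamiliar with the architecture, the snippet below is my own numpy paraphrase, not the speaker's code. It shows standard scaled dot-product attention alongside a softmax-free variant, Q(K^T V)/n, which admits a Galerkin-projection reading: the output lies in the span of the "value" vectors with coefficients given by query-key inner products. Normalization layers and learned weights are omitted.

```python
# Scaled dot-product attention and a softmax-free "Galerkin-style" variant.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Standard attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def galerkin_attention(Q, K, V):
    """Softmax-free variant: Q (K^T V) / n, linear in the sequence length n."""
    n = Q.shape[0]
    return Q @ (K.T @ V) / n

n, d = 256, 64                                   # n grid points / tokens, d channels
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(attention(Q, K, V).shape, galerkin_attention(Q, K, V).shape)
```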



 
 
 
Mar 28   Back to top

Teemu Saksala

Department of Mathematics, North Carolina State University

Title  Hyperbolic inverse problems with time dependent and time independent coefficients

Abstract  In this talk we will begin by surveying some known results for inverse problems for hyperbolic PDEs. In particular, we will discuss the differences in the methodology for determining time-dependent and time-independent coefficients appearing in a hyperbolic equation on a Riemannian manifold. The second part of the talk is based on two recent research projects: 1) We will prove that under certain geometric assumptions the knowledge of a partial Cauchy data set uniquely determines time-dependent lower order coefficients appearing in a hyperbolic initial/boundary value problem. 2) We will prove that a local source-to-solution map of a hyperbolic partial differential operator on a complete Riemannian manifold (without boundary, and possibly non-compact) determines a) the topology and the geometry of the manifold uniquely and b) the lower order time-independent coefficients up to a natural obstruction.

This talk is based on joint works with Boya Liu (NC State University), Andrew Shedlock (NC State University) and Lili Yan (University of Minnesota).



 
 
 
Apr 04   Back to top

Elaine Spiller

Department of Mathematical and Statistical Sciences, Marquette University

Title  A surrogate-based strategy for analyzing and forecasting geophysical hazards

Abstract  Geophysical natural hazards (storm surge, post-fire debris flows, volcanic flows and ash fall, etc.) impact thousands to millions of people annually. Yet the most devastating hazards, those resulting in loss of life and property, are often both geographically and temporally localized. Thus, they are effectively rare events to those impacted.

We will present methodology to produce probabilistic hazard maps that can rapidly be updated to account for various aleatoric scenarios and epistemic uncertainties. This hazard analysis utilizes stochastic emulators to combine computationally expensive simulations of the underlying geophysical processes with probabilistic descriptions of uncertain scenarios and model parameters. The end goal is not a map, but a family of maps that represent how a hazard threat evolves under different assumptions or different potential future scenarios. Further, this approach allows us to rapidly update hazard maps as new data or precursor information arrives.
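
To make the emulator-based workflow concrete, the toy script below is an illustration under assumed names and a made-up one-parameter "simulator", not the speakers' code: it fits a Gaussian-process emulator to a handful of expensive runs, then propagates an assumed input distribution through the cheap emulator to estimate an exceedance probability, the basic ingredient of a probabilistic hazard map.

```python
# Gaussian-process emulator + Monte Carlo exceedance probability (toy sketch).
import numpy as np

def expensive_simulator(theta):              # stand-in for a costly geophysical flow model
    return np.sin(3 * theta) + 0.5 * theta ** 2

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Train the emulator on a few "expensive" runs at design points.
theta_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_simulator(theta_train)
K = rbf(theta_train, theta_train) + 1e-8 * np.eye(theta_train.size)
alpha = np.linalg.solve(K, y_train)

def emulator_mean(theta):                    # GP posterior mean, cheap to evaluate
    return rbf(theta, theta_train) @ alpha

# Propagate an uncertain scenario: theta ~ N(1.0, 0.3^2); "hazard" = output > 1.0.
rng = np.random.default_rng(0)
theta_samples = rng.normal(1.0, 0.3, size=100_000)
p_exceed = np.mean(emulator_mean(theta_samples) > 1.0)
print("estimated exceedance probability:", p_exceed)
```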



 
 
 
Apr 25   Back to top

Zhuoran Wang

Department of Mathematics, University of Kansas

Title  Full weak Galerkin FEMs with BDF time-discretization for linear and nonlinear poroelasticity problems

Abstract  We aim to develop full weak Galerkin (WG) schemes to address both linear and nonlinear poroelasticity problems. This involves discretizing both the Darcy pressure and the elasticity using the WG finite element method. We establish discrete weak gradients for both pressure and solid displacement in the Arbogast-Correa space and obtain penalty-free schemes. The fully discrete system is formulated using backward differentiation formulas (BDFs). To linearize the nonlinear cases, in which the permeability depends on dilation, we use Picard iterations. Numerical experiments are conducted to validate the accuracy and the locking-free property of the new solvers. This work is a collaboration with Dr. James Liu and Dr. Simon Tavener from Colorado State University, as well as Dr. Ruishu Wang from Jilin University (China).
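
As a scalar analogue of the time-stepping strategy described in the abstract (not the paper's weak Galerkin scheme), the sketch below applies BDF2 in time to y' = -k(y) y and freezes the nonlinear coefficient k at the previous Picard iterate, so each step reduces to a linear solve; the coefficient k, the time step, and the model problem are all assumptions for illustration.

```python
# BDF2 time stepping with Picard linearization of a solution-dependent coefficient.
import numpy as np

def k(y):                                  # nonlinear coefficient (stand-in for permeability)
    return 1.0 + y ** 2

dt, T = 0.01, 2.0
steps = int(T / dt)
y = np.zeros(steps + 1)
y[0] = 1.0
y[1] = y[0] / (1.0 + dt * k(y[0]))         # one semi-implicit Euler step to bootstrap BDF2

for n in range(1, steps):
    y_new = y[n]                           # Picard initial guess
    for _ in range(50):                    # Picard iteration
        # BDF2: (3 y^{n+1} - 4 y^n + y^{n-1}) / (2 dt) = -k(y_old) y^{n+1},
        # with k frozen at the previous iterate, so the update is a linear solve.
        y_next = (4 * y[n] - y[n - 1]) / (3.0 + 2.0 * dt * k(y_new))
        if abs(y_next - y_new) < 1e-12:
            break
        y_new = y_next
    y[n + 1] = y_new

print("y(T) ~", y[-1])
```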