Starting in the spring of 2013, I videotaped the lectures for my MATH 676: Finite element methods in scientific computing course at the KAMU TV studio at Texas A&M. These are lectures on many aspects of scientific computing, software, and the practical aspects of the finite element method, as well as their implementation in the deal.II software library. Support for creating these videos was also provided by the National Science Foundation and the Computational Infrastructure in Geodynamics.

The videos are part of a broader effort to develop a modern way of teaching Computational Science and Engineering (CS&E) courses. If you are considering adapting our approach, you may want to read this paper I wrote with a number of education researchers about the structure of such courses and how they work.

Note 1: In some of the videos, I demonstrate code or user interfaces. If you can't read the text, change the video quality by clicking on the "gear" symbol at the bottom right of the YouTube player.

Note 2: deal.II is an actively developed library, and in the course of this development we occasionally deprecate and remove functionality. In some cases this means that the tutorial programs also change, but a video recorded years ago cannot reflect such changes. If in doubt, consult the current version of the tutorial.

Lecture 17: step-6: Adaptive meshes

This lecture demonstrates how one uses adaptive mesh refinement, along with the constraints discussed in the previous lecture, to solve the Laplace equation. We show the 10 or 15 lines of additional code necessary to deal with adaptive meshes; the remainder of the program is unchanged, despite the radically different nature of meshes in this situation.

In the code, I refine 30% of the cells and coarsen 3%. These "magic" numbers have a purpose: In 2d, if you refine a fraction x of your N cells, then each of these cells is replaced by its four children, so you will get a total of (1-x)N+4xN=(1+3x)N cells in the next round. x=30% therefore ensures that we get roughly twice as many cells the next time around. The fraction of 3% for coarsening is small enough that not many cells that were previously refined will be coarsened again, but large enough that we can undo our own mistakes.
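In deal.II, these two fractions appear as arguments to GridRefinement::refine_and_coarsen_fixed_number(). The following is a minimal sketch of a refine_grid() function in the spirit of step-6; it assumes the tutorial's class members triangulation, dof_handler, fe, and solution, and the exact argument lists may differ between library versions:

    #include <deal.II/grid/grid_refinement.h>
    #include <deal.II/numerics/error_estimator.h>
    // (plus the quadrature, vector, etc. headers already used in the tutorial)

    template <int dim>
    void Step6<dim>::refine_grid()
    {
      Vector<float> estimated_error_per_cell(triangulation.n_active_cells());

      // Compute a refinement indicator for each cell using the Kelly
      // error estimator (based on gradient jumps across cell faces).
      KellyErrorEstimator<dim>::estimate(dof_handler,
                                         QGauss<dim - 1>(fe.degree + 1),
                                         {},
                                         solution,
                                         estimated_error_per_cell);

      // Flag the 30% of cells with the largest indicators for refinement
      // and the 3% with the smallest for coarsening -- the "magic"
      // numbers discussed above.
      GridRefinement::refine_and_coarsen_fixed_number(triangulation,
                                                      estimated_error_per_cell,
                                                      0.30,
                                                      0.03);

      triangulation.execute_coarsening_and_refinement();
    }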

Of course, this then leads to the question of why it is useful to double the number of cells in each refinement cycle. To understand this, let's assume that we have M cells on the final mesh. Then we had roughly M/2, M/4, M/8, ... cells on the meshes of previous iterations. If we have a perfect solver in which it costs us O(M) operations to solve a linear system with M unknowns, then solving on a sequence of meshes that always double will cost us a total compute time of ...+O(M/8)+O(M/4)+O(M/2)+O(M)=O(2M) operations, or about twice as many as on the finest mesh alone -- not too large a penalty to pay, given that we have likely constructed a good mesh by adaptively refining.

On the other hand, if we choose a fraction x much smaller than 30% (i.e., we are very careful in refining the mesh), then we may have a slightly better mesh (i.e., M is smaller), but the sum above adds up to something larger. Conversely, if we choose x=1 (i.e., global refinement), then we will likely have a mesh that doesn't suit the solution at all (i.e., we need a very large M to get any kind of accuracy at all); since each mesh in 2d then has four times as many cells as the previous one, the sum for the work is ...+O(M/64)+O(M/16)+O(M/4)+O(M)=O(4M/3): even in the best of cases (if a globally refined mesh is what is needed), this is only one third faster than the adaptive refinement with x=30%.
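Written out as geometric series, the two cost estimates above are as follows (a LaTeX rendering of the sums already given in the text):

    % Adaptive refinement with x = 30%: each mesh has roughly half as
    % many cells as the next, so the total work is
    \sum_{k=0}^{\infty} O\!\left(\frac{M}{2^k}\right)
      = O\!\left(M \sum_{k=0}^{\infty} 2^{-k}\right) = O(2M)

    % Global refinement (x = 1) in 2d: each mesh has a quarter as many
    % cells as the next, so the total work is
    \sum_{k=0}^{\infty} O\!\left(\frac{M}{4^k}\right)
      = O\!\left(M \sum_{k=0}^{\infty} 4^{-k}\right) = O\!\left(\tfrac{4}{3}\,M\right)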

It is worth noting that the factor x that leads to a doubling of the number of cells is different in 3d: there, each refined cell is replaced by its eight children, so refining a fraction x of N cells yields (1-x)N+8xN=(1+7x)N cells, and doubling therefore requires x=1/7, or about 14%.

For these videos on the Chinese Bilibili site, please click here.

Slides: click here