Starting in the spring of 2013, I videotaped the lectures for my MATH 676: Finite element methods in scientific computing course at the KAMU TV studio at Texas A&M. These are lectures on many aspects of scientific computing, software, and the practical aspects of the finite element method, as well as their implementation in the deal.II software library. Support for creating these videos was also provided by the National Science Foundation and the Computational Infrastructure in Geodynamics.

The videos are part of a broader effort to develop a modern way of teaching Computational Science and Engineering (CS&E) courses. If you are interested in adapting our approach, you may be interested in this paper I wrote with a number of education researchers about the structure of such courses and how they work.

Note 1: In some of the videos, I demonstrate code or user interfaces. If you can't read the text, change the video quality by clicking on the "gear" symbol at the bottom right of the YouTube player.

Note 2: deal.II is an actively developed library, and in the course of this development we occasionally deprecate and remove functionality. In some cases, this implies that we also change tutorial programs, but the nature of videos is that this is not reflected in something that may have been recorded years ago. If in doubt, consult the current version of the tutorial.

Lecture 17.5: Generating adaptively refined meshes: Which cells to refine

Lecture 17.25 gave us a way to estimate or approximate the size of the error between the exact and discrete solutions on each cell. What remains for a viable mesh refinement criterion is a rule that helps us decide which cells to refine based on these indicators. These will presumably be the cells with the largest indicators, but it is not a priori clear how many of these cells we should refine.
This lecture introduces different refinement criteria – in particular the "fixed fraction" or "bulk marking" strategy, and the "fixed number" strategy. It also touches on a variation used for time-dependent problems. Finally, I talk about two different viewpoints on what it means to create an adaptive mesh: one is to vastly increase accuracy at little additional cost; the other is to vastly reduce the computational cost at little loss of accuracy.
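To make the two marking strategies concrete, here is a minimal, library-independent sketch in Python. It assumes we already have a vector of per-cell error indicators (for example, from a Kelly-type estimator); the function names and the particular convention of summing squared indicators for the bulk criterion are illustrative choices, not deal.II's actual interface (in deal.II, see GridRefinement::refine_and_coarsen_fixed_number and GridRefinement::refine_and_coarsen_fixed_fraction).

```python
import numpy as np

def mark_fixed_number(indicators, refine_fraction):
    """'Fixed number' strategy: refine a fixed fraction of the *cells*,
    namely the ones with the largest error indicators."""
    indicators = np.asarray(indicators, dtype=float)
    n_refine = int(round(refine_fraction * len(indicators)))
    order = np.argsort(indicators)[::-1]          # cell indices, largest indicator first
    return set(order[:n_refine].tolist())

def mark_fixed_fraction(indicators, energy_fraction):
    """'Fixed fraction' / bulk (Doerfler) marking: refine the smallest set of
    cells whose squared indicators account for the given fraction of the
    total estimated (squared) error."""
    indicators = np.asarray(indicators, dtype=float)
    order = np.argsort(indicators)[::-1]          # largest first
    squared = indicators[order] ** 2
    cumulative = np.cumsum(squared)
    target = energy_fraction * squared.sum()
    # Smallest prefix of the sorted cells whose cumulative squared
    # indicators reach the target fraction:
    n_refine = int(np.searchsorted(cumulative, target) + 1)
    return set(order[:n_refine].tolist())

# Hypothetical indicators for a 5-cell mesh:
eta = [0.9, 0.1, 0.2, 0.8, 0.05]
print(mark_fixed_number(eta, 0.4))     # refine 40% of the cells
print(mark_fixed_fraction(eta, 0.8))   # refine cells carrying 80% of the error
```

Note the difference in character: the fixed-number strategy gives direct control over mesh growth (and therefore cost per step), while the bulk strategy gives control over how much of the estimated error is addressed, at the price of a less predictable number of refined cells.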