Abstract

We computationally explore the dynamics of superconductivity near the superheating field in two ways. First, we use a finite element method to solve the time-dependent Ginzburg-Landau equations of superconductivity. We present a novel way to evaluate the superheating field Hsh and the critical mode that leads to vortex nucleation using saddle-node bifurcation theory. We simulate how surface roughness, grain boundaries, and islands of deficient Sn change those results in two and three spatial dimensions. We study how AC magnetic fields and heat waves affect vortex motion. Second, we use automatic differentiation to abstract away the details of deriving the equations of motion and stability for Ginzburg-Landau and Eilenberger theory. We present calculations of Hsh and the critical wavenumber using linear stability analysis.
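The saddle-node criterion used above can be illustrated on a one-variable toy model (not the full TDGL system): follow the stable branch of fixed points by numerical continuation as a control field h increases, and detect where the branch loses stability and disappears. The cubic force law, grid, and tolerances below are hypothetical stand-ins chosen for illustration.

```python
import numpy as np

# Toy dynamics x' = f(x; h) with a saddle-node bifurcation at
# h_c = 2 / (3*sqrt(3)): the stable fixed point (playing the role of the
# Meissner state) merges with an unstable one and vanishes as h grows.
def f(x, h):
    return x - x**3 - h

def df(x, h):                        # stability eigenvalue (scalar Jacobian)
    return 1.0 - 3.0 * x**2

def fixed_point(h, x0, tol=1e-12, maxit=100):
    x = x0                           # Newton's method on f(x; h) = 0
    for _ in range(maxit):
        d = df(x, h)
        if d == 0.0:
            return None
        step = f(x, h) / d
        x -= step
        if abs(step) < tol:
            return x
    return None

h_c_exact = 2.0 / (3.0 * np.sqrt(3.0))   # analytic saddle-node location
x, h_crit = 1.0, None
for h in np.linspace(0.0, 0.5, 501):     # continuation in the control field
    x_new = fixed_point(h, x0=x)
    # The branch ends when Newton fails, jumps to a different branch,
    # or the eigenvalue reaches zero (loss of stability).
    if x_new is None or x_new < 0.0 or df(x_new, h) >= 0.0:
        h_crit = h
        break
    x = x_new
```

In the full problem the scalar Jacobian becomes the linearization of the TDGL equations about the Meissner state, and the zero crossing of its leading eigenvalue identifies both Hsh and the critical mode.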

Abstract

Superconducting Radio Frequency (SRF) cavities are important components of particle accelerators. SRF cavity performance is limited by a maximum allowed applied magnetic field, known as the superheating field ($H_{\rm sh}$), at which magnetic vortices spontaneously enter the material and cause the superconducting material to quench. Previous work has calculated the theoretical maximum field a superconductor can withstand; however, that calculation assumed a perfectly smooth surface with no material inhomogeneities or surface roughness. Real-world cavities are polycrystalline (typically Nb or Nb$_3$Sn) and exhibit surface defects near grain boundaries, and cavity preparation methods introduce further material inhomogeneities. I use time-dependent Ginzburg-Landau theory and finite element methods to model the role of surface defects and material inhomogeneities in magnetic vortex nucleation. Results show that the amount by which $H_{\rm sh}$ is reduced depends on the concentration of impurities as well as the physical dimensions of the defect. Reducing the size of grain boundaries and the material inhomogeneities found therein has the potential to significantly increase SRF cavity performance.

Abstract

In 1952 Hodgkin and Huxley formulated the fundamental biophysical model of how neurons integrate input and fire electric spikes. With 25 parameters and 4 dynamical variables, the model is quite complex. Using information theory, we analyze the model's complexity and demonstrate that it is unnecessarily complex for many neural modeling tasks. Using the manifold boundary approximation method of model reduction, we perform a series of parameter reductions on the original 25-parameter model and create a series of spiking Hodgkin-Huxley models, each with fewer parameters. We analyze the physical meaning of some key approximations uncovered by our systematic reduction methods, which are "blind" to the real physical processes the model is intended to capture. We then evaluate the behavior of the most greatly reduced 14-parameter model under different experimental conditions, including networks of neurons. We also discuss new questions that have arisen as a result of our work.
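For readers unfamiliar with the starting point, the standard textbook four-variable Hodgkin-Huxley model (classic squid-axon parameters, not the 25-parameter variant analyzed in this work) can be simulated in a few lines; a step of applied current produces repetitive spiking:

```python
import numpy as np

# Classic Hodgkin-Huxley model (squid axon, -65 mV resting convention).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387            # reversal potentials, mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext, T=50.0, dt=0.01):
    """Forward-Euler integration; returns the number of spikes in T ms."""
    V = -65.0
    # Start the gating variables at their resting steady states.
    m = a_m(V) / (a_m(V) + b_m(V))
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
        if V > 0.0 and not above:     # count upward crossings of 0 mV
            spikes += 1
        above = V > 0.0
    return spikes

n_spikes = simulate(I_ext=10.0)       # step current in uA/cm^2
```

Counting the parameters in the rate functions and conductances above is what yields the 25-parameter tally that the reduction methods then whittle down.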

Katrina Lynn Pedersen (Master's Thesis, December 2018, Advisor: Mark Transtrum)

Abstract


Abstract

Many-parameter models of complex systems are ubiquitous, yet often difficult to interpret. To gain insight, these models are often simplified, sacrificing some of their global scope as well as versatility. The task of finding a model that balances these features is of particular interest in statistical mechanics. Our group addresses the problem through a novel approach, the Manifold Boundary Approximation Method (MBAM). As the central step of this approach, we interpret models geometrically as manifolds. Many model manifolds have a set of boundary cells arranged in a hierarchy of dimension. Each of these boundary cells is itself a manifold corresponding to a simpler version of the original model, with fewer parameters. Thus, a complete picture of all the manifold's boundary cells, called the boundary complex, yields a corresponding family of simplified models. It also characterizes the relationships among the extreme behaviors of the original model, as well as relationships among minimal models that relate subsets of these extreme behaviors. This global picture of the boundary complex is termed the model's manifold topology. Beginning in the context of statistical mechanics, this thesis defines a class of models, Superficially Determined Lattice (SDL) models, whose manifold topologies can be ascertained algorithmically. This thesis presents two algorithms. Given an SDL model, the Reconstruction Algorithm determines its manifold topology from minimal information. Given a model and a set of desired extreme behaviors, the Minimal Model Algorithm finds the simplified model (with the fewest parameters) that interpolates between all of the behaviors.
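The geometric starting point, treating the map from parameters to predictions as a manifold whose metric is the Fisher information, can be illustrated on a hypothetical two-exponential model (not one of the SDL models defined in this work). The widely spread eigenvalues, often called sloppiness, are what make boundary approximations effective:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 20)          # observation times

def model(theta):
    """Sum of two decaying exponentials with rates theta[0], theta[1]."""
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def jacobian(theta, eps=1e-6):
    """Central finite-difference Jacobian of predictions w.r.t. parameters."""
    J = np.empty((t.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy(); tp[j] += eps
        tm = theta.copy(); tm[j] -= eps
        J[:, j] = (model(tp) - model(tm)) / (2.0 * eps)
    return J

theta = np.array([1.0, 1.2])           # nearly degenerate rates
J = jacobian(theta)
fim = J.T @ J                          # Fisher information metric (Gaussian noise)
evals = np.linalg.eigvalsh(fim)        # ascending eigenvalues
ratio = evals[-1] / evals[0]           # spread spans orders of magnitude
```

The small-eigenvalue direction points toward a boundary of the manifold; following it to the edge collapses the two exponentials into one, which is exactly the kind of reduced model an MBAM step produces.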

Abstract

Using a finite element method, we numerically solve the time-dependent Ginzburg-Landau equations of superconductivity to explore vortex nucleation in type II superconductors. We consider a cylindrical geometry and simulate the transition from a superconducting state to a mixed state. Using saddle-node bifurcation theory, we evaluate the superheating field for a cylinder. We explore how surface roughness and thermal fluctuations influence vortex nucleation. This allows us to simulate material inhomogeneities that may lead to instabilities in the superconducting radio-frequency cavities used in particle accelerators.

Abstract

Adaptation is an important biological function that can be achieved through networks of enzyme reactions. These networks can be modeled by systems of coupled differential equations. There has been recent interest in identifying what aspects of a network allow it to achieve adaptation. We ask what design principles are necessary for a network to adapt to an external stimulus. We use an information-geometric approach that begins with a fully connected network and uses automated model reduction to remove unnecessary combinations of components, effectively constructing and tuning the network to the simplest form that can still achieve adaptation. We interpret the simplified network and the combinations of parameters that arise in our model reduction to identify minimal mechanisms of adaptation in enzyme networks, and we consider applications of these methods to other fields.
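As a concrete instance of the target behavior (a hypothetical minimal motif, not necessarily the network our reduction produces), an incoherent feedforward loop in which a slow sensor x divides out the input u responds transiently to a step in u and then returns its output y to baseline:

```python
import numpy as np

def simulate(u_seq, dt=0.01, a=1.0, b=5.0):
    """Euler-integrate a two-variable incoherent feedforward loop.

    x' = a*(u - x)      slow sensor tracks the input
    y' = b*(u/x - y)    fast output compares input to sensor
    Steady state is y = 1 for any constant u, i.e. perfect adaptation.
    """
    x, y = 1.0, 1.0                  # start at the pre-stimulus steady state
    ys = []
    for u in u_seq:
        x += dt * a * (u - x)
        y += dt * b * (u / x - y)
        ys.append(y)
    return np.array(ys)

n = 2000                             # 20 time units at dt = 0.01
u_seq = np.concatenate([np.ones(n), 2.0 * np.ones(n)])   # step u: 1 -> 2
ys = simulate(u_seq)
peak = ys[n:].max()                  # transient response to the step
final = ys[-1]                       # relaxes back to the baseline y = 1
```

The sensitivity of the transient (peak) together with the precision of the return (final value) are the two quantities an adaptation objective scores when tuning such networks.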

Abstract

Fitting non-linear models to data is a notoriously difficult problem. The standard algorithm, known as Levenberg-Marquardt (LM), is a gradient search algorithm based on a trust-region approach that interpolates between gradient descent and the Gauss-Newton method. Algorithms (including LM) often get lost in parameter space and take an unreasonable amount of time to converge, especially for models with many parameters. The computational challenge and bottleneck is calculating the derivatives of the model with respect to each parameter to construct the so-called Jacobian matrix. We explore methods for improving the efficiency of LM by approximating the Jacobian using partial-rank updates. We construct an update method that reduces the computational cost of the standard Levenberg-Marquardt routine by a factor of 0.64 on average for a set of test problems.
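The flavor of a partial-rank Jacobian update can be sketched on a hypothetical two-parameter exponential fit: inside a standard LM trust-region loop, a Broyden rank-one secant update replaces most finite-difference Jacobian evaluations, with occasional full recomputations to keep the approximation honest. This is an illustrative sketch under those assumptions, not the specific update scheme developed in the work above.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 30)
y_obs = 1.0 * np.exp(-1.5 * t)          # synthetic data, true theta = (1, 1.5)

def residual(theta):
    return theta[0] * np.exp(-theta[1] * t) - y_obs

def fd_jacobian(theta, eps=1e-7):       # full finite-difference Jacobian
    J = np.empty((t.size, 2))
    r0 = residual(theta)
    for j in range(2):
        tp = theta.copy(); tp[j] += eps
        J[:, j] = (residual(tp) - r0) / eps
    return J

theta = np.array([0.5, 0.5])
lam, accepted, rejected = 1e-3, 0, 0
r = residual(theta)
J = fd_jacobian(theta)
for _ in range(100):
    A = J.T @ J + lam * np.eye(2)       # LM-damped normal equations
    step = np.linalg.solve(A, -J.T @ r)
    r_new = residual(theta + step)
    if r_new @ r_new < r @ r:           # accept: move and cheaply update J
        theta = theta + step
        accepted += 1
        if accepted % 5 == 0:
            J = fd_jacobian(theta)      # periodic full refresh
        else:
            # Broyden rank-1 secant update: make J consistent with the
            # observed residual change along the accepted step.
            J = J + np.outer(r_new - r - J @ step, step) / (step @ step)
        r = r_new
        lam *= 0.7
    else:                               # reject: increase damping
        lam *= 2.0
        rejected += 1
        if rejected == 3:               # repeated rejections: refresh J
            J = fd_jacobian(theta)
            rejected = 0
cost = r @ r
```

Each Broyden update costs one residual evaluation that the loop performs anyway, versus one extra evaluation per parameter for a fresh finite-difference Jacobian, which is where the savings for many-parameter models come from.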

Abstract

We numerically study the time-dependent Ginzburg-Landau equations of superconductivity using a Galerkin method implemented in FEniCS, an automated differential equation solver. We consider geometries for both a bulk material (line from zero to infinity) and a film (half-line), corresponding to mixed and Neumann boundary conditions respectively. We simulate quenching by switching on an external magnetic field, allowing the material to approach a steady state, and then switching on a greater field. Our solutions exhibit the Meissner effect, convergence to the steady state solution, and quenching of superconductors.
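The Meissner effect the solutions exhibit can be checked against the linearized (London) limit of the Ginzburg-Landau equations, where the magnetic field inside the superconductor obeys B'' = B/lambda^2 and decays exponentially from the surface. A minimal finite-difference version on the half-line (an illustration in dimensionless units, not the FEniCS Galerkin implementation used above):

```python
import numpy as np

lam = 1.0              # penetration depth (dimensionless units)
L, N = 10.0, 1001      # truncated domain length and grid points
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

# Discretize B'' = B / lam^2 with B(0) = 1 (applied field) and B(L) = 0,
# using the second-order central difference; unknowns are the interior values.
main = -(2.0 + (h / lam) ** 2) * np.ones(N - 2)
off = np.ones(N - 3)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = np.zeros(N - 2)
rhs[0] = -1.0          # known boundary value B(0) = 1 moved to the RHS
B_inner = np.linalg.solve(A, rhs)
B = np.concatenate([[1.0], B_inner, [0.0]])

# Compare with the analytic London solution B(x) = exp(-x / lam).
err = np.max(np.abs(B - np.exp(-x / lam)))
```

The exponential screening profile recovered here is the steady state the time-dependent simulations relax to after the external field is switched on; the full nonlinear solver additionally captures the suppression of the order parameter near the surface.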