-
GSOC 2017 summary
As GSOC 2017 is coming to an end, I'd like to use this blog post to give a summary of the things I have been working on.
-
Orthogonalization performance
Methods like GMRES and Jacobi-Davidson construct an orthogonal basis for their search subspace. It is well known that classical Gram-Schmidt (CGS) is vulnerable to loss of orthogonality due to rounding errors. Modified Gram-Schmidt (MGS) is the usual fix, yet it is not free of rounding errors either and is memory-bound when it comes to performance. In this post we'll look into iterative or refined orthogonalization methods and their performance.
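As a teaser, here is a minimal Julia sketch of classical Gram-Schmidt with a single reorthogonalization pass (in the spirit of DGKS); the function name and in-place convention are illustrative, not the post's actual implementation:

```julia
using LinearAlgebra

# Orthogonalize w against the orthonormal columns of V with classical
# Gram-Schmidt, then repeat the projection once to repair the orthogonality
# lost to rounding. Both passes are matrix-vector products (BLAS-2),
# unlike MGS's sequence of vector-vector updates.
function orthogonalize!(V, w)
    h = V' * w
    w .-= V * h
    g = V' * w        # second pass removes the remaining components
    w .-= V * g
    h .+= g
    nrm = norm(w)
    w ./= nrm
    return h, nrm
end
```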
-
Jacobi-Davidson example
Let's take a look at how the Jacobi-Davidson algorithm targets interior eigenvalues of a Poisson matrix.
-
Harmonic Ritz values visualized
A hopefully insightful visualization of harmonic Ritz values versus ordinary Ritz values.
-
Interior eigenvalues in Jacobi-Davidson
From the Arnoldi method we know that Ritz values tend to converge to exterior eigenvalues; similar behaviour is observed in Jacobi-Davidson. This is an issue when restarting the method: a Ritz value close to a specified interior target is not necessarily a good approximation of an eigenvalue in that neighbourhood; it may just as well be on its way to an exterior eigenvalue, and in that case we would not want the corresponding Ritz vector to be retained at restart. An alternative is to use harmonic Ritz pairs.
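As a rough sketch of the idea in Julia (the helper name and the 1-D Poisson test matrix are my own choices, not from the post): harmonic Ritz pairs with respect to a target τ follow from the small generalized eigenproblem W'Wy = (θ - τ)W'Vy with W = (A - τI)V.

```julia
using LinearAlgebra

# Harmonic Ritz pairs of A with respect to target τ, given an orthonormal
# basis V of the search subspace: impose (A - τI)Vy - (θ - τ)Vy ⊥ (A - τI)V.
function harmonic_ritz(A, V, τ)
    W = (A - τ * I) * V
    F = eigen(W' * W, W' * V)     # small generalized eigenproblem
    τ .+ F.values, V * F.vectors
end

# Tiny example: a 1-D Poisson matrix and a random orthonormal search basis.
n = 100
A = SymTridiagonal(fill(2.0, n), fill(-1.0, n - 1))
V = Matrix(qr(randn(n, 10)).Q)
θ, X = harmonic_ritz(A, V, 1.0)
```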
-
Towards the next eigenpair in Jacobi-Davidson
After removing a converged Ritz vector from the search space, we must ensure that new expansions do not bring it back. In practice this means updating the correction equation so that these directions are deflated.
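In code, the projected operator looks roughly like the sketch below (Julia; Q is assumed to hold an orthonormal basis of the converged directions and θ the current Ritz value, and the names are illustrative):

```julia
using LinearAlgebra

# Deflated correction operator: t ↦ (I - QQ')(A - θI)(I - QQ')t,
# so that solutions of the correction equation stay orthogonal to the
# already converged vectors stored in the columns of Q.
function deflated_apply(A, Q, θ, t)
    t = t - Q * (Q' * t)    # project out converged directions
    t = A * t - θ * t       # apply the shifted operator
    t - Q * (Q' * t)        # project again
end
```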
-
Non-Hermitian eigenproblems in Jacobi-Davidson
Non-Hermitian matrices do not necessarily have orthogonal eigenvectors for distinct eigenvalues. So far we have relied on this property when shrinking the search subspace. Fortunately, we can work with the Schur decomposition to tackle this problem.
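The point is that a Schur decomposition still provides an orthonormal basis even when the eigenvectors are not orthogonal; a quick Julia illustration (the random test matrix is just for demonstration):

```julia
using LinearAlgebra

# For any square A there is a Schur decomposition A = Z * T * Z' with Z
# orthonormal and T (quasi-)upper triangular, so the Schur vectors Z can
# be used for deflation even when the eigenvectors of A are far from
# orthogonal.
A = randn(50, 50)            # generic non-Hermitian matrix
F = schur(A)
@assert F.Z' * F.Z ≈ I       # Schur vectors are orthonormal
@assert A * F.Z ≈ F.Z * F.T  # A Z = Z T
```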
-
Shrinking the search subspace in Jacobi-Davidson
Once an approximate eigenvector has converged, we must remove it from the search subspace so that other eigenpairs can be found. Similarly, when the search subspace grows too large, a restart is necessary, which again requires removing vectors from it. Fortunately, shrinking the search subspace is easier here than in IRAM.
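A minimal sketch of such a restart in Julia (the names and the selection criterion are assumptions on my part): keep only the Ritz vectors closest to the target and rebuild a smaller orthonormal basis.

```julia
using LinearAlgebra

# Thick restart: given an orthonormal basis V and the projected matrix
# H = V'AV, keep the k Ritz vectors whose Ritz values lie closest to the
# target τ and re-orthonormalize them into a new, smaller search basis.
function restart(V, H, τ, k)
    F = eigen(H)
    keep = sortperm(abs.(F.values .- τ))[1:k]
    Matrix(qr(V * F.vectors[:, keep]).Q)
end
```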
-
Options in Jacobi-Davidson
In my previous post I showed the defining property of Jacobi-Davidson, but it turns out that there is a lot of freedom in the remaining parts of the algorithm, for instance in how it is initialized; this post is about exactly that.
-
Jacobi-Davidson
As part of Google Summer of Code I'm implementing several iterative methods for linear systems of equations and eigenvalue problems in Julia. So far I have studied Jacobi-Davidson and have a poor man's version working. This first post contains my notes on the method.
-
Compile-time primes
Anyone who has seen C++ template meta-programming must have come across the compile-time prime number generator by Erwin Unruh. Since then the language has evolved and now provides a very readable alternative. In this post I'll touch on both the classical and the modern approach.
-
Hello world
Every now and then I'll post some findings in programming and math, ranging from solutions to Project Euler exercises to numerical methods in partial differential equations.