Initial search space
We could initialize the search space with a single random vector; extract an approximate eigenvector from it; solve the correction equation approximately for a correction vector; expand the search space with that correction; and repeat the process.
Clearly this makes little sense at the start. The correction equation was motivated by the assumption that the search space already contains a reasonable approximation to an eigenvector, and this is not the case at the very first iteration: there, the first approximate eigenvector is just the initial random vector itself.
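To make the loop concrete, here is a minimal sketch in Python, assuming a symmetric matrix `A` (dense NumPy array or SciPy sparse matrix), targeting its largest eigenpair, and using a handful of GMRES steps as the approximate correction solver. The names (`jacobi_davidson`, `inner_iters`, ...) are illustrative, not any particular implementation, and the sketch omits restarts and deflation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jacobi_davidson(A, num_iters=30, inner_iters=5, tol=1e-10, seed=0):
    """Bare-bones Jacobi-Davidson loop for the largest eigenpair of a symmetric A."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)

    # Initialize the search space with a single (normalized) random vector.
    V = rng.standard_normal((n, 1))
    V /= np.linalg.norm(V)

    for _ in range(num_iters):
        # Rayleigh-Ritz extraction: approximate eigenpair (theta, u) from span(V).
        H = V.T @ (A @ V)
        thetas, S = np.linalg.eigh(H)
        theta, u = thetas[-1], V @ S[:, -1]

        r = A @ u - theta * u                  # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            break

        # Correction equation: (I - uu*)(A - theta I)(I - uu*) t = -r with t
        # orthogonal to u, solved only approximately with a few GMRES steps.
        def op(x):
            x = np.ravel(x)
            x = x - u * (u @ x)                # project out u
            y = A @ x - theta * x              # apply A - theta I
            return y - u * (u @ y)             # project out u again
        P = LinearOperator((n, n), matvec=op)
        t, _ = gmres(P, -r, restart=inner_iters, maxiter=1)

        # Expand the search space with the orthonormalized correction.
        t -= V @ (V.T @ t)
        nrm = np.linalg.norm(t)
        if nrm == 0.0:
            break
        V = np.hstack([V, (t / nrm)[:, None]])

    return theta, u
```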
Now we have several options:
1. As long as the residual of a Ritz pair is large, we could replace the Ritz value in the correction equation with our target value.
2. In the first few iterations we could simply add the residual to the search subspace.
3. Option 2 is actually equivalent to “solving” the correction equation with a single iteration of a Krylov solver, so a generalized approach would be to incrementally increase the accuracy of the Krylov solver at each Jacobi-Davidson iteration. Both this and option 1 are illustrated in the sketch after this list.
4. We could also initialize the search space as a Krylov subspace, so that Jacobi-Davidson starts out as the Arnoldi method.
5. In some applications we already have a good guess for the eigenvectors and could initialize the search subspace with them. For instance, in bifurcation analysis one typically computes eigenvalues of a Jacobian matrix multiple times near a bifurcation point. This Jacobian varies only slightly from one solve to the next, so the eigenpairs can be expected to vary only slightly as well. Hence the eigenvectors from the first solve can be reused to obtain a very good initial search subspace for the second solve.
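To make options 1–3 concrete, the shift and the amount of inner work could be chosen per outer iteration roughly as below. This is only a sketch; `correction_shift`, `inner_iterations`, `switch_tol` and `max_inner` are made-up names and knobs, not taken from any particular implementation.

```python
def correction_shift(theta, tau, residual_norm, switch_tol=1e-1):
    """Option 1: while the Ritz pair is still poor (large residual), use the
    target tau as the shift in the correction equation instead of theta."""
    return tau if residual_norm > switch_tol else theta

def inner_iterations(outer_iter, max_inner=10):
    """Options 2 and 3: start with a single Krylov step (which amounts to adding
    the residual to the search space) and let the inner solver work harder as
    the outer iterations progress."""
    return min(outer_iter + 1, max_inner)
```

In the loop sketched earlier, `theta` in the projected operator would then be replaced by `correction_shift(theta, tau, np.linalg.norm(r))`, and the fixed `inner_iters` by `inner_iterations(k)` for outer iteration `k`.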
It seems sensible to use option 3 as the default when there is no prior information about the eigenpairs. Options 4 and 5 can be implemented by letting the user provide an initial search subspace, as sketched below.
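For options 4 and 5, one way to seed the search space is to accept prior eigenvector guesses when they exist and fall back to a small Krylov subspace of a random vector otherwise. Again a sketch in the same Python setting as above; `initial_subspace`, `prior_vectors` and `krylov_dim` are illustrative names.

```python
import numpy as np

def initial_subspace(A, prior_vectors=None, krylov_dim=5, seed=0):
    """Return an orthonormal initial search space V.

    Option 5: if prior eigenvector guesses exist (e.g. from a previous solve
    with a slightly perturbed Jacobian), orthonormalize and use them.
    Option 4: otherwise build a small Krylov subspace of a random vector,
    so that the first iterations behave like the Arnoldi method.
    """
    n = A.shape[0]
    if prior_vectors is not None:
        Q, _ = np.linalg.qr(prior_vectors)
        return Q

    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    basis = [v / np.linalg.norm(v)]
    for _ in range(krylov_dim - 1):
        w = A @ basis[-1]
        for q in basis:                    # Gram-Schmidt against current basis
            w = w - q * (q @ w)
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:                    # Krylov subspace is exhausted
            break
        basis.append(w / nrm)
    return np.column_stack(basis)
```

The returned `V` would simply replace the single random start vector in the loop sketched at the beginning of this section.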