Moving the poles of a rational Krylov space
Mario Berljafa and Stefan Güttel, June 2015
Rational Krylov spaces
A rational Krylov space is a linear vector space of rational functions in a matrix times a vector. Let $A$ be a square matrix of size $N\times N$, $b$ an $N\times 1$ nonzero starting vector, and let $\xi_1, \xi_2, \ldots, \xi_m$ be a sequence of complex or infinite poles all distinct from the eigenvalues of $A$. Then the rational Krylov space of order $m+1$ associated with $A$, $b$, and the poles $\xi_j$ is defined as

$$\mathcal{Q}_{m+1}(A, b, q_m) = q_m(A)^{-1} \operatorname{span}\{b, Ab, \ldots, A^m b\},$$

where $q_m(z) = \prod_{j=1,\, \xi_j \neq \infty}^{m} (z - \xi_j)$ is the common denominator of the rational functions associated with the rational Krylov space. The rational Krylov sequence method by Ruhe [5] computes an orthonormal basis $V_{m+1}$ of $\mathcal{Q}_{m+1}(A, b, q_m)$. The first column of $V_{m+1}$ can be chosen as $b/\|b\|_2$. The basis matrix $V_{m+1}$ satisfies a rational Arnoldi decomposition of the form

$$A V_{m+1} K_m = V_{m+1} H_m,$$

where $(H_m, K_m)$ is an (unreduced) upper Hessenberg pencil of size $(m+1)\times m$.
The poles of a rational Krylov space
Given a rational Arnoldi decomposition of the above form, it can be shown [1] that the poles $\xi_j$ of the associated rational Krylov space are the generalized eigenvalues of the lower $m\times m$ subpencil of $(H_m, K_m)$. Let us verify this on a simple example by first constructing a rational Krylov space associated with the poles $-1, \infty, -\mathrm{i}, 0, \mathrm{i}$. The matrix $A$ is of size $N = 100$ and chosen as the tridiag matrix from MATLAB's gallery, and $b$ is the first canonical unit vector. The rat_krylov command is used to compute the quantities in the rational Arnoldi decomposition:
N = 100;
A = gallery('tridiag', N);
b = eye(N, 1);
m = 5;
xi = [-1, inf, -1i, 0, 1i];
[V, K, H] = rat_krylov(A, b, xi);
Indeed, the rational Arnoldi decomposition is satisfied with a residual norm close to machine precision:
format shorte
disp(norm(A*V*K - V*H) / norm(H))
And the chosen poles are the eigenvalues of the lower subpencil:
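The command producing the output below is missing from this rendering; the generalized eigenvalues of the lower $m\times m$ subpencil can be displayed with a call like the following (using the same subpencil indexing as in the morphing loops further below):

```matlab
% Generalized eigenvalues of the lower m-by-m subpencil of (H, K)
disp(eig(H(2:m+1, 1:m), K(2:m+1, 1:m)))
```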
  -1.0000e+00 + 0.0000e+00i
          Inf + 0.0000e+00i
   0.0000e+00 - 1.0000e+00i
   0.0000e+00 + 0.0000e+00i
   0.0000e+00 + 1.0000e+00i
Moving the poles explicitly
There is a direct link between the starting vector $b$ and the poles $\xi_j$ of a rational Krylov space $\mathcal{Q}_{m+1}$. A change of the poles $\xi_j$ to new poles $\breve\xi_j$ can be interpreted as a change of the starting vector from $b$ to some vector $\breve b$, and vice versa. Algorithms for moving the poles of a rational Krylov space are described in [1] and implemented in the functions move_poles_expl and move_poles_impl.
For example, let us move the poles of the above rational Krylov space to the points $-1, -2, -3, -4, -5$:
xi_new = -1:-1:-5;
[KT, HT, QT, ZT] = move_poles_expl(K, H, xi_new);
The outputs of move_poles_expl are unitary matrices $Q$ and $Z$, and transformed upper Hessenberg matrices $\widehat K_m = Q K_m Z$ and $\widehat H_m = Q H_m Z$, so that the lower $m\times m$ part of the pencil $(\widehat H_m, \widehat K_m)$ has as generalized eigenvalues the new poles $\breve\xi_j$:
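The display command for this step is not shown in this rendering; the relocated poles can be verified with a call such as:

```matlab
% Poles after relocation: generalized eigenvalues of the lower subpencil
disp(eig(HT(2:m+1, 1:m), KT(2:m+1, 1:m)))
```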
  -1.0000e+00 + 0.0000e+00i
  -2.0000e+00 + 1.4085e-15i
  -3.0000e+00 - 7.2486e-16i
  -4.0000e+00 + 1.6407e-16i
  -5.0000e+00 - 2.4095e-16i
Defining $\widehat V_{m+1} = V_{m+1} Q^*$, the transformed rational Arnoldi decomposition is

$$A \widehat V_{m+1} \widehat K_m = \widehat V_{m+1} \widehat H_m.$$
This can be verified numerically by looking at the residual norm:
VT = V*QT';
disp(norm(A*VT*KT - VT*HT) / norm(HT))
It should be noted that the function move_poles_expl can be used to move the poles to arbitrary locations, including to infinity, and even to the eigenvalues of $A$. In the latter case, the transformed space does not correspond to a rational Krylov space generated with starting vector $\breve b$ and poles $\breve\xi_j$, but must be interpreted as a filtered rational Krylov space. Indeed, the pole relocation problem is very similar to that of applying an implicit filter to the rational Krylov space [3, 4]. See also [1] for more details.
Moving the poles implicitly
Assume we are given a nonzero vector $\breve b = V_{m+1} c$ with coefficient representation $c$, where $c$ is a vector with $m+1$ entries. The function move_poles_impl can be used to obtain a transformed rational Arnoldi decomposition with starting vector $\breve b$.
As an example, let us take $c = [0, \ldots, 0, 1]^T$ and hence transform the rational Arnoldi decomposition so that $\breve b = V_{m+1} c = v_{m+1}$, the last basis vector in $V_{m+1}$:
c = zeros(m+1, 1);
c(m+1) = 1;
[KT, HT, QT, ZT] = move_poles_impl(K, H, c);
VT = V*QT';
The poles of the rational Krylov space with the modified starting vector $v_{m+1}$ can again be read off as the generalized eigenvalues of the lower $m\times m$ part of the pencil $(\widehat H_m, \widehat K_m)$:
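As before, the command that displays these poles does not appear in this rendering; it amounts to:

```matlab
% Poles associated with the modified starting vector
disp(eig(HT(2:m+1, 1:m), KT(2:m+1, 1:m)))
```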
   3.2914e+00 - 5.5756e-02i
   1.8705e+00 - 1.2100e-01i
   7.7852e-01 - 9.2093e-02i
   1.9752e-01 - 3.0824e-02i
   4.4392e-03 - 3.5884e-04i
This implicit pole relocation procedure is a key element of the RKFIT algorithm described in [1, 2].
Some fun with moving poles
To conclude this example, let us consider a random $10\times 10$ matrix $A$, a random vector $b$, and the corresponding rational Krylov space $\mathcal{Q}_6$ with poles at $-2, -1, 0, 1, 2$:
A = (randn(10) + 1i*randn(10))*.5;
b = randn(10,1) + 1i*randn(10,1);
m = 5;
xi = -2:2;
[V, K, H] = rat_krylov(A, b, xi);
Here are the eigenvalues of $A$:
figure
plot(eig(A), 'ko', 'MarkerFaceColor', 'y')
axis([-2.5, 2.5, -2.5, 2.5]), grid on, hold on
We now consider a $t$-dependent coefficient vector $c = c(t)$ such that the starting vector $\breve b = V_{m+1} c$ is continuously "morphed" from $v_1$ to $v_2$ as $t$ varies from $1$ to $2$. The poles of the rational Krylov space with the transformed starting vector $\breve b$ are then plotted as a function of $t$.
for t = linspace(1, 2, 51)
  c = zeros(m+1, 1);
  c(floor(t))   = cos(pi*(t - floor(t))/2);
  c(floor(t)+1) = sin(pi*(t - floor(t))/2);
  [KT, HT, QT] = move_poles_impl(K, H, c);            % transformed pencil
  xi_new = sort(eig(HT(2:m+1,1:m), KT(2:m+1,1:m)));   % new poles
  plot(real(xi_new), imag(xi_new), 'b+')
end
As one can see, only one of the five poles starts moving away from its initial position, with the remaining four poles staying where they are. This is because "morphing" the starting vector from $v_1$ to $v_2$ only affects a two-dimensional subspace of $\mathcal{Q}_6$ which includes the vector $b$ and is itself a rational Krylov space, and this space is parameterized by one pole only.
As we now continue morphing the starting vector from $v_2$ to $v_3$, another pole starts moving:
for t = linspace(2, 3, 51)
  c = zeros(m+1, 1);
  c(floor(t))   = cos(pi*(t - floor(t))/2);
  c(floor(t)+1) = sin(pi*(t - floor(t))/2);
  [KT, HT, QT, ZT] = move_poles_impl(K, H, c);
  xi_new = sort(eig(HT(2:m+1,1:m), KT(2:m+1,1:m)));
  plot(xi_new, 'r+')
end
Morphing from $v_3$ to $v_4$, then to $v_5$, and finally to $v_6$ will eventually affect all five poles of the rational Krylov space:
for t = linspace(3, 5.99, 150)
  c = zeros(m+1, 1);
  c(floor(t))   = cos(pi*(t - floor(t))/2);
  c(floor(t)+1) = sin(pi*(t - floor(t))/2);
  [KT, HT, QT, ZT] = move_poles_impl(K, H, c);
  xi_new = sort(eig(HT(2:m+1, 1:m), KT(2:m+1, 1:m)));
  switch floor(t)
    case 3, plot(xi_new', 'g+')
    case 4, plot(xi_new', 'm+')
    case 5, plot(xi_new', 'c+')
  end
end
[1] M. Berljafa and S. Güttel. Generalized rational Krylov decompositions with an application to rational approximation, SIAM J. Matrix Anal. Appl., 36(2):894--916, 2015.
[2] M. Berljafa and S. Güttel. The RKFIT algorithm for nonlinear rational approximation, SIAM J. Sci. Comput., 39(5):A2049--A2071, 2017.
[3] G. De Samblanx and A. Bultheel. Using implicitly filtered RKS for generalised eigenvalue problems, J. Comput. Appl. Math., 107(2):195--218, 1999.
[4] G. De Samblanx, K. Meerbergen, and A. Bultheel. The implicit application of a rational filter in the RKS method, BIT, 37(4):925--947, 1997.
[5] A. Ruhe. Rational Krylov: A practical algorithm for large sparse nonsymmetric matrix pencils, SIAM J. Sci. Comput., 19(5):1535--1551, 1998.