To compute the SVD of $A$, an orthogonal matrix $V$ is generated such that the transformed matrix $AV = W$ has orthogonal columns. Scaling each column of $W$ so that its 2-norm is unity, we get:
$$A = \tilde{U}\,\tilde{\Sigma}\,V^T \qquad\qquad (2)$$

where $\tilde{\Sigma} = \mathrm{diag}(\|w_1\|_2, \ldots, \|w_n\|_2)$ and the columns of $\tilde{U}$ are the correspondingly scaled columns of $W$.
Comparing equation (2) with equation (1), we notice the different sizes of $U$ and $\tilde{U}$, as well as $\Sigma$ and $\tilde{\Sigma}$. However, since the null columns of $U$ and $\tilde{U}$ are associated with zero diagonal elements of $\Sigma$ and $\tilde{\Sigma}$, respectively, the non-null columns of $U$ and $\tilde{U}$ form orthogonal bases for $\operatorname{range}(A)$, and there is no essential difference between equations (1) and (2). Successive Jacobi rotations are applied to $A$ in order to obtain $W$, giving the following iteration:
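To make the equivalence concrete, here is a minimal NumPy check (my own illustration, not from the original text; all names are assumptions): with $V$ taken from a reference SVD, $W = AV$ has mutually orthogonal columns, and scaling each column to unit 2-norm yields the factors of equation (2).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 4
A = rng.standard_normal((m, n))

# Take V from a reference SVD so that W = A V has orthogonal columns.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T
W = A @ V

sigma = np.linalg.norm(W, axis=0)   # diagonal of Sigma~: column 2-norms of W
U_t = W / sigma                     # columns of U~: unit-norm columns of W

assert np.allclose(W.T @ W, np.diag(sigma**2))     # columns of W orthogonal
assert np.allclose(A, U_t @ np.diag(sigma) @ V.T)  # equation (2) holds
```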
$$A_{k+1} = A_k J_k$$

where $J_k$ is a plane (Jacobi) rotation with $c = \cos\theta_k$ and $s = \sin\theta_k$; right-multiplying by $J_k$ affects only columns $p$ and $q$ of $A_k$:

$$a_p^{(k+1)} = c\,a_p^{(k)} - s\,a_q^{(k)}$$
$$a_q^{(k+1)} = s\,a_p^{(k)} + c\,a_q^{(k)}$$
The angle is chosen so that the rotated columns are orthogonal:

$$\left(a_p^{(k+1)}\right)^{T} a_q^{(k+1)} = c\,s\left(a_p^{(k)}\right)^{T} a_p^{(k)} + c^2 \left(a_p^{(k)}\right)^{T} a_q^{(k)} - s^2 \left(a_q^{(k)}\right)^{T} a_p^{(k)} - c\,s\left(a_q^{(k)}\right)^{T} a_q^{(k)} = 0$$
Denoting $\alpha = \big(a_p^{(k)}\big)^T a_p^{(k)}$, $\beta = \big(a_q^{(k)}\big)^T a_q^{(k)}$, and $\gamma = \big(a_p^{(k)}\big)^T a_q^{(k)}$, this condition becomes $cs(\alpha - \beta) + (c^2 - s^2)\gamma = 0$. For $\gamma \neq 0$ it is satisfied by the standard choice $\zeta = (\beta - \alpha)/(2\gamma)$, $t = \operatorname{sign}(\zeta)/\big(|\zeta| + \sqrt{1 + \zeta^2}\big)$, $c = 1/\sqrt{1 + t^2}$, $s = ct$; when $\gamma = 0$ the columns are already orthogonal and no rotation is needed.
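As an illustration of this step, the following sketch computes $c$ and $s$ from $\alpha$, $\beta$, $\gamma$ as above and applies the rotation to a pair of columns; the function name and structure are my own, not from the original.

```python
import numpy as np

def orthogonalize_pair(ap, aq):
    """Apply one Jacobi rotation so the returned column pair is
    orthogonal (illustrative name, not from the text)."""
    alpha = ap @ ap            # alpha = a_p^T a_p
    beta = aq @ aq             # beta  = a_q^T a_q
    gamma = ap @ aq            # gamma = a_p^T a_q
    if gamma == 0.0:           # already orthogonal: no rotation needed
        return ap, aq
    zeta = (beta - alpha) / (2.0 * gamma)
    # copysign keeps the smaller-angle root even when zeta == 0
    t = np.copysign(1.0, zeta) / (abs(zeta) + np.hypot(1.0, zeta))
    c = 1.0 / np.sqrt(1.0 + t * t)
    s = c * t
    return c * ap - s * aq, s * ap + c * aq

rng = np.random.default_rng(1)
ap, aq = rng.standard_normal(5), rng.standard_normal(5)
ap2, aq2 = orthogonalize_pair(ap, aq)
assert abs(ap2 @ aq2) < 1e-12   # rotated columns are orthogonal
```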
At every iteration we choose a different column pair $(p, q)$ and obtain a matrix $J_k$ that orthogonalizes those columns. While computing $W$, we can also accumulate $V$, starting from the identity matrix and performing the following update at each iteration:
$$V_{k+1} = V_k J_k$$
There are several ways of choosing the pairs of columns $(p, q)$ that are orthogonalized at every step. The approach used here is one that is suitable for parallel implementation on the CM.
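To tie the pieces together, here is a serial Python sketch of the overall iteration (my own illustration, not the CM implementation): it sweeps over column pairs in a simple cyclic order, orthogonalizes each pair as derived above, and accumulates the rotations in $V$. A parallel ordering such as the CM's would instead rotate disjoint column pairs simultaneously; it is not reproduced here. The function name, sweep limit, and tolerance are all assumptions.

```python
import numpy as np

def one_sided_jacobi_svd(A, sweeps=30, tol=1e-12):
    """Serial sketch of one-sided Jacobi SVD with a cyclic pair
    ordering (illustrative; the CM uses a parallel ordering)."""
    W = A.astype(float).copy()
    n = W.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = W[:, p] @ W[:, p]
                beta = W[:, q] @ W[:, q]
                gamma = W[:, p] @ W[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue                   # pair already orthogonal
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                t = np.copysign(1.0, zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                # A_{k+1} = A_k J_k and V_{k+1} = V_k J_k act identically
                # on columns p and q:
                J = np.array([[c, s], [-s, c]])
                W[:, [p, q]] = W[:, [p, q]] @ J
                V[:, [p, q]] = V[:, [p, q]] @ J
        if converged:
            break
    sigma = np.linalg.norm(W, axis=0)
    U = W / np.where(sigma > 0, sigma, 1.0)   # avoid dividing null columns
    return U, sigma, V

A = np.random.default_rng(2).standard_normal((6, 4))
U, sigma, V = one_sided_jacobi_svd(A)
assert np.allclose(A, U @ np.diag(sigma) @ V.T)
```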