decomposition

logistic_guy

Here is the question:

Find a singular value decomposition of \displaystyle A = \begin{bmatrix}1 & 1 \\ 0 & 0 \\ 1 & 1 \end{bmatrix}.


my attemb
After finding the eigenvalues, what should I do?🙁
 
A represents a linear transformation \mathbb{R}^2 \stackrel{A}{\longrightarrow} \mathbb{R}^3.
An eigenvalue of A is a number \lambda such that A\cdot v = \lambda\cdot v for some vector v \neq 0.

How can v be two-dimensional and three-dimensional at the same time?
 
A represents a linear transformation \mathbb{R}^2 \stackrel{A}{\longrightarrow} \mathbb{R}^3.
An eigenvalue of A is a number \lambda such that A\cdot v = \lambda\cdot v for some vector v \neq 0.
This I understand.

How can v be two-dimensional and three-dimensional at the same time?
This I don't know.🙁

I'm not talking about the eigenvalues of \displaystyle A; I mean the eigenvalues of \displaystyle A^{T}A.

The PDF file says

\displaystyle A = U\Sigma V^{T}

\displaystyle U is an \displaystyle m\times m matrix
\displaystyle V is an \displaystyle n\times n matrix
\displaystyle \Sigma is an \displaystyle m\times n matrix

Example 3.3 in the PDF file uses a \displaystyle 2\times 3 matrix \displaystyle A, but my \displaystyle A is \displaystyle 3\times 2, so I'm confused about how to follow it.🥺

I did my analysis before writing the question on this website, and I found the eigenvalues of \displaystyle A^{T}A:
\displaystyle \lambda_1 = 4
\displaystyle \lambda_2 = 0

What's the next step?😣
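
(Side note, added as a sketch and not part of the original posts: a quick numerical cross-check of this step in Python, assuming numpy is available.)

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0],
              [1.0, 1.0]])

AtA = A.T @ A                          # the 2x2 matrix A^T A = [[2, 2], [2, 2]]
eigenvalues = np.linalg.eigvalsh(AtA)  # eigvalsh: eigenvalues of a symmetric matrix, ascending
print(AtA)
print(eigenvalues)                     # approximately [0., 4.], i.e. lambda_2 = 0, lambda_1 = 4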
 
I'm not talking about the eigenvalues of \displaystyle A; I mean the eigenvalues of \displaystyle A^{T}A.
Yes, I know, but you didn't say so.

The singular values are thus the non-negative square roots of the eigenvalues: \sigma_1=2\,,\,\sigma_2=0, and r=1 is the number of non-zero values.

We proceed by theorem 3.2 (in chapter 3: how to find the SVD). We first need the (two-dimensional) eigenvectors v_1, v_2 for the eigenvalues \lambda_1=4 and \lambda_2=0 of A^TA. Note that they should be orthonormal.

What are they?

If you found them, we proceed by theorem 3.2 and definition 2.1.
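
(Another added sketch, assuming numpy: the same step done numerically — singular values as square roots of the eigenvalues of A^T A, and orthonormal eigenvectors from eigh.)

import numpy as np

A = np.array([[1.0, 1.0], [0.0, 0.0], [1.0, 1.0]])
AtA = A.T @ A

lam, vecs = np.linalg.eigh(AtA)        # eigh: ascending eigenvalues, orthonormal eigenvectors as columns
lam, vecs = lam[::-1], vecs[:, ::-1]   # reorder so the largest eigenvalue comes first

sigma = np.sqrt(np.clip(lam, 0.0, None))   # singular values: [2., 0.]
r = int(np.count_nonzero(sigma > 1e-12))   # r = 1 non-zero singular value
print(sigma, r)
print(vecs)                            # columns ~ (1,1)/sqrt(2) and (1,-1)/sqrt(2), up to sign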
 
Thanks.

\displaystyle \begin{bmatrix}2 - \lambda_1 & 2 \\ 2 & 2-\lambda_1 \end{bmatrix} \bold{v_1} = \begin{bmatrix}2 - \lambda_1 & 2 \\ 2 & 2-\lambda_1 \end{bmatrix} \begin{bmatrix}a \\ b \end{bmatrix} = \begin{bmatrix}2 - 4 & 2 \\ 2 & 2-4 \end{bmatrix} \begin{bmatrix}a \\ b \end{bmatrix} = \begin{bmatrix}-2 & 2 \\ 2 & -2 \end{bmatrix} \begin{bmatrix}a \\ b \end{bmatrix} = \begin{bmatrix}0 \\ 0 \end{bmatrix}

If I solve this system I get \displaystyle \bold{v_1} = \begin{bmatrix}1 \\ 1 \end{bmatrix}

\displaystyle \begin{bmatrix}2 - \lambda_2 & 2 \\ 2 & 2-\lambda_2 \end{bmatrix} \bold{v_2} = \begin{bmatrix}2 - \lambda_2 & 2 \\ 2 & 2-\lambda_2 \end{bmatrix} \begin{bmatrix}c \\ d \end{bmatrix} = \begin{bmatrix}2 - 0 & 2 \\ 2 & 2-0 \end{bmatrix} \begin{bmatrix}c \\ d \end{bmatrix} = \begin{bmatrix}2 & 2 \\ 2 & 2 \end{bmatrix} \begin{bmatrix}c \\ d \end{bmatrix} = \begin{bmatrix}0 \\ 0 \end{bmatrix}

If I solve this system I get \displaystyle \bold{v_2} = \begin{bmatrix}1 \\ -1 \end{bmatrix}

Note that they should be orthonormal.
You mean I divide each one by its magnitude?

\displaystyle |\bold{v_1}| = \sqrt{1^2 + 1^2} = \sqrt{1 + 1} = \sqrt{2}
\displaystyle |\bold{v_2}| = \sqrt{1^2 + (-1)^2} = \sqrt{1 + 1} = \sqrt{2}

The orthonormal vectors:

\displaystyle \bold{v_1} = \begin{bmatrix}\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}
\displaystyle \bold{v_2} = \begin{bmatrix}\frac{1}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}} \end{bmatrix}

Is this correct?😓
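
(A small added check of this normalization, assuming numpy; it only verifies the hand calculation above.)

import numpy as np

AtA = np.array([[2.0, 2.0], [2.0, 2.0]])
v1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
v2 = np.array([1.0, -1.0]) / np.sqrt(2.0)

print(np.allclose(AtA @ v1, 4.0 * v1))                   # True: eigenvector for lambda_1 = 4
print(np.allclose(AtA @ v2, 0.0 * v2))                   # True: eigenvector for lambda_2 = 0
print(np.linalg.norm(v1), np.linalg.norm(v2), v1 @ v2)   # 1.0, 1.0, 0.0: orthonormal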
 
I am once more confused by the fact that you re-norm the vectors but keep the notation. If I now write \mathbf{v_1}, how will you know whether I mean \mathbf{v_1}=(1,1)^T or \mathbf{v_1}=\left(1/\sqrt{2},1/\sqrt{2}\right)^T? This is not how math, or written communication, works!

Let's see.

(A^TA)\mathbf{v_1}= \begin{pmatrix}2&2\\2&2\end{pmatrix} \begin{pmatrix}1/\sqrt{2}\\1/\sqrt{2}\end{pmatrix}=\begin{pmatrix}2/\sqrt{2}+2/\sqrt{2}\\2/\sqrt{2}+2/\sqrt{2}\end{pmatrix}=\begin{pmatrix}4/\sqrt{2}\\4/\sqrt{2}\end{pmatrix}=4\cdot \begin{pmatrix}1/\sqrt{2}\\1/\sqrt{2}\end{pmatrix}
and the vector norm is 1.
The equation (A^TA)\mathbf{v_2}=\begin{pmatrix}0\\0\end{pmatrix}=0\cdot \mathbf{v_2} is also correct, and \mathbf{v_2} has vector norm 1, too. Both vectors are also orthogonal since
\begin{pmatrix}1/\sqrt{2}\, , \,1/\sqrt{2}\end{pmatrix}\cdot \begin{pmatrix}1/\sqrt{2}\\-1/\sqrt{2}\end{pmatrix}=0.
Yes, you are correct. I first thought you had made a scaling mistake - that's why I checked it here - but your scaling is ok. Now you can use \sigma_1=2 and definition 2.1 to build \Sigma, and theorem 3.2 to build V and U. Note that r=1 and you have some arbitrariness in determining the remaining columns of U. However, U must be an orthogonal matrix, i.e. we need U^T\cdot U=I.
 
I am once more confused by the fact that you re-norm the vectors but keep the notation. If I now write \mathbf{v_1}, how will you know whether I mean \mathbf{v_1}=(1,1)^T or \mathbf{v_1}=\left(1/\sqrt{2},1/\sqrt{2}\right)^T? This is not how math, or written communication, works!
You're right, it's better to use different vectors, like \displaystyle \bold{s_1} = \begin{bmatrix}1 \\ 1 \end{bmatrix} and \displaystyle \bold{s_2} = \begin{bmatrix}1 \\ -1 \end{bmatrix}.

Yes, you are correct. I first thought you had made a scaling mistake - that's why I checked it here - but your scaling is ok. Now you can use \sigma_1=2 and definition 2.1 to build \Sigma, and theorem 3.2 to build V and U. Note that r=1 and you have some arbitrariness in determining the remaining columns of U. However, U must be an orthogonal matrix, i.e. we need U^T\cdot U=I.
The PDF file you sent says \displaystyle V is an \displaystyle n\times n matrix, but it doesn't say how many components it holds.
Is it correct to assume only two?
If \displaystyle \bold{v_1} is the first component and \displaystyle \bold{v_2} is the second component,
it means we have already found \displaystyle V:

\displaystyle V = \begin{bmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix}

Am I wrong?☹️
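
(Quick added aside, assuming numpy: a two-line check that this V has orthonormal columns.)

import numpy as np

V = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
print(np.allclose(V.T @ V, np.eye(2)))   # True: V^T V = I, so V is orthogonal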
 

Looks ok. Let me think out loud while checking how the matrices are built. We start with A, which is a matrix with 3 rows and 2 columns, so m=3 and n=2 according to the paper, which says that A is an m\times n matrix. This is the notation I am used to: first the rows and then the columns. Let's see how far we get with this convention.

\Sigma is the easiest one to start with. It is of the same shape as A, has \sigma_1=2 as its first diagonal entry and zeros elsewhere, i.e.
\Sigma=\begin{pmatrix}2&0\\0&0\\0&0\end{pmatrix}.
Then theorem 3.2 says that V consists of the two orthonormal eigenvectors of A^TA, which is what you have, too. Let me write it as
V= \dfrac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix}
instead. The theorem further says that the first column of U is
\mathbf{u_1}:=\sigma_1^{-1}A \mathbf{v_1}=\dfrac{1}{2\sqrt{2}}\begin{pmatrix}1&1\\0&0\\1&1\end{pmatrix}\cdot \begin{pmatrix}1\\1\end{pmatrix}=\dfrac{1}{2\sqrt{2}}\begin{pmatrix}2\\0\\2\end{pmatrix}=\dfrac{1}{\sqrt{2}}\begin{pmatrix}1\\0\\1\end{pmatrix}=\begin{pmatrix}1/\sqrt{2}\\0\\1/\sqrt{2}\end{pmatrix}.
Now we need to find vectors \mathbf{u_2}\,,\,\mathbf{u_3} with 3 components that are orthonormal to \mathbf{u_1} and to each other.

Am I wrong?☹️

No. You are right. Now find the rest of U and finally check U\Sigma V^T\stackrel{?}{=}A.
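
(Added sketch, assuming numpy: the same construction of Sigma, V and the first column of U, mirroring the hand calculation above.)

import numpy as np

A = np.array([[1.0, 1.0], [0.0, 0.0], [1.0, 1.0]])
sigma1 = 2.0

Sigma = np.zeros((3, 2))               # same shape as A
Sigma[0, 0] = sigma1                   # sigma_1 in the top-left corner, zeros elsewhere

V = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
u1 = (A @ V[:, 0]) / sigma1            # first column of U
print(Sigma)
print(u1)                              # [0.707..., 0., 0.707...] = (1/sqrt(2), 0, 1/sqrt(2))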
 
Looks ok. Let me think out loud while checking how the matrices are built. We start with A, which is a matrix with 3 rows and 2 columns, so m=3 and n=2 according to the paper, which says that A is an m\times n matrix. This is the notation I am used to: first the rows and then the columns. Let's see how far we get with this convention.
I use this notation too.

No. You are right. Now find the rest of U and finally check U\Sigma V^T\stackrel{?}{=}A.
Thanks.🥺

Now we need to find vectors \mathbf{u_2}\,,\,\mathbf{u_3} with 3 components that are orthonormal to \mathbf{u_1} and to each other.
I think that's the hard part.😭 The idea is simple, but the process takes a lot of calculation.
My thinking: two vectors are orthogonal if their dot product is zero.

\displaystyle \bold{u_1}\cdot \bold{u_2} = 0

\displaystyle \frac{1}{\sqrt{2}}\begin{bmatrix}1 \\ 0 \\ 1 \end{bmatrix}\cdot \begin{bmatrix}a \\ b \\ c \end{bmatrix} = 0

\displaystyle \frac{1}{\sqrt{2}}\times a + 0\times b + \frac{1}{\sqrt{2}} \times c = 0

\displaystyle \frac{1}{\sqrt{2}}(a + c) = 0

\displaystyle a + c = 0

\displaystyle a = -c

\displaystyle \bold{u_2} = \begin{bmatrix}1 \\ b \\ -1 \end{bmatrix}

The system doesn't give the value of \displaystyle b.☹️
 
You can use the same trick as before. You had \mathbf{v_1}\perp \mathbf{v_2} with (up to a factor) \mathbf{v_1}=(1,1) and \mathbf{v_2}=(1,-1). This situation remains if we add a zero in the middle:
\dfrac{1}{\sqrt{2}} \begin{pmatrix}1\\0\\1 \end{pmatrix}=\mathbf{u_1} \perp \mathbf{u_2}=\dfrac{1}{\sqrt{2}}\begin{pmatrix}1\\0\\-1\end{pmatrix}.
But, yes, you can choose any value for b. But that would make things unnecessarily complicated. b=0 is fine. Especially as it allows us to write down \mathbf{u_3} immediately. With b=0, the two vectors \mathbf{u_1},\mathbf{u_2} span a two-dimensional plane in \mathbb{R}^3 with no disturbance from the coordinate in the middle, since both are zero. This means that the plane they span is the (x,z)-plane! So the y-coordinate spans the missing dimension that is automatically perpendicular to the (x,z)-plane, i.e. we get \mathbf{u_3}=\begin{pmatrix}0\\1\\0\end{pmatrix} as the third vector for free.
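
(Added sketch, assuming numpy: a check of the vectors from this trick, plus a more mechanical way to complete u_1 to an orthonormal basis when no trick is visible.)

import numpy as np

u1 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
u2 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
u3 = np.array([0.0, 1.0, 0.0])

U = np.column_stack([u1, u2, u3])
print(np.allclose(U.T @ U, np.eye(3)))   # True: the three vectors are orthonormal

# Mechanical alternative (in effect Gram-Schmidt): QR decomposition of [u1, e1, e2, e3].
Q, _ = np.linalg.qr(np.column_stack([u1, np.eye(3)]))
Q[:, 0] *= np.sign(Q[:, 0] @ u1)         # make sure the first column is +u1, not -u1
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: another valid orthonormal completion of u1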
 
Thank you very much, fresh_42.

You explained it nicely.

But, yes, you can choose any value for b. But that would make things unnecessarily complicated. b=0 is fine.
This is the first time I've seen this; it'll help me with future questions.

This means that the plane they span is the (x,z)-plane! So the y-coordinate spans the missing dimension that is automatically perpendicular to the (x,z)-plane, i.e. we get \mathbf{u_3}=\begin{pmatrix}0\\1\\0\end{pmatrix} as the third vector for free.
I like this trick.
If I don't see a trick, I can just take the cross product \displaystyle \bold{u_1} \times \bold{u_2}, right?
I've done cross products before; they take a lot of calculation and I didn't want that this time.

So the solution:

\displaystyle A = U\Sigma V^{T} = \begin{bmatrix}\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\ 0 & 1 & 0 \\ \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix}2 & 0 \\ 0 & 0 \\ 0 & 0\end{bmatrix}\begin{bmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix}

If \displaystyle \sigma_1 = 2 and \displaystyle \sigma_2 = 1, then \displaystyle \Sigma = \begin{bmatrix}2 & 0 \\ 0 & 1 \\ 0 & 0\end{bmatrix}, right?

I see the sigma matrix has only \displaystyle \sigma_1 and \displaystyle \sigma_2, so why is it necessary to add the last row? I mean, must we make the \displaystyle \Sigma matrix \displaystyle 3 \times 2 to match matrix \displaystyle A, or is there another reason?

I also see that the matrix \displaystyle V = V^{T}; I know it's symmetric. Does it mean anything else in this case?
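
(Added sketch, assuming numpy: a numerical check of this product and of the cross-product remark above.)

import numpy as np

A = np.array([[1.0, 1.0], [0.0, 0.0], [1.0, 1.0]])
U = np.array([[1/np.sqrt(2), 0.0,  1/np.sqrt(2)],
              [0.0,          1.0,  0.0],
              [1/np.sqrt(2), 0.0, -1/np.sqrt(2)]])
Sigma = np.array([[2.0, 0.0], [0.0, 0.0], [0.0, 0.0]])
Vt = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # V^T (here equal to V)

print(np.allclose(U @ Sigma @ Vt, A))    # True: U Sigma V^T reproduces A

# The cross product also gives the remaining orthonormal direction:
print(np.cross(U[:, 0], U[:, 2]))        # [0., 1., 0.], i.e. (0, 1, 0) up to sign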
 
There is no \sigma_2, since the sigmas that constitute \Sigma are only the \sigma_i with i\leq r, and r, which counts the non-zero eigenvalues, is r=1 because \lambda_2=0 and only \lambda_1\neq 0. This means that
\Sigma=\begin{pmatrix}2&0\\0&0\\0&0\end{pmatrix}.


In case your question was "Given another example with \sigma_1=2 and \sigma_2=1, what would we get for \Sigma?", then you are right. The paper orders the sigmas from large to small, so \sigma_1=2>\sigma_2=1>0 and \Sigma= \begin{pmatrix}2&0\\0&1\\0&0\end{pmatrix}. Yes, if that was behind that little word "if", then you are right.


I think it doesn't matter, since theorem 3.2 only requires that
\mathbf{u_1}=\sigma_1^{-1}A\mathbf{v_1}=\begin{pmatrix} 1/\sqrt{2}\\0\\1/\sqrt{2}\end{pmatrix}
and the rest of the matrix U is just any orthonormal complement. I have written U with switched columns: \mathbf{u_3}=(0,1,0)^T second and the (extended) \mathbf{v_2} last.


Hint: calculations will be a little easier if you write U=\dfrac{1}{\sqrt{2}}\begin{pmatrix}1&0&1\\0&\sqrt{2}&0\\1&0&-1\end{pmatrix} and V=\dfrac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix}.

By the way: Thanks for the question. I learned SVD now and the paper is nice to have since it is written as a recipe.
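
(Added sketch, assuming numpy: checking the factored form from the hint and comparing with numpy's built-in SVD.)

import numpy as np

A = np.array([[1.0, 1.0], [0.0, 0.0], [1.0, 1.0]])
U = np.array([[1.0, 0.0, 1.0],
              [0.0, np.sqrt(2.0), 0.0],
              [1.0, 0.0, -1.0]]) / np.sqrt(2.0)
Sigma = np.array([[2.0, 0.0], [0.0, 0.0], [0.0, 0.0]])
V = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

print(np.allclose(U @ Sigma @ V.T, A))      # True
print(np.linalg.svd(A, compute_uv=False))   # [2., 0.]: agrees with sigma_1 = 2, sigma_2 = 0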
 
By the way: Thanks for the question. I learned SVD now and the paper is nice to have since it is written as a recipe.
You mean you hadn't solved this question before?:eek: Then you're a very clever teacher.

The paper orders the sigmas from large to small, so \sigma_1=2>\sigma_2=1>0 and \Sigma= \begin{pmatrix}2&0\\0&1\\0&0\end{pmatrix}. Yes, if that was behind that little word "if", then you are right.
You mean the order of the sigmas is important, with the largest value coming first?
If \displaystyle \sigma_1 = 2 and \displaystyle \sigma_2 = 5, then

\displaystyle \Sigma = \begin{bmatrix}5 & 0 \\ 0 & 2 \\ 0 & 0\end{bmatrix}, right?

If both \displaystyle \sigma_1 \neq 0 and \displaystyle \sigma_2 \neq 0, which one do I choose to work with in the formula
\displaystyle \sigma_i^{-1}A\mathbf{v_i}?

Hint: calculations will be a little easier if you write U=\dfrac{1}{\sqrt{2}}\begin{pmatrix}1&0&1\\0&\sqrt{2}&0\\1&0&-1\end{pmatrix} and V=\dfrac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix}.
I think I like this notation. I'll adopt it in future questions.:)

attempt not attemb
Thanks.
English isn't my first language.😞
 
You mean you hadn't solved this question before?:eek: Then you're a very clever teacher.

I may have seen it before but I didn't remember. The paper deserves merit. It was my second hit on Google. The first one was from MIT but was less convincing.

I remember a dialogue with my professor at university when I saw the announcement of his courses for the upcoming semester. I commented: "I didn't know you are familiar with <some topic I can't remember which>." He replied: "I am not. I want to learn it. That's why I'm holding the lecture!" Being forced to explain something means being prepared for some unusual questions means having to learn it deeper than just "reading it".

You mean the order of the sigmas is important, with the largest value coming first?
If \displaystyle \sigma_1 = 2 and \displaystyle \sigma_2 = 5, then

\displaystyle \Sigma = \begin{bmatrix}5 & 0 \\ 0 & 2 \\ 0 & 0\end{bmatrix}, right?

Yes, that's correct. I think the order is only used to avoid discrepancies with the ordering of the columns of the three matrices U,\Sigma,V. Their columns have to match the corresponding values \sigma_k so that the equations will be correct. Ordering the \sigma_k by size makes it clearer and avoids confusion.

If both \displaystyle \sigma_1 \neq 0 and \displaystyle \sigma_2 \neq 0, which one do I choose to work with in the formula
\displaystyle \sigma_i^{-1}A\mathbf{v_i}?

Your matrix in the example is correct. The paper says \sigma_1\ge \sigma_2\ge \ldots\ge \sigma_n\ge 0 and thus establishes an order for column 1 to column n. I would bet that the reverse order would work as well, as long as we use the reverse order of column vectors in the matrices, too. But "filling up the rest" is much clearer than starting with the arbitrariness in this rest and keeping track of its size only to begin with it. Remember that we had \mathbf{u_1},\ldots,\mathbf{u_r} and any orthonormal complement for \mathbf{u_{r+1}},\ldots,\mathbf{u_n}. Now imagine writing: start with an arbitrary orthonormal complement \mathbf{u_1},\ldots,\mathbf{u_{n-r}} of \mathbf{u_{n-r+1}},\ldots,\mathbf{u_n}, etc. That would probably work, too, but would be much more inconvenient.
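
(One last added sketch, assuming numpy and using a made-up matrix B purely for illustration: when several singular values are non-zero, the theorem's formula u_i = sigma_i^{-1} A v_i is applied once per index i <= r, with the sigmas ordered from largest to smallest.)

import numpy as np

B = np.array([[3.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # hypothetical example with two non-zero sigmas

lam, V = np.linalg.eigh(B.T @ B)
order = np.argsort(lam)[::-1]               # sort eigenvalues (and eigenvectors) large to small
lam, V = lam[order], V[:, order]
sigma = np.sqrt(lam)                        # [3., 1.]

U_cols = [B @ V[:, i] / sigma[i] for i in range(len(sigma)) if sigma[i] > 1e-12]
print(sigma)
print(np.column_stack(U_cols))              # the first r columns of U, one per non-zero sigma
print(np.linalg.svd(B, compute_uv=False))   # [3., 1.]: same descending order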
 
Yes I thought so, just trying to help :)
I do appreciate your help.🙏

I may have seen it before but I didn't remember. The paper deserves merit. It was my second hit on Google. The first one was from MIT but was less convincing.

I remember a dialogue with my professor at university when I saw the announcement of his courses for the upcoming semester. I commented: "I didn't know you are familiar with <some topic I can't remember which>." He replied: "I am not. I want to learn it. That's why I'm holding the lecture!" Being forced to explain something means being prepared for some unusual questions means having to learn it deeper than just "reading it".
Nice story, I'll write it down.:)

Yes, that's correct. I think the order is only used to avoid discrepancies with the ordering of the columns of the three matrices U,\Sigma,V. Their columns have to match the corresponding values \sigma_k so that the equations will be correct. Ordering the \sigma_k by size makes it clearer and avoids confusion.
Thanks.

Your matrix in the example is correct. The paper says \sigma_1\ge \sigma_2\ge \ldots\ge \sigma_n\ge 0 and thus establishes an order for column 1 to column n. I would bet that the reverse order would work as well, as long as we use the reverse order of column vectors in the matrices, too. But "filling up the rest" is much clearer than starting with the arbitrariness in this rest and keeping track of its size only to begin with it. Remember that we had \mathbf{u_1},\ldots,\mathbf{u_r} and any orthonormal complement for \mathbf{u_{r+1}},\ldots,\mathbf{u_n}. Now imagine writing: start with an arbitrary orthonormal complement \mathbf{u_1},\ldots,\mathbf{u_{n-r}} of \mathbf{u_{n-r+1}},\ldots,\mathbf{u_n}, etc. That would probably work, too, but would be much more inconvenient.
Thank you very much, fresh_42.

I appreciate your nice explanation very much.🙏
 