Bases and Dimension: Prove the plane P is a subspace of R^3

Bronn

I'm on the basis and dimension (vector spaces) chapter of my textbook and need to prove the following:


Consider the plane P in R^3 whose equation is x + y + z = 0.

Prove P is a subspace of R^3.



I suspect they don't want us to use the method of proving subspaces by showing closure under addition etc., but rather to make use of some theorems in the chapter. In particular, for this question I assume:

Theorem: Suppose that S is a finite subset of a vector space. The span of every proper subset of S is a proper subspace of span(S) if and only if S is a linearly independent set.
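(Just to check I'm reading the theorem correctly, a small example: with \(\displaystyle S=\{<1,0,0>,\ <0,1,0>\}\), which is linearly independent, the proper subsets \(\displaystyle \{<1,0,0>\}\) and \(\displaystyle \{<0,1,0>\}\) span lines, which are proper subspaces of the plane \(\displaystyle \text{span}(S)\); whereas for a dependent set like \(\displaystyle \{<1,0,0>,\ <2,0,0>\}\), the proper subset \(\displaystyle \{<1,0,0>\}\) already spans all of \(\displaystyle \text{span}(S)\).)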


So does it suffice to show that P is a basis for R^2 (linearly independent and spans R^2) and, because R^2 is a subspace of R^3, conclude that P is a subspace of R^3?
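For reference, the direct check I'm trying to avoid would go something like this, I think:

\(\displaystyle \text{if } u=(u_1,u_2,u_3) \text{ and } v=(v_1,v_2,v_3) \text{ are in } P,\) then
\(\displaystyle (u_1+v_1)+(u_2+v_2)+(u_3+v_3)=(u_1+u_2+u_3)+(v_1+v_2+v_3)=0+0=0,\)
so u + v is in P; similarly \(\displaystyle cu\) is in P since \(\displaystyle cu_1+cu_2+cu_3=c(u_1+u_2+u_3)=c\cdot 0=0,\) and the zero vector is in P.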



cheers
 
One question for you: is the plane P a finite subset of the vector space R^3? If it is, then can you please list this finite set P? Thanks.
 
To Bronn: is the set of vectors \(\displaystyle \bf{\mathcal{B}=\{<1,0,-1>~\&~<0,1,-1>\}}\) a set of linearly independent vectors? Can you prove it is?
Can you show that \(\displaystyle \bf{\text{span}(\mathcal{B})}\) is the plane \(\displaystyle x+y+z=0~?\)
 
Oh... good question. I'm having trouble answering this; my head feels so jumbled on this topic. Is it not a finite subset, because you can use an infinite variety of vectors to represent the plane?

I'll have a go...

If the vectors from the set B form the columns of a matrix A, then (A|0) can be reduced to row echelon form and has a unique solution: the combination equals 0 only when both coefficients are 0, meaning the vectors are linearly independent.

If you let the vector v = <x, y, z> be a vector in R^3, then v is in the span of B exactly when the system (A|v) has a solution,
and reducing (A|v) to RREF shows there is a solution only when x + y + z = 0.
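If I try to write that reduction out (hoping I have it right), it looks like:

\(\displaystyle (A|v)=\left(\begin{array}{cc|c} 1 & 0 & x\\ 0 & 1 & y\\ -1 & -1 & z \end{array}\right)\longrightarrow\left(\begin{array}{cc|c} 1 & 0 & x\\ 0 & 1 & y\\ 0 & 0 & x+y+z \end{array}\right),\)

so the system is consistent, i.e. v is in the span of B, exactly when x + y + z = 0.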

I'm unsure how to tie this all up into a proof for the question, though.
 
Your notation is totally mysterious to me. What the he** does a column matrix have to do with anything in vector geometry?
I have never seen the notation \(\displaystyle (A|\bf v)\) .
 
Oh, I was using the shorthand for an augmented matrix, for example https://en.wikipedia.org/wiki/Augmented_matrix
Maybe I'm using it wrong.

The method I was shown (as I understand it) is to make the set of vectors the columns of a matrix A. If Ax = 0 has a unique solution, then the set is linearly independent.
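Just as a sanity check (not as the proof), I think something like this in sympy confirms what I mean; A, v, a, b here are just my own example names:

from sympy import Matrix, linsolve, symbols

a, b = symbols('a b')

# the two vectors <1,0,-1> and <0,1,-1> as the columns of A
A = Matrix([[ 1,  0],
            [ 0,  1],
            [-1, -1]])

# Ax = 0 has only the trivial solution exactly when A has full column rank
print(A.rank() == A.cols)        # True, so the columns are linearly independent

# a sample vector with x + y + z = 0, to check it lies in the span
v = Matrix([2, 3, -5])
print(linsolve((A, v), a, b))    # {(2, 3)}, i.e. v = 2*<1,0,-1> + 3*<0,1,-1>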
 
That may be. Still, what do matrices have to do with basic vector geometry?
You asked a question about vector geometry. You want to know about the set \(\displaystyle \{(a,b,c): a+b+c=0\}\) being a subspace of \(\displaystyle \mathbb{R}^3\).
I submit to you that I gave you a linearly independent set \(\displaystyle \{<1,0,-1>,<0,1,-1>\}\) that spans the plane.
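To spell out the span claim (just a sketch): if \(\displaystyle x+y+z=0\) then \(\displaystyle z=-x-y\), so

\(\displaystyle (x,y,z)=(x,\ y,\ -x-y)=x<1,0,-1>+y<0,1,-1>,\)

and conversely every such combination has coordinate sum zero. Hence \(\displaystyle \text{span}\{<1,0,-1>,<0,1,-1>\}\) is exactly that plane, and a span is always a subspace.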
 
Sorry, I'm being unclear, or perhaps flat wrong.
In regard to proving that the set B of vectors is linearly independent, this is what I did.

If B = {v1, v2}, where v1 = <1,0,-1> and v2 = <0,1,-1>,

then for linear independence λv1 + μv2 = 0 only when λ = μ = 0.

As a system of linear equations, λ<1,0,-1> + μ<0,1,-1> = 0 gives

λ + 0 = 0
0 + μ = 0
-λ - μ = 0

which I put in an augmented matrix to solve (I don't know how to write augmented matrices in this forum). If the row reduction shows that the only solution is λ = μ = 0, then the vectors are linearly independent. That's how I was shown to prove linear independence.
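Attempting the augmented matrix here anyway, I think it and its reduction would look something like:

\(\displaystyle \left(\begin{array}{cc|c} 1 & 0 & 0\\ 0 & 1 & 0\\ -1 & -1 & 0 \end{array}\right)\longrightarrow\left(\begin{array}{cc|c} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{array}\right),\)

which gives λ = 0 and μ = 0 as the only solution.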
 
The 1st equation says λ = 0, the 2nd equation says that μ = 0, and the 3rd equation says that λ = -μ. Do you really need to reduce a matrix to see that λ = μ = 0?

In λ<1,0,-1> + μ<0,1,-1> = 0, you should see that for the sum to be 0, the 1st component requires λ = 0 and, similarly, the 2nd component requires μ = 0. Done, as both coefficients must be 0; that is, the two vectors are independent. Stop using formulas all the time and think about these things.
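Written out componentwise, that is:

\(\displaystyle \lambda<1,0,-1>+\mu<0,1,-1>=\ <\lambda,\ \mu,\ -\lambda-\mu>\ =\ <0,0,0>\ \Longrightarrow\ \lambda=\mu=0.\)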
 
Well, I never set up a system of linear equations or anything initially; I could see that they would reduce, so I skipped to writing that (A|0) can be put in row echelon form.
The rest was just trying to justify the (A|0) notation to pka. Totally sidetracked.
 
OK, I see. Then you did a good job for that part. That is all that matters!
 