Hey, I'm taking the MIT OpenCourseWare multivariable calculus course. In their problem list there is the following exercise: suppose a point P moves on the surface of a sphere centered at the origin, and let OP = r(t) = <x(t), y(t), z(t)>. Show, without using coordinates, that the velocity vector v is always perpendicular to r (as a hint, it tells you that the product rule (r . s)' = r' . s + r . s' is also valid in space).
I thought about it, and my conclusion was that if I assumed that:
r . r = a constant
I could differentiate both sides and get that 2 (r . v) = 0, leading me to the conclusion that the vectors were perpendicular.
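Spelled out, the step I had in mind (just a sketch, taking for granted that r . r really is constant in t and using the product rule from the hint):

$$\frac{d}{dt}(\mathbf{r}\cdot\mathbf{r}) = \mathbf{r}'\cdot\mathbf{r} + \mathbf{r}\cdot\mathbf{r}' = 2\,(\mathbf{r}\cdot\mathbf{v}) = \frac{d}{dt}(\text{constant}) = 0,$$

so r . v = 0 at every time t, i.e. the two vectors are perpendicular.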
However, I couldn't quite justify that assumption unless (I think) I was talking about the vector r(t1) at a fixed time t1, and therefore dotting the exact same vector with itself (r(t1) . r(t1)), and from that getting r(t1) . v(t1) = 0.
I checked the answer that MIT provided, and they said that since P is on a sphere, the magnitude of r is a constant a (a being the radius of the sphere). Okay, but from there they say you can conclude that r . r = a constant (the constant being a²). I don't understand why that is the case: if you compute the scalar product of, say, r(t1) and r(t2), the cosine of the angle between them will have some value that might be different from the one you get for r(t1) and r(t3), which means you couldn't generalize this to the case r . r.
The other option is that you're not generalizing, but dotting r(t1) with r(t1) itself, in which case you would indeed get a², since the angle between the vectors is zero (so its cosine is 1). But if that is the case, why do you need the hypothesis that P moves on a sphere? Wouldn't dotting any vector with itself always give you its magnitude squared, sphere or no sphere? That, however, would lead to the conclusion that the velocity vector is ALWAYS perpendicular to the position vector, for any position vector, even off a sphere, which seems fishy (but I don't know, maybe it is correct).
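For a concrete sanity check (my own example, not from the course), take a path that is clearly not on any sphere centered at the origin:

$$\mathbf{r}(t) = \langle t, 0, 0\rangle, \qquad \mathbf{v}(t) = \langle 1, 0, 0\rangle, \qquad \mathbf{r}(t)\cdot\mathbf{v}(t) = t \neq 0 \text{ for } t \neq 0.$$

Here r and v are parallel rather than perpendicular, so the "always perpendicular, even off a sphere" conclusion can't be right in general; I just don't see where my reasoning above breaks down.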
I'm a bit confused here, hope someone can justify why the sphere hypothesis matters and how you can make the generalization that MIT provided.