Just a bit of a nettlesome problem I'd like some help with.

Elroy

Hi,

My first post to these forums. I hope this is a good place for this post.

I have a problem with complex numbers, but it can be framed to avoid concerning ourselves with complex numbers, and I shall do so. Also, it's mostly an algebra problem, but also involves a bit of geometry. Let's say I've got two variables, alpha (\(\displaystyle \alpha\)) and beta (\(\displaystyle \beta\)), such that:

\(\displaystyle \alpha = A_\alpha + B_\alpha\)

and

\(\displaystyle \beta = A_\beta + B_\beta\)

Now, there are additional relationships between alpha and beta (which I will outline shortly), but it can be shown that \(\displaystyle B_\alpha\) can always be forced to zero. That's my problem. I need help working out the algebra such that \(\displaystyle B_\alpha = 0\), where all the other equalities stay unchanged.

Okay, first, no matter how we change \(\displaystyle B_\alpha\), or, for that matter, \(\displaystyle A_\alpha\), \(\displaystyle A_\beta\), or \(\displaystyle B_\beta\), the values for both \(\displaystyle \alpha\) and \(\displaystyle \beta\) must remain unchanged.

The primary relationship between \(\displaystyle \alpha\) and \(\displaystyle \beta\) is:

\(\displaystyle A_\alpha^2 + B_\alpha^2 + A_\beta^2 + B_\beta^2 = 1\)

I believe that this relationship provides enough to get most of it done. However, let's not forget that any of \(\displaystyle A_\alpha\), \(\displaystyle B_\alpha\), \(\displaystyle A_\beta\), or \(\displaystyle B_\beta\) can be negative. Therefore, the above relationship doesn't sort out our signs for us. That's where a bit of geometry comes in. To sort out the geometry, I shall introduce theta (\(\displaystyle \theta\)) and phi (\(\displaystyle \varphi\)). These are polar coordinates (in radians) such that:

\(\displaystyle 0 \le \theta \le \pi\)
and
\(\displaystyle 0 \le \varphi < 2\pi\)

It may be useful to think of these in terms of a unit sphere (a sphere in 3D space with radius = 1, and centered on the 0,0,0 origin). Then, we can imagine a unit vector (a vector anchored on the 0,0,0 origin, that's one unit long, and pointing out in any direction, with the vector's arrow just touching the surface of our unit sphere):

[Attached image: Unit_Sphere.jpg]


\(\displaystyle \theta\) is an angle of rotation about the Y axis. It rotates the vector from straight up (0) to straight down (\(\displaystyle \pi\)). \(\displaystyle \varphi\) is an angle of rotation about the Z axis; looking straight down from the top of the sphere, it swings the vector around the sphere. Together, \(\displaystyle \theta\) and \(\displaystyle \varphi\) give a unique position for every point on the surface of the sphere, with the exception of \(\displaystyle \theta = 0\) and \(\displaystyle \theta = \pi\), where the value of \(\displaystyle \varphi\) doesn't matter (it just twists the vector rather than rotating it).
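For concreteness, here's a minimal sketch (Python, purely illustrative; the function name is mine) of how \(\displaystyle \theta\) and \(\displaystyle \varphi\) map to a point on that sphere, assuming the usual convention of measuring \(\displaystyle \theta\) down from the +Z axis:

import math

def unit_vector(theta, phi):
    # Point on the unit sphere: theta measured down from the +Z axis
    # (0 = straight up, pi = straight down), phi measured about the Z axis.
    x = math.sin(theta) * math.cos(phi)
    y = math.sin(theta) * math.sin(phi)
    z = math.cos(theta)
    return x, y, z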

Okay, here are the relationships between our \(\displaystyle \alpha\) and our \(\displaystyle \beta\) with our newly introduced \(\displaystyle \theta\) and \(\displaystyle \varphi\). But this is where my problem comes in. These relationships assume that \(\displaystyle B_\alpha\) is already zero. So, assuming that \(\displaystyle B_\alpha\) is zero, here are the relationships:

\(\displaystyle \theta = 2 \times ArcCos(A_\alpha)\)
and
\(\displaystyle \varphi = ArcCos(A_\beta / Sin(\theta / 2))\), recognizing that \(\displaystyle \theta\) must be calculated first.
... or alternatively ...
\(\displaystyle \varphi = ArcSin(B_\beta / Sin(\theta / 2))\), still recognizing that \(\displaystyle \theta\) must be calculated first.

However, there's still one more consideration for calculating \(\displaystyle \varphi\)...
If \(\displaystyle B_\beta\) is less than zero, we must subtract the computed \(\displaystyle \varphi\) from \(\displaystyle 2\pi\). So, we might add something like:
IF \(\displaystyle B_\beta < 0\) THEN \(\displaystyle \varphi = 2\pi - \varphi\)
(and this needs to be done regardless of which of the above methods is used to initially calculate \(\displaystyle \varphi\)).
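
In case it helps, here's how those formulas might be coded up; just a minimal Python sketch (names are illustrative), still assuming \(\displaystyle B_\alpha = 0\):

import math

def angles_from_components(A_alpha, A_beta, B_beta):
    # theta = 2 * ArcCos(A_alpha)
    theta = 2.0 * math.acos(A_alpha)
    s = math.sin(theta / 2.0)
    if s == 0.0:
        # sin(theta/2) = 0 only at theta = 0, where phi doesn't matter anyway
        return theta, 0.0
    # phi = ArcCos(A_beta / Sin(theta / 2)), with theta calculated first
    phi = math.acos(A_beta / s)
    if B_beta < 0:
        # the quadrant correction described above
        phi = 2.0 * math.pi - phi
    return theta, phi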

Still assuming that \(\displaystyle B_\alpha\) is zero, if we have \(\displaystyle \theta\) and \(\displaystyle \varphi\), we can solve for \(\displaystyle A_\alpha\), \(\displaystyle A_\beta\), and \(\displaystyle B_\beta\) as follows:

\(\displaystyle A_\alpha = Cos(\theta / 2)\)
\(\displaystyle A_\beta = Cos(\varphi) \times Sin(\theta / 2)\)
\(\displaystyle B_\beta = Sin(\varphi) \times Sin(\theta / 2)\)
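
And a matching sketch for that direction:

import math

def components_from_angles(theta, phi):
    A_alpha = math.cos(theta / 2.0)                  # Cos(theta / 2)
    A_beta = math.cos(phi) * math.sin(theta / 2.0)   # Cos(phi) * Sin(theta / 2)
    B_beta = math.sin(phi) * math.sin(theta / 2.0)   # Sin(phi) * Sin(theta / 2)
    return A_alpha, A_beta, B_beta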

Recognizing that the first two quadrants of the XY plane (i.e., rotations around the Z axis, i.e., \(\displaystyle \varphi\)) are positive for Y, with quadrants #3 and #4 being negative for Y, this should provide the information necessary to sort out the signs of the final \(\displaystyle A_\alpha\), \(\displaystyle A_\beta\), and \(\displaystyle B_\beta\) based on the signs of the original \(\displaystyle A_\alpha\), \(\displaystyle B_\alpha\), \(\displaystyle A_\beta\), and \(\displaystyle B_\beta\). It may also be useful to envision the XZ plane (with rotations around the Y axis); in this plane, the first and second quadrants have negative Y values (in the 3D space).

Given all of that, I can tell you one more piece of information. If \(\displaystyle B_\beta\) starts out as zero and \(\displaystyle B_\alpha\) starts out as positive, then, once \(\displaystyle B_\alpha\) is forced to zero, \(\displaystyle B_\beta\) will be negative (but not necessarily the same value). And if \(\displaystyle B_\beta\) starts out as zero and \(\displaystyle B_\alpha\) starts out as negative, then, once \(\displaystyle B_\alpha\) is forced to zero, \(\displaystyle B_\beta\) will be positive (but not necessarily the same value).

However, in the beginning, it may be that neither \(\displaystyle B_\alpha\) nor \(\displaystyle B_\beta\) is zero. And again, that's the problem: if \(\displaystyle B_\alpha\) isn't zero, I need to know how to change \(\displaystyle A_\alpha\), \(\displaystyle A_\beta\), and \(\displaystyle B_\beta\) such that \(\displaystyle B_\alpha\) becomes zero.

Just to restate the problem:

Final \(\displaystyle \alpha\) must equal original \(\displaystyle \alpha\).
Final \(\displaystyle \beta\) must equal original \(\displaystyle \beta\).
In all cases, \(\displaystyle A_\alpha^2 + B_\alpha^2 + A_\beta^2 + B_\beta^2 = 1\).

The task is, if an input (original) \(\displaystyle B_\alpha\) isn't zero, how do we change \(\displaystyle A_\alpha\), \(\displaystyle A_\beta\), and \(\displaystyle B_\beta\) such that \(\displaystyle B_\alpha\) is zero, without changing our equality criteria?

Thanks in Advance for All Helping Posts,
Elroy
 
If I'm understanding you correctly, I believe you are overworking the problem. Restating the problem as I understand it:
\(\displaystyle \alpha,\ A_\alpha,\ B_\alpha,\ \beta,\ A_\beta,\) and \(\displaystyle B_\beta\) are given constants such that
\(\displaystyle \alpha = A_\alpha + B_\alpha\),
\(\displaystyle \beta = A_\beta + B_\beta\),
and
\(\displaystyle A_\alpha^2 + B_\alpha^2 + A_\beta^2 + B_\beta^2 =1\).

You would like to find elements
A = \(\displaystyle A_\alpha + dA\)
B = 0
C = \(\displaystyle A_\beta + dC\)
and
D = \(\displaystyle B_\beta + dD\)
such that
\(\displaystyle \alpha = A = A + B\)
\(\displaystyle \beta = C + D\)
and
\(\displaystyle A^2 + B^2 + C^2 + D^2 = 1\)

Looking at the equations we see that
dA = \(\displaystyle B_\alpha\)
dD = - dC
Using those two equations and expanding
\(\displaystyle A^2 + B^2 + C^2 + D^2 = 1\)
you will get a quadratic equation in dC. Solve it and choose a root.
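
Spelling out where those two come from (nothing new, just the substitutions):

\(\displaystyle \alpha = A_\alpha + B_\alpha = A + B = (A_\alpha + dA) + 0 \;\Rightarrow\; dA = B_\alpha\)

\(\displaystyle \beta = A_\beta + B_\beta = C + D = (A_\beta + dC) + (B_\beta + dD) \;\Rightarrow\; dD = -dC\)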
 
Ahhh Ishuda,

I believe that you may be onto something. And yes, I thought I might be overworking it. I haven't verified your ideas yet, but I will be doing that today. And yes, I somewhat knew that there would be two answers, as it's acceptable to multiply both \(\displaystyle \alpha\) and \(\displaystyle \beta\) by -1 and still have everything work out fine. Traditionally, \(\displaystyle \alpha\) (= A = A + B) is taken as positive, but it really doesn't matter. I'm suspecting that one root will take us one way whereas the other root will take us the other.

I'll let you know if it works out.

Elroy
 
Hi Ishuda,

I'm trying to follow your logic, but I'm still getting stuck. Your concise reformulation of the problem is excellent, and I completely agree with your...
\(\displaystyle \Delta\)A = \(\displaystyle B_\alpha\)
\(\displaystyle \Delta\)D = -\(\displaystyle \Delta\)C
...equivalencies.

However, as I try to expand \(\displaystyle A^2 + B^2 + C^2 + D^2 = 1\), I run into problems.

Here's my initial pass at expansion:

\(\displaystyle (A_\alpha + B_\alpha)^2 + 0 + (A_\beta + \Delta C)^2 + (B_\beta - \Delta C)^2 = 1\)

Multiplying this all out, we'd get...

\(\displaystyle A_\alpha^2 + 2A_\alpha B_\alpha + B_\alpha^2 + A_\beta^2 + 2A_\beta \Delta C + \Delta C^2 + B_\beta^2 - 2B_\beta \Delta C + \Delta C^2 = 1 \)

Hmmm, I guess that could be reworked as a quadratic, as follows. This still seems like a lot of terms, but maybe that's just what it is:

\(\displaystyle 2\Delta C^2 + (2A_\beta - 2B_\beta)\Delta C + (A_\alpha^2 + 2A_\alpha B_\alpha + B_\alpha^2 + A_\beta^2 + B_\beta^2 - 1) = 0\)


Given that \(\displaystyle A_\alpha, B_\alpha, A_\beta, B_\beta\) are all constants for a particular instance of the problem, we could certainly solve for \(\displaystyle \Delta C\), and that would solve the entire problem. I'll give it a try and see what happens.

Ishuda, is this what you were thinking?

Thanks,
Elroy


 
Yes, but you can use
\(\displaystyle A_\alpha^2 + B_\alpha^2 + A_\beta^2 + B_\beta^2 =1\)
to reduce it to
\(\displaystyle \Delta C^2 + (A_\beta - B_\beta) \Delta C + A_\alpha B_\alpha = 0\)
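
If it helps, here is a minimal Python sketch (names are purely illustrative) that just solves the reduced quadratic for \(\displaystyle \Delta C\) and picks one root:

import math

def force_B_alpha_to_zero(A_a, B_a, A_b, B_b):
    # Inputs are assumed to satisfy A_a^2 + B_a^2 + A_b^2 + B_b^2 = 1.
    # Solve dC^2 + (A_b - B_b)*dC + A_a*B_a = 0 for dC.
    b = A_b - B_b
    disc = b * b - 4.0 * A_a * B_a
    if disc < 0.0:
        raise ValueError("no real root for dC with these inputs")
    dC = (-b + math.sqrt(disc)) / 2.0   # the other root is (-b - sqrt(disc)) / 2
    A = A_a + B_a                       # dA = B_a, so the new A absorbs B_a
    C = A_b + dC
    D = B_b - dC                        # dD = -dC
    return A, 0.0, C, D                 # new (A_alpha, B_alpha, A_beta, B_beta)

Either root preserves \(\displaystyle \alpha\), \(\displaystyle \beta\), and the sum-of-squares constraint; note that a real root only exists when the discriminant \(\displaystyle (A_\beta - B_\beta)^2 - 4 A_\alpha B_\alpha\) is non-negative.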
 
Ahhh, very good Ishuda,

I totally see that. I'm on the cusp of trying it out right now (i.e., coding it up). As you may have guessed, I'm working out a program to show how a Bloch sphere can show the internal workings of a qubit, and how gates are applied to it. In my first post, that's actually a partial screenshot of the program. The unit vector of the Bloch sphere can be represented as two complex numbers, but it's customary to force the first one (Alpha) to always be "real". That's where my problem came in. And also, after a gate is applied, the Alpha number can easily wind up with an "imaginary" part, and I was struggling with how to force it back to only a "real" part.

It's funny how I can work out linear algebra, but something like the quadratic can stump me.

I'll report back on how/whether it all worked.

Elroy

 
Crud,

That totally works, but now I've got another problem. I just need to beat my head on the wall for a while.

I'll be back,
Elroy
 