Hello Everyone!
I have a problem that I hope someone can shed some light on. I think it's partly me not understanding what's going on, and partly confusion about how to take what I know and apply it to my issue.
For my problem, I have a ball at rest with an IMU in it. The ball is oriented so the Z axis is aligned with gravity. If we put this in terms of gravity (g), the accelerometer reads (-0.037, 0.114, -1.013). All vectors and formulas from here on assume (x, y, z) ordering. Obviously the sensor is not perfectly aligned, so small components show up on the other axes. I can also see from my gyroscope that I am reading (-1.88, -4.88, 11.02) in degrees per second. Since this is at rest, we can call this the offset baseline and zero it out. I haven't implemented the magnetometer numbers yet, so we can ignore those for the time being.
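Zeroing out that offset baseline can be as simple as averaging a short stationary log and subtracting it from every later sample. A minimal sketch in Python (the rest_log values here are made up to resemble the reading above; real samples would come from the IMU):

```python
import numpy as np

# Hypothetical at-rest gyro log in deg/s; real samples would come from the IMU.
# Values are invented to resemble the (-1.88, -4.88, 11.02) reading above.
rest_log = np.array([
    [-1.90, -4.90, 11.00],
    [-1.85, -4.85, 11.05],
    [-1.88, -4.88, 11.02],
])

bias = rest_log.mean(axis=0)  # per-axis offset baseline

def debias(raw_sample):
    """Subtract the stationary baseline from a raw gyro sample."""
    return np.asarray(raw_sample) - bias

corrected = debias([-1.88, -4.88, 11.02])  # should now sit near zero
```

The same averaging idea works for the small accelerometer misalignment components, as long as the ball is genuinely at rest while the log is taken.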
Now suppose I toss the ball into the air and record the accelerometer and gyroscope measurements at time T (see attached image). What I am really trying to solve is the vertical position displacement from the start point to time T. This is a short test, pretty much just tracking the ball for a few seconds as you throw it up and down, so I am not too concerned with the long-term drift of dead reckoning that a lot of papers highlight. Since this is a non-fixed object, the ball can obviously rotate as you toss it, so you can't use just the acceleration numbers, because the orientation changes as it travels. My reading and research has taken me through the whole field of rotation matrices, Euler angles, quaternions, AHRS, Madgwick filters, Mahony filters, Kalman filters, complementary filters, etc. Most of the papers deal more with orientation tracking than with position. I think that's probably because most systems want position over time, so the localization papers bring in more sensors, but I believe for my application I can get away with enough accuracy, since I am resetting my inertial frame before every toss and the data is so short (in time) that a lot of the variance can hopefully be mitigated. I guess I am looking for something similar to robot localization, but in the vertical direction.
From all this, I believe these are the steps I need to take, and I wanted to verify them.
- Calibrate the sensors (not every time you toss it, but every once in a while)
- Orient the ball into a set position that aligns Z closely with gravity
- Use the sensor values at this point to set the initial frame values
- Once the ball starts to travel (triggered by an accelerometer change greater than a set threshold), start logging the data
- Once the ball comes to rest, the test is over and we can process the data
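The trigger in the steps above can be a simple check that the acceleration magnitude has moved away from the ~1 g seen at rest. A sketch, where the 0.15 g threshold is an arbitrary tuning assumption:

```python
import numpy as np

THRESHOLD_G = 0.15  # deviation from 1 g that counts as motion (tuning assumption)

def is_moving(accel_g, threshold=THRESHOLD_G):
    """True once the accel magnitude leaves the ~1 g band seen at rest."""
    return abs(np.linalg.norm(accel_g) - 1.0) > threshold

at_rest = [-0.037, 0.114, -1.013]   # the at-rest reading from above
in_flight = [0.30, -0.20, -1.90]    # made-up sample during a toss

flags = [is_moving(at_rest), is_moving(in_flight)]  # [False, True]
```

The same test, held for some minimum number of consecutive samples, can serve as the "ball came to rest" condition that ends the log.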
Processing steps:
- Set all data at T=0 as the inertial frame for calculations
- For T > 0, we can use the new data to map out a few different items of interest
- From the 3 acceleration numbers
- We can calculate the acceleration magnitude, [MATH]F_{n} = \sqrt{A_{x,n}^2 + A_{y,n}^2 + A_{z,n}^2}[/MATH]
- We can normalize this to a unit vector, [MATH]\hat{F}_{n} = \langle A_{x,n}/F_{n},\ A_{y,n}/F_{n},\ A_{z,n}/F_{n} \rangle[/MATH]
- We can also get the direction angles (the tilt relative to each axis, not angular acceleration), [MATH]\Theta_{n} = \langle \arccos(A_{x,n}/F_{n}),\ \arccos(A_{y,n}/F_{n}),\ \arccos(A_{z,n}/F_{n}) \rangle[/MATH]
- From the 3 gyro numbers
- We can calculate the gyro vector (same as acceleration above)
- We can normalize the gyro vector (same as acceleration above)
- We can integrate the rates to get angles that will be fused with the accelerometer angles, [MATH]\Theta_{Gx,n} = \Theta_{Gx,n-1} + G_{x,n} \, \Delta t[/MATH], and likewise for each axis
- Now that we have extracted all the items we need, this is the point where the conversion and the quaternion come in to adjust for the orientation difference
- This is where all the white papers come into play: we get into the matrix world and convert the data vectors we have into a quaternion
- I can go either with a magnetometer or without one. With my test being so short, I am not sure whether the magnetometer's added data would make a huge difference
- The acceleration quaternion equation is attached in the image.
- Now that we're rotated, we can play the filter game to squeeze out higher accuracy
- A complementary filter uses the accelerometer and gyro angles to complement each other into the final angle. We can also apply a moving average to filter out samples that could be bad
- Kalman filters I am not an expert on, but they are another way to do this, hah.
- Now that we're happy with our rotation, hopefully the force is corrected and matches what we expect. In our case, the acceleration would be in the downward direction (since the ball is slowing down after the initial force and gravity is bringing it back down).
- We can then take that acceleration and turn it into velocity, and ultimately into position, through integration and double integration.
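Putting the last few steps together: integrate the (debiased) gyro into a quaternion, rotate each accelerometer sample into the world frame, subtract gravity, then double-integrate the vertical component. The sketch below is a gyro-only orientation (no complementary/Madgwick correction, so it would drift over longer runs) and assumes the accelerometer reports specific force with +1 g on z when at rest z-up; the -1 g z reading above suggests a real device may need flipped signs, so adjust accordingly. All names and the synthetic data (a stationary ball spinning about its x-axis) are my own invention for illustration; with perfect sensors the recovered vertical displacement should stay near zero:

```python
import numpy as np

DT = 0.005                             # sample period in s (assumed 200 Hz)
G_WORLD = np.array([0.0, 0.0, -9.81])  # gravity in the world frame, z up

def quat_mul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate(q, v):
    """Rotate body-frame vector v into the world frame with unit quaternion q."""
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), quat_conj(q))[1:]

def step_quat(omega_rad, dt):
    """Incremental rotation quaternion from body rates over one sample."""
    angle = np.linalg.norm(omega_rad) * dt
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = omega_rad / np.linalg.norm(omega_rad)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def vertical_displacement(accel_body, gyro_body, dt=DT):
    """Dead-reckon vertical position from specific force (m/s^2) + body rates."""
    q = np.array([1.0, 0.0, 0.0, 0.0])  # body-to-world, starts aligned
    vz = 0.0
    z = 0.0
    for f, w in zip(accel_body, gyro_body):
        q = quat_mul(q, step_quat(w, dt))   # gyro-only orientation update
        a_world = rotate(q, f) + G_WORLD    # remove gravity in the world frame
        vz += a_world[2] * dt               # integrate to vertical velocity...
        z += vz * dt                        # ...and again to vertical position
    return z

# Synthetic check: a stationary ball spinning at 90 deg/s about body x.
# The accelerometer sees gravity sweep through its y/z axes, but after
# rotation compensation the vertical displacement should stay near zero.
n = 400
w_body = np.array([np.radians(90.0), 0.0, 0.0])
gyro = [w_body] * n
accel = []
q_true = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(n):
    q_true = quat_mul(q_true, step_quat(w_body, DT))
    accel.append(rotate(quat_conj(q_true), -G_WORLD))  # specific force at rest

z_disp = vertical_displacement(accel, gyro)  # ~0 m
```

In a real toss the errors from gyro noise, accel noise, and timestamp jitter accumulate quadratically through the double integration, which is why fusing in the accelerometer direction angles (complementary filter) before the gravity subtraction is worth the effort even on a few-second run.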
Summary: I want to know the vertical position of a ball with an IMU over a small period of time. I need assistance checking my math and understanding how the quaternion plays into this. I am pretty sure that most of my assumptions are correct, but I need someone to verify and make it obvious.