What a confusing exercise. It seems to me that the unit of measurement is "seconds" because that's how the measurement is reported.
Tenths of a second are not a standard unit of time measurement. If a tenth of a second were really the unit of the given measurement, the period of time would have been reported as 287 tenth-seconds, not 28.7 seconds.
Are they trying to ask about the smallest fraction of a unit used to approximate the period of time measured here? That would be 1/10th of a second.
The absolute error of the measured quantity is:
|u - 28.7|
where u is the exact period of time measured, in seconds.
The exact period of time certainly lies between 28 seconds and 29 seconds. More precisely: if the reading 28.7 was rounded to the nearest tenth, the exact time lies between 28.65 and 28.75 seconds; if it was truncated to tenths, it lies between 28.7 and 28.8 seconds.
How could one choose between those limits without knowing how the measurement was taken? The two readings are compared below.
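To make the comparison concrete, here is a small sketch of the error bound in each case. The only assumption is that the instrument reports whole tenths of a second; whether it rounds or truncates is exactly what the exercise leaves unstated.

```latex
% Bounds on the absolute error |u - 28.7| for a reading of 28.7 s,
% assuming only that the instrument reports whole tenths of a second.

% Case 1: the reading was rounded to the nearest tenth.
% The true value lies within half a least count of the reading:
\[ 28.65 \le u \le 28.75 \quad\Longrightarrow\quad |u - 28.7| \le 0.05\ \mathrm{s} \]

% Case 2: the reading was truncated to tenths.
% The true value can exceed the reading by up to a full least count:
\[ 28.7 \le u < 28.8 \quad\Longrightarrow\quad 0 \le u - 28.7 < 0.1\ \mathrm{s} \]
```

Either way the error stays below one tenth of a second, which is why I would read "1/10th of a second" in the exercise as the precision of the measurement rather than its unit.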