Alright, so I have a question for someone who knows more about math than I do. As far as I understand it, the statement that ten minus an infinitesimal is equal to 9.999... is true. That might be totally wrong, but I'm asking so that I can learn. So here's my question:
if 10 - ε = 9.999... ,
and 9.999... = 10 ,
wouldn't that suggest ε = 0? I know that isn't true; infinitesimals have value by definition, they're just too small to measure. So, what have I done wrong?
Again, I'm not trying to say that 9.999... ≠ 10, I'm totally down with that. I just want some help understanding the flaw in my logic from somebody who's well-versed in mathematics.
(I wasn't totally sure where to put this, but I'm thinking that this is a basic arithmetic question. I don't know though. Also, I'm a 4th grader who's in way over his head, so forgive me if I ask some follow-up questions. Thanks for reading my question.)
EDIT: I am sorry. I did not initially notice that you were in 4th grade. So my original answer shown below is almost certainly incomprehensible on a stand-alone basis.
The history of the idea of infinitesimals is one of tentative acceptance during the 17th and 18th centuries, rejection during the 19th and early 20th centuries, and grudging acceptance since 1962.
The idea of an infinitesimal makes some things much easier, but it is a super-slippery concept. An infinitesimal, if you accept the idea at all, is a number that is not zero but sometimes acts like zero and sometimes does not act like zero. That's weird, but the mathematics that arises from accepting that idea is called non-standard analysis. And, as I said, once you get over the initial weirdness, it makes certain branches of mathematics sort of accord with common sense.
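To make "sometimes acts like zero" a little more concrete: a positive infinitesimal \(\displaystyle \varepsilon\) is a number satisfying

\(\displaystyle 0 < \varepsilon < \dfrac{1}{n} \text{ for every positive integer } n.\)

No real number can do that, which is why non-standard analysis has to work in a larger number system (the hyperreals) where such numbers do exist.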
But other mathematicians say that they do not want to deal with infinitesimals at all. That is standard analysis. Those mathematicians will say your question is every bit as meaningless as asking whether unicorns' tails make the best dust mops. What, then, do those mathematicians mean when they say
\(\displaystyle 0.999... \text{ forever } = 1?\)
They basically mean something like this (each implication just divides the inequality through by 3):
\(\displaystyle 0.99 < 1 < 1.02 \implies 0.33 < \dfrac{1}{3} < 0.34 \text { and}\)
\(\displaystyle 0.999 < 1 < 1.002 \implies 0.333 < \dfrac{1}{3} < 0.334 \text { and}\)
so on for however long you want to continue. So a finite sequence of decimal digits can never equal one third. But the more threes we tack on, the closer it gets to one third. So if we could continue forever, we would get to exactly one third. Of course, that is not practical, but we can get as close as we want. So we summarize that thought by saying
\(\displaystyle 0.333...\ = \dfrac{1}{3}.\)
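In fact, you can say exactly how close: with \(\displaystyle n\) threes after the decimal point,

\(\displaystyle \dfrac{1}{3} - 0.\underbrace{33...3}_{n \text{ threes}} = \dfrac{1}{3 \cdot 10^n},\)

and that gap can be made smaller than any positive number you care to name by taking \(\displaystyle n\) large enough. That is all the "..." is promising.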
We never mention the idea of infinitesimal. We just say that the more 3's we tack on to the end of the decimal, the closer we get to one third. That, in standard analysis, is what the statement above means. BUT
\(\displaystyle 0.333...\ = \dfrac{1}{3} \implies 3 \times 0.333...\ = 3 \times \dfrac{1}{3} \implies 0.999...\ = 1.\)
The "logic" that I just gave is not "rigorous," but it shows how many mathematicians don't bother with infinitesimals. And if infinitesimals don't exist, your problem simply goes away.
ORIGINAL, UNEDITED ANSWER
I am not going to try to answer your fundamental question. I merely want to point out that you seem to be mixing concepts from standard analysis and non-standard analysis. The concept of an infinitesimal does not exist in standard analysis, and non-standard analysis operates in a different number system from the real numbers.
In standard analysis, what is being said is that
\(\displaystyle \lim_{n \rightarrow \infty} \left ( \sum_{j=1}^n \dfrac{9}{10^j} \right ) = 1.\)
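That limit is easy to evaluate, because the sum is a finite geometric series with a simple closed form:

\(\displaystyle \sum_{j=1}^n \dfrac{9}{10^j} = 1 - \dfrac{1}{10^n}, \text{ and } \dfrac{1}{10^n} \rightarrow 0 \text{ as } n \rightarrow \infty.\)

The partial sums are just the finite decimals 0.9, 0.99, 0.999, and so on, each falling short of 1 by exactly \(\displaystyle \dfrac{1}{10^n}\); the limit statement says that shortfall can be made as small as you please.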
Nothing about infinitesimals at all.
I know very little about non-standard analysis. I suspect that the answer to your question has something to do with the standard parts of two hyperreal numbers being equal, but that is really not much more than a guess.
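For what it is worth, non-standard analysis does have a tool along those lines: the "standard part" function \(\displaystyle st\), which rounds every finite hyperreal number to the unique real number infinitely close to it. If \(\displaystyle \varepsilon\) is a positive infinitesimal, then

\(\displaystyle st(10 - \varepsilon) = 10 = st(10), \text{ even though } 10 - \varepsilon \neq 10,\)

so two hyperreals can differ by an infinitesimal and still share the same standard part. But take that sketch with a grain of salt; as I said, this is not my field.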