My math classes and degree were some time ago [more than half a century], so I'm somewhat out of date on 'standard notation' for some things. Just out of curiosity, consider the notations o(x) and O(x) when speaking about the order of behavior of a function. Although they both mean (about?) the same thing, namely that the function behaves like (a constant times) x, is there any implication of size involved? BTW: what I mean by f(x) = o(g(x)) [or O(g(x))] is
\(\displaystyle \frac{f(x)}{g(x)} \sim \text{constant}\)
for x in some restricted neighborhood, e.g. close to zero, or very large, or ...
For example, take the polynomial
\(\displaystyle P(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0\)
where \(a_n \neq 0\) and \(a_0 = 0\). When I 'was growing up', the shortcut notation P = o(x) meant that P(x) behaved like x when x 'was small', and P = O(\(x^n\)) meant that P behaved like \(x^n\) when x 'was large'. [Unless, of course, something else was specified about x.] Is that still the case?
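For concreteness, here is how that reads on a made-up cubic (my own illustrative example, not anything special): take \(\displaystyle P(x) = 2x^3 + 5x\), so \(a_0 = 0\) and \(a_3 = 2 \neq 0\). Then

\(\displaystyle \lim_{x \to 0} \frac{P(x)}{x} = \lim_{x \to 0} \left(2x^2 + 5\right) = 5\)

and

\(\displaystyle \lim_{x \to \infty} \frac{P(x)}{x^3} = \lim_{x \to \infty} \left(2 + \frac{5}{x^2}\right) = 2\)

so in the sense described above, P = o(x) for x near zero and P = O(\(x^3\)) for x large, the 'constants' being 5 and 2 respectively.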