Proof, Epsilon and limits

LilleDaffi

A function \(f : D \to \mathbb{R}\), \(D \subseteq \mathbb{R}\), has a limit \(L\) as \(x\) approaches \(c \in D\) if for every number \(\varepsilon > 0\) there is a
number \(\delta > 0\) such that
if \(0 < |x - c| < \delta\) then \(|f(x) - L| < \varepsilon\).

a) Suppose that the above implication holds for some fixed \(\varepsilon_0 > 0\). Prove that it holds for all \(\varepsilon \ge \varepsilon_0\).


I understand all the notation and the meaning behind the variables \(\delta\) and \(\varepsilon\), and how they are used to define a limit.
I am just having some trouble understanding how this can hold for all \(\varepsilon \ge \varepsilon_0\), and how to prove it.
 
Start with the definition of LIMIT.

Please show us what you have tried and exactly where you are stuck.

Please follow the rules of posting in this forum.

Please share your work/thoughts about this problem.
 
The definition of limit was:
"A function \(f : D \to \mathbb{R}\), \(D \subseteq \mathbb{R}\), has a limit \(L\) as \(x\) approaches \(c \in D\) if for every number \(\varepsilon > 0\) there is a
number \(\delta > 0\) such that
if \(0 < |x - c| < \delta\) then \(|f(x) - L| < \varepsilon\)."
Meaning that if \(x\) is within some distance \(\delta\) of \(c\) (where \(c\) is the \(x\)-value at which the limit \(L\) is taken), then \(f(x)\) has to be within distance \(\varepsilon\) of \(L\).
See the figure below:
[Attached figure illustrating the definition (rotated by SirJonah).]
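As a small aside, here is a worked instance of the definition for concreteness (my own illustrative example, not part of the original problem), using \(f(x) = 2x\), \(c = 1\), \(L = 2\):

\[
|f(x) - L| = |2x - 2| = 2|x - 1|,
\]

so for any \(\varepsilon > 0\) we may take \(\delta = \varepsilon/2\), and then

\[
0 < |x - 1| < \delta \;\Longrightarrow\; |f(x) - 2| = 2|x - 1| < 2\delta = \varepsilon.
\]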
But to be specific about my question: I don't fully understand why, if this is the case for some fixed value \(\varepsilon_0 > 0\), it should hold for all \(\varepsilon \ge \varepsilon_0\).
 
I may not fully understand your confusion.
Suppose that there is a \(\delta>0\) so that if \(0<|x-a|<\delta\) then \(|f(x)-f(a)|<0.5\)
That says that if \(x\in(a-\delta,a+\delta)\) [also known as a \(\delta\)-neighborhood of \(a\)] then \(f(x)\) is in a \(0.5\)-neighborhood of \(f(a)\).
Surely you see that \((f(a)-0.5,f(a)+0.5)\subset (f(a)-0.75,f(a)+0.75)\)? Do you not?
So that if \(\varepsilon>0.5\) then \((f(a)-0.5,f(a)+0.5)\subset (f(a)-\varepsilon,f(a)+\varepsilon)\)?
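Spelling the same containment argument out in the notation of the original problem (just a sketch, with \(\delta\) taken to be the one that works for \(\varepsilon_0\)): if \(\varepsilon \ge \varepsilon_0\), then

\[
0 < |x - c| < \delta \;\Longrightarrow\; |f(x) - L| < \varepsilon_0 \le \varepsilon,
\]

so the very same \(\delta\) witnesses the implication for every \(\varepsilon \ge \varepsilon_0\); no new \(\delta\) is needed.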
 
Oh okay, I wasn't sure it was that simple. I thought I also had to extend the neighborhood of \(a\) to the largest set allowed by the new, larger \(\varepsilon\). (In hindsight, your interpretation makes 10x more sense.) Thanks!
 