Analysis of an Iterative Method for Solving Nonlinear Equations

fireshtorm1k

New member
Joined
Jun 18, 2024
Messages
4
I came across an intriguing iterative algorithm for solving a nonlinear equation of the form
[imath]\ln(f(x))=0[/imath], which differs from the classical Newton's method. This method uses a logarithmic difference to compute the next approximation of the root. A notable feature of this method is its faster convergence compared to the traditional Newton's method.

The formula for the method is as follows:

[math]x_{n+1} = \frac{\ln(f(x + dx)) - \ln(f(x))}{\ln(f(x + dx)) - \ln(f(x)) \cdot \frac{x_n}{x + dx}} \cdot x_n[/math]
Example:

* Using the classical Newton's method, the initial approximation [imath]x_0=111.625[/imath] leads to [imath]x_1=148.474[/imath]
* Using the above method, the initial value [imath]x_0=111.625[/imath] yields [imath]x_1=166.560[/imath], which is closer to the exact answer [imath]166.420[/imath]

Questions:

1. How is this formula derived?
2. Can this method be expected to provide a higher rate of convergence for a broad class of nonlinear functions?
3. What are the possible limitations or drawbacks of this method?
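Since f(x) is never specified, the update rule can only be sketched against an assumed function. In the sketch below, f(x) = x/166.42 is a hypothetical stand-in chosen so that ln(f(x)) = 0 exactly at the quoted answer 166.42, and both x and x_n in the formula are read as the current iterate (an assumption about the notation):

```python
import math

# Hypothetical stand-in: the thread does not identify f(x).
# Chosen so that ln(f(x)) = 0 exactly at x = 166.42.
def f(x):
    return x / 166.42

def step(x, dx=1e-6):
    """One iteration of the posted formula, reading both x and x_n
    as the current iterate (an assumption about the notation)."""
    g0 = math.log(f(x))       # ln(f(x))
    g1 = math.log(f(x + dx))  # ln(f(x + dx))
    num = g1 - g0
    den = g1 - g0 * x / (x + dx)
    return x * num / den

x = 111.625  # same starting value as in the example above
for _ in range(20):
    x = step(x)
print(round(x, 3))  # converges to 166.42 for this choice of f
```

For this particular f the iteration does settle on the root of ln(f(x)) = 0; whether it beats Newton's method for a broad class of functions is exactly the open question.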
 
In your example, what is f(x)? And what are you using for x and dx? I don't think you've fully described the method yet.

And where did you "come across" this method? Provenance can help answer this sort of question.
 
[imath]f(x)[/imath] is an arbitrary, complicated function
x = [imath]x_n[/imath]
dx is a small increment
 
[imath]f(x)[/imath] - difficult arbitrary function
Hi. I think Dr. Peterson asked you to share the specific function used in your example. In that example, you calculated approximate values for x1 using two different methods. What is f(x) in that example? :)
 
Example:

* Using the classical Newton's method, the initial approximation [imath]x_0=111.625[/imath] leads to [imath]x_1=148.474[/imath]
* Using the above method, the initial value [imath]x_0=111.625[/imath] yields [imath]x_1=166.560[/imath], which is closer to the exact answer [imath]166.420[/imath]
In particular, along with identifying the function f, please show the actual calculations you did to get these numbers. That will help us better understand what you are saying, and then analyze its significance.

For example, Newton's method as stated here is [math]x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}[/math] and for the specific example he gives, [imath]f(x)=\cos(x)-x[/imath], that becomes [math]x_{n+1} = x_n - \frac{\cos x_n - x_n}{-\sin x_n - 1}[/math] and taking [imath]x_0 = 1[/imath], the first step is [math]x_1 = 1 - \frac{\cos(1) - 1}{-\sin(1) - 1} = 0.7503638679[/math]
I want to see that work for your function, for both methods. That is necessary for us to fully understand your method.
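As a check on the arithmetic, the worked Newton step above can be reproduced numerically (a sketch, using the same f(x) = cos(x) - x):

```python
import math

# One Newton step for f(x) = cos(x) - x, matching the worked example.
def newton_step(x):
    return x - (math.cos(x) - x) / (-math.sin(x) - 1)

x1 = newton_step(1.0)
print(x1)  # approximately 0.7503638679, as quoted
```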
 
I found out that the function is approximated by something other than a straight line
 
I got the solution from Brilliant.org reviews.
I've been looking at this for the last hour. These methods are often best compared graphically. I will post when I have a good comparison, but my first thought is that it cannot find a true root of f(x), since ln(f(x)) is undefined when f(x) = 0. However, it could search for a real root if the equation were instead ln(f(x) + 1) = 0.
 
The claim being made is that this will solve, not f(x) = 0, but ln(f(x)) = 0, that is, f(x) = 1.

But the supposed method has not been properly defined, so I don't think there's anything to be said about it.
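That said, the reading ln(f(x)) = 0, i.e. f(x) = 1, can be checked directly from the formula: setting [imath]x_{n+1} = x_n = x^*[/imath] forces the numerator and denominator to be equal, which reduces to [math]\ln(f(x^*)) \cdot \left(1 - \frac{x^*}{x^* + dx}\right) = 0[/math] so for [imath]dx \neq 0[/imath] any fixed point satisfies [imath]\ln(f(x^*)) = 0[/imath], that is [imath]f(x^*) = 1[/imath], not [imath]f(x^*) = 0[/imath].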
 