
Multipoint Methods for Solving Nonlinear Equations
[Book Description]
This book, the first devoted to the topic, explains the most advanced multipoint methods needed for precise calculations and explores the development of powerful algorithms for solving research problems. Multipoint methods have an extensive range of practical applications in research areas such as signal processing, fluid mechanics, and solid state physics, among many others. The book takes an introductory approach, making qualitative comparisons of different multipoint methods from various viewpoints to help the reader understand applications of more complex methods. The efficiency and accuracy of the presented methods are evaluated and predicted for a wide range of research areas, and many numerical examples are given to convey a deep understanding of the usefulness of each method. The book will enable researchers to tackle difficult problems and deepen their understanding of problem solving using numerical methods.
Multipoint methods are of great practical importance because they generate sequences of successive approximations with the highest computational efficiency. The rapid development of digital computers and advanced computer arithmetic has created a need for new methods for solving practical problems in a multitude of disciplines, such as applied mathematics, computer science, engineering, physics, financial mathematics, and biology.
The book provides a succinct way of implementing a wide range of useful and important numerical algorithms for solving research problems. It illustrates how numerical methods can be used to study problems with applications in engineering and the sciences, including signal processing, control theory, and financial computation. It offers deeper insight into the development of methods, the numerical analysis of their convergence rate, and a detailed analysis of their computational efficiency. It provides a powerful means of learning by systematic experimentation with some of the many fascinating problems in science. It includes highly efficient algorithms convenient for implementation in the most common computer algebra systems, such as Mathematica, MATLAB, and Maple.
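As a minimal illustration of the kind of algorithm the book studies, the sketch below implements Ostrowski's two-point fourth-order method (treated in Section 2.2) in Python. It is a simplified example under illustrative assumptions (hand-picked tolerance, iteration cap, and test function), not a reproduction of the book's own implementations; the method needs only three evaluations per step, f(x), f'(x), and f(y), which makes it optimal in the Kung-Traub sense.

```python
import math

def ostrowski(f, df, x0, tol=1e-12, max_iter=20):
    """Ostrowski's two-point method: a Newton step followed by a
    correction, giving fourth-order convergence to a simple zero
    with three function evaluations per iteration."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stopping criterion on the residual
            return x
        dfx = df(x)
        y = x - fx / dfx           # first point: Newton step
        fy = f(y)
        # second point: Ostrowski correction using f(x), f(y), f'(x)
        x = y - (fy / dfx) * fx / (fx - 2.0 * fy)
    return x

# Illustrative usage: approximate the positive zero of f(x) = x^2 - 2.
root = ostrowski(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # converges to sqrt(2) in a handful of iterations
```

Starting from x0 = 1, the residual shrinks roughly to the fourth power each step, so only two or three iterations are needed to reach machine-level accuracy.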
[Table of Contents]
1 Basic concepts 1 (26)
1.1 Classification of iterative methods 1 (2)
1.2 Order of convergence 3 (5)
1.2.1 Computational order of convergence (COC) 5 (1)
1.2.2 R-order of convergence 6 (2)
1.3 Computational efficiency of iterative methods 8 (3)
1.4 Initial approximations 11 (4)
1.5 One-point iterative methods for simple zeros 15 (6)
1.6 Methods for determining multiple zeros 21 (3)
1.7 Stopping criterion 24 (3)
2 Two-point methods 27 (58)
2.1 Cubically convergent two-point methods 27 (13)
2.1.1 Composite multipoint methods 28 (2)
2.1.2 Traub's two-point methods 30 (4)
2.1.3 Two-point methods generated by derivative estimation 34 (6)
2.2 Ostrowski's fourth-order method and its generalizations 40 (10)
2.3 Family of optimal two-point methods 50 (6)
2.4 Optimal derivative free two-point methods 56 (8)
2.5 Kung-Traub's multipoint methods 64 (2)
2.6 Optimal two-point methods of Jarratt's type 66 (10)
2.6.1 Jarratt's two-step methods 67 (7)
2.6.2 Jarratt-like family of two-point methods 74 (2)
2.7 Two-point methods for multiple roots 76 (9)
2.7.1 Non-optimal two-point methods for multiple zeros 76 (3)
2.7.2 Optimal two-point methods for multiple zeros 79 (6)
3 Three-point non-optimal methods 85 (24)
3.1 Some historical notes 85 (2)
3.2 Methods for constructing sixth-order root-finders 87 (10)
3.2.1 Method 1 -- Secant-like method 88 (1)
3.2.2 Method 2 -- Rational bilinear interpolation 89 (1)
3.2.3 Method 3 -- Hermite's interpolation 90 (2)
3.2.4 Method 4 -- Inverse interpolation 92 (1)
3.2.5 Method 5 -- Newton's interpolation 92 (4)
3.2.6 Method 6 -- Taylor's approximation of derivative 96 (1)
3.3 Ostrowski-like methods of sixth order 97 (2)
3.4 Jarratt-like methods of sixth order 99 (5)
3.5 Other non-optimal three-point methods 104 (5)
4 Three-point optimal methods 109 (54)
4.1 Optimal three-point methods of Bi, Wu, and Ren 109 (4)
4.2 Interpolatory iterative three-point methods 113 (17)
4.2.1 Optimal three-point methods based on Hermite's interpolation 113 (6)
4.2.2 Three-point methods based on rational function interpolation 119 (4)
4.2.3 Three-point methods constructed by inverse interpolation 123 (4)
4.2.4 Numerical examples 127 (3)
4.3 Optimal methods based on weight functions 130 (18)
4.3.1 Family based on the sum of three weight functions 131 (7)
4.3.2 Liu-Wang's family 138 (3)
4.3.3 Family based on two weight functions 141 (5)
4.3.4 Geum-Kim's families 146 (2)
4.4 Eighth-order Ostrowski-like methods 148 (10)
4.4.1 First Ostrowski-like family 148 (5)
4.4.2 Second Ostrowski-like family 153 (2)
4.4.3 Third Ostrowski-like family 155 (1)
4.4.4 Family of quasi-Ostrowski's type 156 (2)
4.5 Derivative free family of optimal three-point methods 158 (5)
5 Higher-order optimal methods 163 (26)
5.1 Some comments on higher-order multipoint methods 163 (1)
5.2 Geum-Kim's family of four-point methods 164 (2)
5.3 Kung-Traub's families of arbitrary order of convergence 166 (6)
5.4 Methods of higher-order based on inverse interpolation 172 (5)
5.5 Multipoint methods based on Hermite's interpolation 177 (3)
5.6 Generalized derivative free family based on Newtonian interpolation 180 (9)
6 Multipoint methods with memory 189 (50)
6.1 Early works 189 (3)
6.1.1 Self-accelerating Steffensen-like method 190 (1)
6.1.2 Self-accelerating secant method 191 (1)
6.2 Multipoint methods with memory constructed by inverse interpolation 192 (5)
6.2.1 Two-step method with memory of Neta's type 192 (2)
6.2.2 Three-point method with memory of Neta's type 194 (3)
6.3 Efficient family of two-point self-accelerating methods 197 (11)
6.4 Family of three-point methods with memory 208 (5)
6.5 Generalized multipoint root-solvers with memory 213 (13)
6.5.1 Derivative free families with memory 216 (3)
6.5.2 Order of convergence of the generalized families with memory 219 (7)
6.6 Computational aspects 226 (13)
6.6.1 Numerical examples (I) -- two-point methods 227 (5)
6.6.2 Numerical examples (II) -- three-point methods 232 (2)
6.6.3 Comparison of computational efficiency 234 (5)
7 Simultaneous methods for polynomial zeros 239 (42)
7.1 Simultaneous methods for simple zeros 240 (12)
7.1.1 A third-order simultaneous method 240 (5)
7.1.2 Simultaneous methods with corrections 245 (7)
7.2 Simultaneous method for multiple zeros 252 (6)
7.3 Simultaneous inclusion of simple zeros 258 (11)
7.4 Simultaneous inclusion of multiple zeros 269 (4)
7.5 Halley-like inclusion methods of high efficiency 273 (8)
Bibliography 281 (12)
Glossary 293 (2)
Index 295