Research Article
Research in Applied Mathematics
Vol. 1 (2017), Article ID 101259, 8 pages
doi:10.11131/2017/101259

Local Convergence for a Frozen Family of Steffensen-Like Methods under Weak Conditions

Ioannis K. Argyros1 and Santhosh George2

1Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA

2Department of Mathematical and Computational Sciences, NIT Karnataka, 575 025, India

Received 8 November 2016; Accepted 17 July 2017

Editor: Hyunsung Kim

Copyright © 2017 Ioannis K. Argyros and Santhosh George. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In the present paper, we study the local convergence of a Steffensen-like method, considered also in Amat et al. [1], suitably modified to solve equations in a Banach space. Using our idea of restricted convergence domains, we extend the applicability of this method. Numerical examples are given where earlier results cannot be applied to solve equations but ours can.

1. Introduction

Recently, Amat et al. in [1] studied the efficiency of a frozen family of Steffensen-like methods defined by

x_0^{(0)} = x_0, \quad x_n^{(j+1)} = x_n^{(j)} - A_n^{-1} F(x_n^{(j)}), \qquad (1.1)

where x_{n+1} = x_n^{(k)}, x_n^{(0)} = x_n, x_0 ∈ D is an initial point, k is a natural number, n = 0, 1, 2, ..., 0 ≤ j ≤ k − 1, and A_n = [x_n, x_n + F(x_n); F], with [·, ·; F] : D × D → L(X) being a divided difference of order one on D [2]. That is, they considered a k-step iterative method derived from Steffensen's method with a frozen divided difference operator for solving a system of nonlinear equations, and computed the maximum computational efficiency of the method. In this study we present the local convergence analysis of method (1.1) for approximating a solution of the nonlinear equation

F ( x ) = 0 , (1.2)

where F : D ⊆ X → X is a continuously Fréchet-differentiable operator and D is a convex subset of the Banach space X. Owing to its wide range of applications, finding a solution of equation (1.2) is an important problem in mathematics.
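For a finite-dimensional system, method (1.1) is easy to prototype. The sketch below is our own minimal NumPy illustration (not the authors' code); it uses the standard componentwise divided difference of order one, and the fallback step size 1e-8 is an assumption for the degenerate case:

```python
import numpy as np

def divided_difference(F, x, y):
    """Standard componentwise divided difference of order one:
    the matrix A = [x, y; F] satisfying A (y - x) = F(y) - F(x)."""
    n = x.size
    A = np.zeros((n, n))
    for j in range(n):
        z1 = np.concatenate([y[:j + 1], x[j + 1:]])   # y in first j+1 slots
        z0 = np.concatenate([y[:j], x[j:]])           # y in first j slots
        d = y[j] - x[j]
        if d == 0.0:                                  # degenerate column:
            z1 = z0.copy()                            # use a small forward
            z1[j] += 1e-8                             # step instead
            d = 1e-8
        A[:, j] = (F(z1) - F(z0)) / d
    return A

def frozen_steffensen(F, x0, k=3, tol=1e-12, max_outer=50):
    """k-step Steffensen-like method (1.1): the operator
    A_n = [x_n, x_n + F(x_n); F] is frozen over k inner steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        A = divided_difference(F, x, x + F(x))        # frozen A_n
        for _ in range(k):                            # j = 0, ..., k - 1
            x = x - np.linalg.solve(A, F(x))
        if np.linalg.norm(F(x)) < tol:
            break
    return x
```

For instance, with the illustrative choice F(w) = (e^{w_1} − 1, w_2^2 + w_2)^T and x_0 = (0.2, 0.2)^T, the iterates converge rapidly to the solution x^* = 0.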

Our goal is to weaken the assumptions in [1], so that the applicability of the method (1.1) can be extended. Notice that the same technique can be used to extend the applicability of other iterative methods that have appeared in [3,4,5,10,11,13,14,15].

The rest of the paper is organized as follows. In Section 2 we present the local convergence analysis. We also provide a radius of convergence, computable error bounds and a uniqueness result. Numerical examples are given in the last section.

2. Local Convergence

The local convergence of method (1.1) is based on some scalar functions and parameters. Let w_0, w, v_0, v, w_1 be continuous, non-negative, non-decreasing functions defined on [0, +∞), [0, +∞), [0, +∞), [0, +∞), [0, +∞)^2, respectively, with values in [0, +∞), satisfying w_0(0) = w(0) = w_1(0, 0) = 0. Define the parameters r_1, r_2 and r_0 by

r_1 = \sup\{t \ge 0 : w_0(t) < 1\},

r_2 = \sup\{t \ge 0 : w_1(t, (1 + v_0(t))t) < 1\}

and

r_0 = \min\{r_1, r_2\}. \qquad (2.1)

Define functions g, h and p on the interval [0, r0) by

g(t) = \frac{1}{1 - w_0(t)}\left[\int_0^1 w((1 - \theta)t)\,d\theta + \frac{(p(t) + w_0(t))\,v(t)}{1 - p(t)}\right]

and

h(t) = g(t) - 1,

where

p(t) = w_1(t, (1 + v_0(t))t).

We have that h(0) = −1 < 0 and h(t) → +∞ as t → r_0^−. It then follows from the intermediate value theorem that the function h has zeros in the interval (0, r_0). Denote by r the smallest such zero. Then, for each t ∈ [0, r),

0 \le g(t) < 1. \qquad (2.2)
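The radius r can be computed numerically once the scalar functions are fixed. The following sketch is our own illustration (the particular scalar functions in the usage example are assumptions, not taken from the paper); it finds the smallest positive zero of h(t) = g(t) − 1 by a geometric scan followed by bisection, approximating the θ-integral with the midpoint rule:

```python
import numpy as np

def convergence_radius(w0, w, v0, v, w1, n_nodes=200):
    """Smallest positive zero r of h(t) = g(t) - 1, with g as defined
    above; w must accept an array of quadrature nodes."""
    theta = (np.arange(n_nodes) + 0.5) / n_nodes      # midpoints of [0, 1]
    p = lambda t: w1(t, (1.0 + v0(t)) * t)

    def h(t):
        integral = np.mean(w((1.0 - theta) * t))      # ~ int_0^1 w((1-s)t) ds
        return (integral + (p(t) + w0(t)) * v(t) / (1.0 - p(t))) \
            / (1.0 - w0(t)) - 1.0

    t = 1e-9                                          # scan outward until
    while w0(t) < 1.0 and p(t) < 1.0 and h(t) < 0.0:  # h >= 0, inside [0, r0)
        t *= 1.01
    lo, hi = t / 1.01, t                              # bracket the zero
    for _ in range(80):                               # then bisect
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For example, with the assumed choices w_0(t) = t/2, w(t) = t, v_0 = v ≡ 1 and w_1(s, t) = (s + t)/4, one finds r ≈ 0.367, the smaller root of 0.75t^2 − 3t + 1 = 0.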

Let U(a, ρ) and \bar{U}(a, ρ) stand, respectively, for the open and closed balls in X with center a ∈ X and radius ρ > 0. Next, we present the local convergence analysis of method (1.1) using the preceding notation.

Theorem 2.1

Let F : D ⊆ X → X be a continuously Fréchet-differentiable operator with a divided difference of order one [·, ·; F] : D² → L(X). Suppose there exist x^* ∈ D and non-decreasing continuous functions w_0, v_0, w_1 defined on [0, +∞), [0, +∞), [0, +∞)², respectively, with values in [0, +∞) and w_0(0) = w_1(0, 0) = 0, such that for each x, y ∈ D

F(x^*) = 0, \quad F'(x^*)^{-1} \in L(X), \qquad (2.3)

\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le w_0(\|x - x^*\|), \qquad (2.4)

\|F'(x^*)^{-1}([x, y; F] - F'(x^*))\| \le w_1(\|x - x^*\|, \|y - x^*\|), \qquad (2.5)

and

\|[x, x^*; F]\| \le v_0(\|x - x^*\|); \qquad (2.6)

there exist continuous, non-decreasing functions w, v defined on the interval [0, r_0) with values in [0, +∞) and w(0) = 0, such that for each x ∈ D_0 = D ∩ U(x^*, r_0)

\|F'(x^*)^{-1}(F'(x) - F'(y))\| \le w(\|x - y\|), \qquad (2.7)

\|F'(x^*)^{-1}[x, x^*; F]\| \le v(\|x - x^*\|), \qquad (2.8)

and

\bar{U}(x^*, R) \subseteq D, \qquad (2.9)

where r_0 is defined by (2.1), R := (1 + v_0(r))r and r is defined previously. Then, the sequence {x_n} generated for x_0 ∈ U(x^*, r) − {x^*} by method (1.1) is well defined in U(x^*, r), remains in U(x^*, r) for each n = 0, 1, 2, ... and converges to x^*. Moreover, the following estimates hold:

\|x_{n+1} - x^*\| = \|x_n^{(k)} - x^*\| \le (g(\|x_0 - x^*\|))^{k(n+1)}\,\|x_0 - x^*\| < r. \qquad (2.10)

Furthermore, if there exists R0r such that

w_1(0, R_0) < 1 \quad \text{or} \quad w_1(R_0, 0) < 1, \qquad (2.11)

then the limit point x^* is the only solution of equation F(x) = 0 in D_1 = D ∩ \bar{U}(x^*, R_0).

Proof

We shall show, using mathematical induction, that the sequence {x_n} satisfies (2.10) and converges to x^*. By the hypothesis x_0 ∈ U(x^*, r) − {x^*}, (2.4) and the definition of r, we have that

\|F'(x^*)^{-1}(F'(x_0) - F'(x^*))\| \le w_0(\|x_0 - x^*\|) \le w_0(r) < 1. \qquad (2.12)

It follows from (2.12) and the Banach lemma on invertible operators [1,14] that F'(x_0)^{-1} ∈ L(X) and

\|F'(x_0)^{-1} F'(x^*)\| \le \frac{1}{1 - w_0(\|x_0 - x^*\|)}. \qquad (2.13)

We can write by (2.3) that

F(x_0) = F(x_0) - F(x^*) = [x_0, x^*; F](x_0 - x^*). \qquad (2.14)

Then, we have by (2.6) that

\|F(x_0)\| \le v_0(\|x_0 - x^*\|)\,\|x_0 - x^*\|. \qquad (2.15)

We also have that

\|x_0 + F(x_0) - x^*\| \le \|x_0 - x^*\| + \|F(x_0)\| < r + v_0(r)r = (1 + v_0(r))r = R,

so x0 + F(x0)∈U(x*, R). Next, we show that A0−1L(X). We get by (2.5) and (2.15) that

\|F'(x^*)^{-1}(A_0 - F'(x^*))\| \le w_1(\|x_0 - x^*\|, \|x_0 + F(x_0) - x^*\|)
\le w_1(\|x_0 - x^*\|, \|x_0 - x^*\| + v_0(\|x_0 - x^*\|)\|x_0 - x^*\|)
\le w_1(r, r + v_0(r)r) = p(r) < 1, \qquad (2.16)

so,

\|A_0^{-1} F'(x^*)\| \le \frac{1}{1 - p(r)}. \qquad (2.17)

We also have by method (1.1) that x_0^{(1)}, x_0^{(2)}, ..., x_0^{(k)} = x_1 are well defined. Let j = 0. We can write

x_0^{(1)} - x^* = x_0^{(0)} - x^* - F'(x_0^{(0)})^{-1} F(x_0^{(0)}) + A_0^{-1}\big[(A_0 - F'(x^*)) + (F'(x^*) - F'(x_0^{(0)}))\big] F'(x_0^{(0)})^{-1} F'(x^*)\, F'(x^*)^{-1} F(x_0^{(0)}). \qquad (2.18)

Using (2.2), (2.3), (2.7), (2.13), (2.17) and (2.18) we obtain, in turn that

\|x_0^{(1)} - x^*\| \le \frac{\int_0^1 w((1 - \theta)\|x_0^{(0)} - x^*\|)\,d\theta\,\|x_0^{(0)} - x^*\|}{1 - w_0(\|x_0^{(0)} - x^*\|)} + \frac{\big(p(\|x_0^{(0)} - x^*\|) + w_0(\|x_0^{(0)} - x^*\|)\big)\, v(\|x_0^{(0)} - x^*\|)\,\|x_0^{(0)} - x^*\|}{\big(1 - w_0(\|x_0^{(0)} - x^*\|)\big)\big(1 - p(\|x_0^{(0)} - x^*\|)\big)} = g(\|x_0^{(0)} - x^*\|)\,\|x_0^{(0)} - x^*\| \le \|x_0^{(0)} - x^*\| < r, \qquad (2.19)

which shows (2.10) for n = 0, j = 0 and x_0^{(1)} ∈ U(x^*, r), where we also used that x_0^{(0)} = x_0. Similarly, we get that

\|x_0^{(2)} - x^*\| \le \frac{\int_0^1 w((1 - \theta)\|x_0^{(1)} - x^*\|)\,d\theta\,\|x_0^{(1)} - x^*\|}{1 - w_0(\|x_0^{(1)} - x^*\|)} + \frac{\big(p(\|x_0^{(1)} - x^*\|) + w_0(\|x_0^{(1)} - x^*\|)\big)\, v(\|x_0^{(1)} - x^*\|)\,\|x_0^{(1)} - x^*\|}{\big(1 - w_0(\|x_0^{(1)} - x^*\|)\big)\big(1 - p(\|x_0^{(1)} - x^*\|)\big)} = g(\|x_0^{(1)} - x^*\|)\,\|x_0^{(1)} - x^*\| \le g^2(\|x_0^{(0)} - x^*\|)\,\|x_0^{(0)} - x^*\| \le \|x_0^{(0)} - x^*\| < r, \qquad (2.20)

which shows (2.10) for j = 1 and n = 0. So, inductively, we obtain that for 0 ≤ m ≤ j + 1

\|x_0^{(m+1)} - x^*\| \le g(\|x_0^{(0)} - x^*\|)\,\|x_0^{(m)} - x^*\| \le g^{m+1}(\|x_0^{(0)} - x^*\|)\,\|x_0^{(0)} - x^*\| \le \|x_0 - x^*\| < r, \qquad (2.21)

which shows (2.10) for m = 0, 1, 2, ..., j + 1, n = 0 and x_0^{(m+1)} ∈ U(x^*, r). By simply replacing x_0^{(0)}, x_0^{(1)}, ..., x_0^{(k)} by x_i^{(0)}, x_i^{(1)}, ..., x_i^{(k)} in the preceding estimates, we arrive at estimates (2.10). Then, from (2.10), we have the estimate

\|x_{n+1} - x^*\| \le c\,\|x_0 - x^*\| < r, \qquad (2.22)

where c = (g(\|x_0 - x^*\|))^{k(n+1)} ∈ [0, 1), so we deduce that \lim_{n \to \infty} x_n = x^* and x_{n+1} ∈ U(x^*, r). Finally, to show the uniqueness part, let y^* ∈ D_1 with F(y^*) = 0. Define Q = [x^*, y^*; F] (or Q = [y^*, x^*; F]). Then, using (2.5) and (2.11), we get that

\|F'(x^*)^{-1}(Q - F'(x^*))\| \le w_1(0, \|x^* - y^*\|) \le w_1(0, R_0) < 1, \qquad (2.23)

so Q−1L(X). Then, from the identity 0 = F(y*) − F(x*) = Q(y*x*), we conclude that x* = y*.

Remark 2.2

The sufficient semilocal convergence conditions in [1] were given in non-affine invariant form. The local convergence analysis of method (1.1) in [1] was based on Taylor expansions and on hypotheses reaching up to the third Fréchet derivative of F. Moreover, neither computable error bounds nor a radius of convergence were given. We have addressed these problems in Theorem 2.1. In order to compare the new results with the old ones in [1], we rewrite the conditions in affine invariant form as:

\|F'(x^*)^{-1}([x, y; F] - [u, v; F])\| \le K_1(\|x - u\| + \|y - v\|) \qquad (2.24)

for each x, y, u, v ∈ D with x ≠ y and u ≠ v. In view of (2.24), we also have that

\|F'(x^*)^{-1}(F'(x) - F'(y))\| \le 2K_2\|x - y\|, \qquad (2.25)

\|F'(x^*)^{-1}([x, y; F] - F'(x^*))\| \le K_3(\|x - x^*\| + \|y - x^*\|) \qquad (2.26)

and

\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le 2K_4\|x - x^*\|. \qquad (2.27)

Clearly, conditions (2.4), (2.5) and (2.7) are weaker than (2.27), (2.26) and (2.25), respectively (see also the numerical examples). Moreover, let w_0(t) = 2K_0 t, w_1(s, t) = K_2 s + K_3 t and w(t) = 2K_4 t. Then, if D_0 = D and K_0 = K_2 = K_3 = K_4 = K_1, our conditions (2.4), (2.5) and (2.7) reduce to (2.27), (2.26) and (2.25), respectively. Moreover, if D_0 is a strict subset of D, then we have that

K_0 \le K_1, \quad K_2 \le K_1, \quad K_3 \le K_1 \quad \text{and} \quad K_4 \le K_1.

Hence, even in this special case the new results are better, leading to a wider choice of initial guesses (the new radius of convergence is at least as large), error bounds on the distances ‖x_n − x^*‖ that are at least as tight (so fewer iterations are needed to reach a desired error tolerance), and at least as precise information on the location of the solution. Finally, it is also worth noticing that conditions (2.4), (2.5) and (2.7) are weaker than condition (2.24) used in [1] (in non-affine invariant form).

3. Numerical Examples

We present two examples in this section. In the first one we show that the claims at the end of Remark 2.2 are justified. In the second example we show that the results in [1] cannot be applied, whereas ours can. In both examples we define, for simplicity, [x, y; F] = \frac{1}{2}(F'(x) + F'(y)) for each x, y ∈ D with x ≠ y, and [x, x; F] = F'(x) for each x ∈ D.

Example

Let X = Y = \mathbb{R}^3, D = \bar{U}(0, 1) and x^* = (0, 0, 0)^T. Define the function F on D, for w = (x, y, z)^T, by

F(w) = \left(e^x - 1,\; \frac{e - 1}{2}y^2 + y,\; z\right)^T.

Then, the Fréchet-derivative is given by

F'(w) = \begin{bmatrix} e^x & 0 & 0 \\ 0 & (e - 1)y + 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
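As a sanity check (our own NumPy snippet, not part of the original paper), the Fréchet derivative above can be verified against a central finite difference at a sample point in D; the point and step size are illustrative assumptions:

```python
import numpy as np

e = np.e
F = lambda w: np.array([np.exp(w[0]) - 1.0,
                        (e - 1.0) / 2.0 * w[1] ** 2 + w[1],
                        w[2]])
Fp = lambda w: np.diag([np.exp(w[0]), (e - 1.0) * w[1] + 1.0, 1.0])

w = np.array([0.1, -0.2, 0.3])                        # sample point in D
h = 1e-6
J = np.column_stack([(F(w + h * ei) - F(w - h * ei)) / (2.0 * h)
                     for ei in np.eye(3)])            # central differences
assert np.allclose(J, Fp(w), atol=1e-6)               # matches F'(w)
```

Note also that F'(x^*) is the identity matrix, which is why the conditions of Theorem 2.1 take such a simple form here.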

Using the approach in [1] (see also Remark 2.2), we can choose \bar{w}_0(t) = e t, \bar{w}_1(s, t) = e(s + t), \bar{w}(t) = e t, \bar{v}(t) = \bar{v}_0(t) = \frac{e + 1}{2}. Then the radius of convergence is given by

\bar{r} = 0.0321.

Under the new approach, we can choose w_0(t) = (e - 1)t, w_1(s, t) = (e - 1)(s + t), v_0(t) = \frac{e + 1}{2}, w(t) = 2e^{r_0}t, r_1 = \frac{1}{e - 1}, r_2 = \frac{2}{(e - 1)(e + 5)} = r_0 and v(t) = \frac{1 + e^{r_0}}{2}. Then, the radius of convergence r is given by

r = 0.5003.

Hence, we have that

\bar{r} < r.

Moreover, we have

w_0(t) < \bar{w}_0(t), \quad w_1(s, t) < \bar{w}_1(s, t), \quad v_0(t) \le \bar{v}_0(t) \quad \text{and} \quad v(t) < \bar{v}(t).

That is, the remaining advantages stated in Remark 2.2 hold.

Example

Let X = C[0, 1] and consider the nonlinear integral equation of mixed Hammerstein type [1,2,6,7,8,9,12] defined by

x(s) = \int_0^1 G(s, t)\left(x(t)^{3/2} + \frac{x(t)^2}{2}\right)dt,

where the kernel G is the Green's function defined on the interval [0, 1]×[0, 1] by

G(s, t) = \begin{cases} (1 - s)t, & t \le s \\ s(1 - t), & s \le t. \end{cases}

The solution x^*(s) = 0 coincides with the solution of equation (1.2), where F : C[0, 1] → C[0, 1] is defined by

F(x)(s) = x(s) - \int_0^1 G(s, t)\left(x(t)^{3/2} + \frac{x(t)^2}{2}\right)dt.

Notice that

\left\|\int_0^1 G(s, t)\,dt\right\| \le \frac{1}{8}.
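This bound is easy to confirm: \int_0^1 G(s, t)\,dt = s(1 - s)/2, which attains its maximum 1/8 at s = 1/2. A small NumPy check of our own (the midpoint quadrature and grid sizes are assumptions):

```python
import numpy as np

def G(s, t):
    # Green's function on [0, 1] x [0, 1]
    return np.where(t <= s, (1.0 - s) * t, s * (1.0 - t))

t = (np.arange(2000) + 0.5) / 2000.0                  # midpoint nodes in (0, 1)
s_grid = np.linspace(0.0, 1.0, 101)
vals = np.array([np.mean(G(s, t)) for s in s_grid])   # ~ int_0^1 G(s, t) dt
assert np.allclose(vals, s_grid * (1.0 - s_grid) / 2.0, atol=1e-4)
assert vals.max() <= 1.0 / 8.0 + 1e-6                 # the 1/8 bound
```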

Then, we have that

F'(x)y(s) = y(s) - \int_0^1 G(s, t)\left(\frac{3}{2}x(t)^{1/2} + x(t)\right)y(t)\,dt,

so, since F'(x^*(s)) = I,

\|F'(x^*)^{-1}(F'(x) - F'(y))\| \le \frac{1}{8}\left(\frac{3}{2}\|x - y\|^{1/2} + \|x - y\|\right).

Therefore, we can choose

w_0(t) = w(t) = \frac{1}{8}\left(\frac{3}{2}t^{1/2} + t\right)

and

v_0(t) = v(t) = 1 + w_0(t), \quad w_1(s, t) = \frac{1}{2}(w_0(s) + w_0(t)).

The results in [1] cannot be used to solve this problem, since F' is not Lipschitz. However, our results apply. Indeed, using the above choice of functions, we get that

r = 0.3965.

Competing Interests

The authors declare no competing interests.

References

  1. S. Amat, S. Busquier, M. Grau-Sánchez, and M. A. Hernández-Verón, "On the efficiency of a family of Steffensen-like methods with frozen divided differences," Computational Methods in Applied Mathematics, vol. 17, no. 2, 2017.
  2. I. K. Argyros, Computational Theory of Iterative Methods, vol. 15 of Studies in Computational Mathematics, Elsevier, 2007.
  3. I. K. Argyros and S. George, "Ball convergence of a sixth order iterative method with one parameter for solving equations under weak conditions," Calcolo, vol. 53, no. 4, pp. 585–595, 2016.
  4. H. Ren and I. K. Argyros, "Improved local analysis for a certain class of iterative methods with cubic convergence," Numerical Algorithms, vol. 59, no. 4, pp. 505–521, 2012.
  5. I. K. Argyros, Y. J. Cho, and S. George, "Local convergence for some third-order iterative methods under weak conditions," Journal of the Korean Mathematical Society, vol. 53, no. 4, pp. 781–793, 2016.
  6. A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, "A modified Newton-Jarratt's composition," Numerical Algorithms, vol. 55, no. 1, pp. 87–99, 2010.
  7. A. Cordero and J. R. Torregrosa, "Variants of Newton's method for functions of several variables," Applied Mathematics and Computation, vol. 183, no. 1, pp. 199–208, 2006.
  8. A. Cordero and J. R. Torregrosa, "Variants of Newton's method using fifth-order quadrature formulas," Applied Mathematics and Computation, vol. 190, no. 1, pp. 686–698, 2007.
  9. M. Grau-Sánchez, À. Grau, and M. Noguera, "On the computational efficiency index and some iterative methods for solving systems of nonlinear equations," Journal of Computational and Applied Mathematics, vol. 236, no. 6, pp. 1259–1266, 2011.
  10. H. H. Homeier, "On Newton-type methods with cubic convergence," Journal of Computational and Applied Mathematics, vol. 176, no. 2, pp. 425–432, 2005.
  11. J. Kou, Y. Li, and X. Wang, "Some modifications of Newton's method with fifth-order convergence," Journal of Computational and Applied Mathematics, vol. 209, no. 2, pp. 146–152, 2007.
  12. A. N, J. A. Romero, and A. Hernandez, Aproximación de soluciones de algunas ecuaciones integrales de Hammerstein mediante métodos iterativos tipo Newton [Approximation of solutions of some Hammerstein integral equations by Newton-type iterative methods], XXI Congreso de Ecuaciones Diferenciales y Aplicaciones, Universidad de Castilla-La Mancha.
  13. W. C. Rheinboldt, "An adaptive continuation process for solving systems of nonlinear equations," in Mathematical Models and Numerical Methods, vol. 3 of Banach Center Publications, pp. 129–142, PWN, Warsaw, 1978.
  14. J. R. Sharma and P. Gupta, "An efficient fifth order method for solving systems of nonlinear equations," Computers & Mathematics with Applications, vol. 67, no. 3, pp. 591–601, 2014.
  15. J. F. Traub, Iterative Methods for the Solution of Equations, AMS Chelsea Publishing, 1982.