^{1}Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA

^{2}Department of Mathematical and Computational Sciences, NIT Karnataka, 575 025, India

^{*}Corresponding author: Santhosh George

In the present paper, we present a local convergence analysis of a Steffensen-like method considered also in Amat et al. [ ].

Recently, Amat et al. in [ ] studied the method defined for each n = 0, 1, 2, … by

x_{n+1} = z_{n}^{(k)},  z_{n}^{(j+1)} = z_{n}^{(j)} − A_{n}^{−1}F(z_{n}^{(j)}),  j = 0, 1, …, k − 1,   (1.1)
z_{n}^{(0)} = x_{n},  x_{0} ∈ D,  A_{n} = [x_{n}, x_{n} + F(x_{n}); F],

where [⋅, ⋅; F]: D × D → L(X, Y) denotes a divided difference of order one for the operator F.
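To make the structure of scheme (1.1) concrete, here is a minimal scalar sketch. The function `steffensen_like` and its parameters are illustrative, not from the paper; in the operator setting the scalar divided difference below is replaced by the operator [x, y; F].

```python
import math

def steffensen_like(f, x0, k=2, tol=1e-12, max_iter=50):
    """Scalar sketch of a k-step Steffensen-like iteration: the divided
    difference A_n = [x_n, x_n + f(x_n); f] is computed once per outer
    step and reused ("frozen") for the k inner substeps."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # scalar divided difference [x, x + f(x); f] = (f(x + f(x)) - f(x)) / f(x)
        A = (f(x + fx) - fx) / fx
        z = x                            # z_n^{(0)} = x_n
        for _ in range(k):               # z_n^{(j+1)} = z_n^{(j)} - A_n^{-1} f(z_n^{(j)})
            z = z - f(z) / A
        x = z                            # x_{n+1} = z_n^{(k)}
    return x

# Example: f(t) = e^t - 1 has the solution t* = 0.
root = steffensen_like(lambda t: math.exp(t) - 1.0, 0.5, k=3)
```

Note that the scheme uses no derivatives of f, only function evaluations, which is the practical appeal of Steffensen-type methods.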

Our goal is to weaken the assumptions in [ ], thereby extending the applicability of method (1.1).

The rest of the paper is organized as follows. In Section 2 we present the local convergence analysis. We also provide a radius of convergence, computable error bounds and a uniqueness result. Numerical examples are given in the last section.

The local convergence analysis of method (1.1) is based on some scalar functions and parameters. Let w_{0}, v_{0} and v_{1} be continuous, non-negative, non-decreasing functions defined on the intervals [0, +∞), [0, +∞) and [0, +∞)^{2}, respectively, with values in the interval [0, +∞), such that w_{0}(0) = v_{1}(0, 0) = 0.
Define the parameters r_{1}, r_{2} and r_{0} by

and

Define the functions h_{i} on the interval [0, r_{0}) by

and

where

We have that h_{i}(0) < 0 and h_{i}(t) → +∞ as t → r_{0}^{−}.
It then follows from the intermediate value theorem that each function h_{i} has zeros in the interval (0, r_{0}). Denote by r the smallest such zero.

Let U(x, ρ) and Ū(x, ρ) stand, respectively, for the open and closed balls in X with center x ∈ X and radius ρ > 0.
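Since the radius of convergence is characterized as the smallest positive zero guaranteed by the intermediate value theorem, it can be located numerically. The sketch below is illustrative: the sample function handed to `smallest_zero` is the classical Newton-type majorant equation with linear functions (giving the well-known zero 2/3), not the paper's exact h functions.

```python
def smallest_zero(h, r0, samples=10000, iters=80):
    """Locate the smallest zero of a continuous function h on (0, r0),
    assuming h(0) < 0 and h(t) -> +inf as t -> r0^- (so the intermediate
    value theorem guarantees a sign change)."""
    a = 0.0
    for i in range(1, samples + 1):
        b = min(i * r0 / samples, r0 * (1.0 - 1e-12))
        if h(a) < 0.0 <= h(b):          # first bracket with a sign change
            for _ in range(iters):       # refine by bisection
                m = 0.5 * (a + b)
                if h(m) < 0.0:
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
        a = b
    return None

# Illustrative stand-in for one of the h functions: with linear majorants
# (w_0(t) = w(t) = t) the classical equation t/(2(1 - t)) - 1 = 0 has the
# smallest positive zero t = 2/3 on (0, r_0) with r_0 = 1.
r = smallest_zero(lambda t: t / (2.0 * (1.0 - t)) - 1.0, r0=1.0)
```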

Theorem 2.1

Let F: D ⊆ X → Y be a continuous operator and let [⋅, ⋅; F]: D^{2} → L(X, Y) be a divided difference of order one for F. Suppose that there exist x^{*} ∈ D and continuous, non-negative, non-decreasing functions w_{0}, v_{0}, v_{1} defined on the intervals [0, +∞), [0, +∞) and [0, +∞)^{2}, respectively, with values on the interval [0, +∞), with w_{0}(0) = v_{1}(0, 0) = 0, such that for each x ∈ D

and

there exist continuous, non-decreasing functions w and v defined on the interval [0, r_{0}) with values on the interval [0, +∞) such that for each x, y ∈ D_{0} = D ∩ U(x^{*}, r_{0})

and

where the parameter r_{0} is defined by (2.1) and Ū(x^{*}, r) ⊆ D. Then the sequence {x_{n}} generated
for x_{0} ∈ U(x^{*}, r) \ {x^{*}} by method (1.1) is well defined in U(x^{*}, r), remains in U(x^{*}, r) for each n = 0, 1, 2, … and converges to x^{*}. Moreover, the following estimates hold

Furthermore, if there exists R_{0} ≥ r such that

then the limit point x^{*} is the only solution of the equation F(x) = 0 in D_{1} = D ∩ Ū(x^{*}, R_{0}).

Proof

We shall show using mathematical induction that the sequence {x_{n}} satisfies (2.10) and converges to x^{*}. By the hypothesis x_{0} ∈ U(x^{*}, r) \ {x^{*}}, (2.4) and the definition of r, we have

It follows from (2.12) and the Banach lemma on invertible operators [ ] that A_{0}^{−1} ∈ L(Y, X) and

We can write by (2.3) that

Then, we have by (2.6) that

We also have that

so x_{0} + F(x_{0}) ∈ U(x^{*}, r_{0}), and A_{0}^{−1} ∈ L(Y, X) with

so,

We also have by method (1.1) that the iterates z_{0}^{(1)}, z_{0}^{(2)}, …, z_{0}^{(k)} = x_{1} are well defined. Let

Using (2.2), (2.3), (2.7), (2.13), (2.17) and (2.18), we obtain, in turn, that

which shows (2.10) for the first substep, and z_{0}^{(1)} ∈ U(x^{*}, r), since z_{0}^{(0)} = x_{0} ∈ U(x^{*}, r). Similarly, we get that

which shows (2.10) for the second substep. Continuing in this way, we obtain

which shows (2.10) for each substep, and z_{0}^{(m+1)} ∈ U(x^{*}, r). Exchanging z_{0}^{(0)}, z_{0}^{(1)}, …, z_{0}^{(k)} by z_{i}^{(0)}, z_{i}^{(1)}, …, z_{i}^{(k)} in the preceding estimates, we arrive at estimates (2.10). Then, from (2.13), we have the estimate

where the factor raised to the power k(n + 1) belongs to [0, 1), so we deduce that lim_{n→∞} x_{n} = x^{*} and x_{k+1} ∈ U(x^{*}, r). To show the uniqueness part, let y^{*} ∈ D_{1} with F(y^{*}) = 0 and define Q = [x^{*}, y^{*}; F]. Then, using (2.11), we obtain

so Q^{−1} ∈ L(Y, X), and from 0 = F(y^{*}) − F(x^{*}) = Q(y^{*} − x^{*}), we conclude that y^{*} = x^{*}.
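The convergence mechanism behind the final estimate can be summarized schematically (c below is a stand-in for the bracketed factor evaluated at ∥x_{0} − x^{*}∥, which lies in [0, 1)):

∥x_{n+1} − x^{*}∥ ≤ c^{k}∥x_{n} − x^{*}∥ ≤ ⋯ ≤ c^{k(n+1)}∥x_{0} − x^{*}∥ → 0 as n → ∞,

so the k inner substeps compound the contraction factor c at every outer step.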

Remark 2.2

The sufficient semilocal convergence conditions were given in non-affine invariant form in [ ]:

for each x, y ∈ D

and

Clearly, conditions (2.4), (2.5) and (2.7) are weaker than (2.27), (2.26) and (2.25), respectively (see also the numerical examples). Moreover, if the functions are chosen to be linear with suitable constants L_{0}, L_{1}, L_{2}, L_{3}, L_{4}, and these constants are all equal, then our conditions (2.4), (2.5) and (2.7) reduce to (2.27), (2.26) and (2.25), respectively. Moreover, if D_{0} is a strict subset of D, then

and

Hence, even in this special case the new results are better, leading to: a wider choice of initial guesses (the new radius of convergence will be at least as large); error bounds on the distances ∥x_{n} − x^{*}∥ that are at least as tight (leading to fewer iterations to obtain a desired error tolerance); and at least as precise information on the location of the solution. Finally, it is also worth noticing that conditions (2.4), (2.5) and (2.7) are weaker than (2.24) used in [ ].

We present two examples in this section. In the first one, we show that the claims at the end of Remark 2.2 are justified. In the second example, we show that the results in [ ] cannot be applied, since F^{'} is not Lipschitz, whereas our results can.

Example

Let X = Y = ℝ^{3} and define F on D for w = (x, y, z)^{T} by

Then, the Fréchet-derivative is given by

Using the approach in [ ], we obtain

Under the new approach, we can choose

Hence, we have that

Moreover, we have

and

That is, the rest of the advantages stated in Remark 2.2 also hold.
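As a quick numerical sanity check of the "center versus full Lipschitz constant" point of Remark 2.2, the sketch below uses the scalar function g(x) = e^{x} − 1 with solution 0 on [−1, 1] as an assumed illustrative component of the example; the grid and names are illustrative, not the paper's exact constants.

```python
import math

# Grid check on [-1, 1] for g(x) = exp(x) - 1 with solution x* = 0:
# the "center" constant L0 = sup |g'(x) - g'(x*)| / |x - x*| (here g'(x) = e^x)
# is strictly smaller than the full Lipschitz constant L of g' on the same
# interval, which is what yields the larger convergence radius.
xs = [i / 100.0 - 1.0 for i in range(201)]
L0 = max(abs(math.exp(x) - 1.0) / abs(x) for x in xs if x != 0.0)
L = max(abs(math.exp(x) - math.exp(y)) / abs(x - y)
        for x in xs for y in xs if x != y)
```

On this grid L0 equals e − 1 (attained at x = 1), while L is close to e, so conditions posed only at the solution are genuinely less restrictive.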

Example

Let X = Y = C[0, 1], equipped with the max norm, and consider the nonlinear integral equation

where the kernel

The solution is x^{*}(s) = 0.

Notice that

Then, we have that

so, since F^{'}(x^{*}(s))

Therefore, we can choose

and

Then, the results in [ ] cannot be applied, since F^{'} is not Lipschitz. However, our results can be applied. Indeed, using the above choice of functions, we get that
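The failure of the Lipschitz condition can be illustrated numerically. The function φ(t) = t^{3/2} below is an assumed stand-in for the kind of term that makes F^{'} only Hölder continuous, not a reconstruction of the kernel above.

```python
import math

# phi(t) = t**(3/2) has derivative phi'(t) = 1.5*sqrt(t), which is Hölder
# continuous with exponent 1/2 but NOT Lipschitz near 0: the Lipschitz
# quotient blows up while the Hölder quotient stays bounded.
def dphi(t):
    return 1.5 * math.sqrt(t)

ts = (1e-2, 1e-4, 1e-6)
lipschitz = [abs(dphi(t) - dphi(0.0)) / t for t in ts]        # grows without bound
holder = [abs(dphi(t) - dphi(0.0)) / t ** 0.5 for t in ts]    # stays at 1.5
```

This is exactly the situation where Lipschitz-based conditions such as (2.25)-(2.27) fail but the weaker function-based conditions (2.4), (2.5), (2.7) still apply.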

The authors declare no competing interests.