
A NEW ADAPTIVE TRUST REGION ALGORITHM FOR OPTIMIZATION PROBLEMS∗


1 Introduction

Consider the following minimization problem:

min f(x), x ∈ R^n,    (1.1)

where f: R^n → R is a twice continuously differentiable function. This problem appears in many applications in medical science, optimal control, functional approximation, and curve fitting, as well as in other areas of science and engineering. Many methods have been presented to solve problem (1.1), including the conjugate gradient method (see [1–8]), the quasi-Newton method (see [9–14]), and the trust region method (see [15–28]).

As is known, the trust region method plays an important role in nonlinear optimization and is among the most efficient methods for solving problem (1.1). The trust region method, first proposed by Powell in [29], uses an iterative structure. At each iterative point x_k, a trial step d_k is obtained by solving the following subproblem:

min g_k^T d + (1/2) d^T B_k d,  subject to ‖d‖ ≤ Δ_k,    (1.2)

where g_k = ∇f(x_k), B_k is an n×n symmetric approximation of ∇²f(x_k), and Δ_k is the trust region radius.
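For a positive definite B_k, subproblem (1.2) can be solved to high accuracy by a simple multiplier search: either the unconstrained Newton step lies inside the ball, or the solution lies on the boundary for some multiplier λ ≥ 0. The sketch below is purely illustrative and is my own construction, not the solver used in the paper (the experiments in Section 5 use Steihaug's method); all function names are mine.

```python
import numpy as np

# Illustrative sketch: for SPD B, solve min g^T d + 0.5 d^T B d s.t. ||d|| <= delta
# either by the interior Newton step -B^{-1} g, or on the boundary by finding
# lam >= 0 with ||(B + lam I)^{-1} g|| = delta (simple bisection on lam).
def solve_tr_subproblem(g, B, delta, tol=1e-10):
    n = len(g)
    d = -np.linalg.solve(B, g)
    if np.linalg.norm(d) <= delta:
        return d                                  # interior Newton step
    lo, hi = 0.0, 1.0
    while np.linalg.norm(np.linalg.solve(B + hi * np.eye(n), -g)) > delta:
        hi *= 2.0                                 # bracket the multiplier
    while hi - lo > tol:                          # bisect on the multiplier
        lam = 0.5 * (lo + hi)
        d = np.linalg.solve(B + lam * np.eye(n), -g)
        lo, hi = (lam, hi) if np.linalg.norm(d) > delta else (lo, lam)
    return d

g = np.array([4.0, -2.0])
B = np.array([[3.0, 0.0], [0.0, 1.0]])
d = solve_tr_subproblem(g, B, delta=0.5)          # trust region is active here
```

With this data the Newton step has norm larger than 0.5, so the returned step lies on the trust region boundary and still gives a negative model value.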

Solving the trust region subproblem (1.2) plays a key role in solving problem (1.1) with a trust region algorithm. In traditional trust region methods, at each iterative point x_k, the trust region radius Δ_k is independent of the quantities g_k, B_k, and ‖d_{k-1}‖/‖y_{k-1}‖, where

y_{k-1} = g_k − g_{k-1},  d_{k-1} = x_k − x_{k-1}.

However, these quantities contain first- and second-order information that is not used. Furthermore, the initial trust region radius and the artificial parameters used to adjust it must be chosen in advance, and this choice has an important influence on the numerical results. Therefore, Zhang [25] proposed a modified trust region algorithm by replacing Δ_k with c^p ‖g_k‖ ‖B̂_k^{-1}‖, where B̂_k is a positive definite modification of ∇²f(x_k), 0 < c < 1, and p is a positive integer. With this modified trust region subproblem, p is adjusted instead of updating Δ_k. However, at every iteration, the Hessian of f must be calculated. On the basis of the technique described in [25], Zhang, Zhang, and Liao [19] presented a new trust region subproblem whose radius uses Δ_k = c^p ‖g_k‖ ‖B̂_k^{-1}‖; the definitions of c, p, and g_k remain the same as in [25], and B̂_k is a positive definite matrix based on the Schnabel and Eskow [30] modified Cholesky factorization. There are two drawbacks here: the inverse matrix B̂_k^{-1} and the Euclidean norm ‖B̂_k^{-1}‖ must be computed at each iterative point x_k. Cui and Wu [24] presented a trust region radius that replaces Δ_k with µ_k ‖g_k‖ ‖B̂_k^{-1}‖, where µ_k > 0 satisfies an update rule; for the same reason as in Zhang, Zhang, and Liao [19], this algorithm requires the calculation of B̂_k^{-1} at every iteration. Motivated by the first- and second-order information contained in g_k and B_k, Li [26] proposed a modified trust region radius Δ_k = c^p ‖g_k‖³ / (g_k^T B_k g_k), which has the advantage of avoiding the computation of the inverse matrix B̂_k^{-1} and of ‖B̂_k^{-1}‖ at each iterative point x_k, thereby decreasing the workload and computational time involved. However, that algorithm uses only gradient information. Some authors have also applied adaptive trust region methods to solve nonlinear equations [31–33].
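To make the adaptive idea concrete, the sketch below evaluates a Li-type radius of the form c^p ‖g‖³ / (g^T B g), which needs only the gradient and the current Hessian approximation. This is an illustration of the general mechanism described above (the exact formulas in [25], [19], [24], and [26] should be checked against those papers); the function name and test data are mine.

```python
import numpy as np

# Illustrative sketch of an adaptive trust region radius of the Li type:
# Delta = c^p * ||g||^3 / (g^T B g). Increasing p in the inner cycle shrinks
# the radius geometrically, which replaces the usual radius-update heuristics.
def li_radius(g, B, c=0.5, p=0):
    """Adaptive radius c^p * ||g||^3 / (g^T B g); B is assumed SPD."""
    return c**p * np.linalg.norm(g)**3 / (g @ B @ g)

g = np.array([1.0, -2.0])
B = np.array([[2.0, 0.0], [0.0, 1.0]])  # SPD approximation of the Hessian

r0 = li_radius(g, B, p=0)   # radius before any rejection
r3 = li_radius(g, B, p=3)   # after three inner-cycle rejections: scaled by c^3
```

Note that adjusting p multiplies the radius by c each time, so no separate radius-update parameters are needed.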



The remainder of this article is organized as follows. In the next section, we briefly review some basic results on a modified quasi-Newton secant equation and present an adaptive algorithm for solving problem (1.1). In Section 3, we prove the global convergence of the proposed method. In Section 4, we prove the superlinear convergence of the algorithm under suitable conditions. Numerical results are reported in Section 5. We conclude this article in Section 6.

Throughout this article, unless otherwise specified, ‖·‖ denotes the Euclidean norm of vectors or matrices.

2 New Algorithm

2.1 Modified secant equation

In this subsection, we introduce a modified secant equation. In the general quasi-Newton method, the iterate satisfies x_{k+1} = x_k + d_k, where B_k is an approximation of the Hessian of f at each iterative point, and {B_k} satisfies the following secant equation:

B_{k+1} d_k = y_k,    (2.1)

where d_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. However, only gradient information is used in (2.1). Motivated by this observation, we would like B_k to use not only gradient information but also function value information. This problem has been studied by several authors. In particular, a modified secant equation that uses both gradient and function value information was proposed by Wei, Li, and Qi [11]. The modified secant equation is defined as

B_{k+1} d_k = q_k,    (2.2)

From the definition of p_k (k ∈ Γ) in "Step 3–Step 4–Step 3", we know that the solution d_k corresponds to the following adaptive trust region subproblem


This property holds for all k.

The results of Theorems 2.1 and 2.2 in [11] show that this approximation has an advantage over (2.1). On the basis of the modified secant equation, several authors have presented efficient methods and shown, in numerical experiments, that their algorithms become more effective when y_k is replaced by q_k (see [34–36], etc.).
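The advantage of q_k over y_k can be checked numerically. The sketch below uses the Wei–Li–Qi vector as I recall it from [11], q_k = y_k + (ϑ_k/‖d_k‖²) d_k with ϑ_k = 2(f_k − f_{k+1}) + (g_{k+1} + g_k)^T d_k; treat the exact constants as an assumption to be verified against [11]. On a non-quadratic function, d^T q_k should match the true curvature d^T ∇²f(x_{k+1}) d at least as well as d^T y_k does.

```python
import numpy as np

# Sketch of the modified secant vector (my reading of [11], not a verbatim copy):
#   q_k = y_k + (theta_k / ||d_k||^2) d_k,
#   theta_k = 2 (f_k - f_{k+1}) + (g_{k+1} + g_k)^T d_k.
def modified_secant(fk, fk1, gk, gk1, d):
    y = gk1 - gk
    theta = 2.0 * (fk - fk1) + (gk1 + gk) @ d
    return y + (theta / (d @ d)) * d

# 1-D check on f(x) = x^3, whose second derivative is 6x.
f = lambda x: x**3
g = lambda x: np.array([3.0 * x**2])
hess = lambda x: 6.0 * x

xk, xk1 = 1.0, 1.1
d = np.array([xk1 - xk])
q = modified_secant(f(xk), f(xk1), g(xk), g(xk1), d)
y = g(xk1) - g(xk)

err_q = abs(d @ q - hess(xk1) * (d @ d))  # curvature error using q_k
err_y = abs(d @ y - hess(xk1) * (d @ d))  # curvature error using y_k
```

For this data q_k = 0.64 while y_k = 0.63, and the curvature error of q_k (0.002) is smaller than that of y_k (0.003), consistent with the higher-order approximation property claimed in [11].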

2.2 Algorithmic structure

In this subsection, we present an adaptive trust region method to solve (1.1), motivated by the general trust region method and the modified secant equation. In each iteration, a trial step d_k is generated by solving an adaptive trust region subproblem in which both the function value of f(x) and the gradient value of f(x) at x_k are used:

min g_k^T d + (1/2) d^T B_k d,  subject to ‖d‖ ≤ Δ_k = c^p ‖g_k‖ ‖d_{k-1}‖ / ‖q_{k-1}‖,    (2.4)

where 0 < c < 1, p is a nonnegative integer, Δ_k is the trust region radius, and B_k is generated by the modified BFGS update formula

B_{k+1} = B_k − (B_k d_k d_k^T B_k) / (d_k^T B_k d_k) + (q_k q_k^T) / (q_k^T d_k),    (2.5)

where B_{k+1} satisfies the modified secant equation B_{k+1} d_k = q_k.
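The secant property of the BFGS-type update with y_k replaced by q_k can be verified directly. The sketch below is my own check, with the update written in the standard BFGS form; any q with q^T d > 0 works.

```python
import numpy as np

# BFGS-type update with the modified vector q in place of y:
#   B_{k+1} = B - (B d d^T B)/(d^T B d) + (q q^T)/(q^T d).
def modified_bfgs(B, d, q):
    Bd = B @ d
    return B - np.outer(Bd, Bd) / (d @ Bd) + np.outer(q, q) / (q @ d)

B = np.eye(3)
d = np.array([0.5, -0.2, 0.1])
q = np.array([0.7, -0.1, 0.3])   # satisfies q^T d > 0

B1 = modified_bfgs(B, d, q)
# B_{k+1} satisfies the modified secant equation B_{k+1} d = q ...
secant_residual = np.linalg.norm(B1 @ d - q)
# ... and stays positive definite when q^T d > 0 and B is SPD (cf. Lemma 2.1).
min_eig = np.linalg.eigvalsh(B1).min()
```

The secant residual is zero up to rounding by construction of the update, and the smallest eigenvalue is positive, matching the curvature condition discussed below in Lemma 2.1.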

Let d_k be the optimal solution of (2.4). The actual reduction is defined by

Ared_k = f(x_k) − f(x_k + d_k).    (2.6)



The predicted reduction is defined by

Pred_k = −(g_k^T d_k + (1/2) d_k^T B_k d_k).    (2.7)

Then, we define r_k as the ratio between Ared_k and Pred_k:

r_k = Ared_k / Pred_k.

We now list the steps of the new trust region algorithm (Algorithm 2.1) as follows.

The purpose of this article is to present an efficient adaptive trust region algorithm to solve (1.1). Motivated by the adaptive technique, the proposed method possesses the following nice properties: (i) the trust region radius uses not only the gradient value but also the function value; (ii) computing the inverse matrix B̂_k^{-1} and the value of ‖B̂_k^{-1}‖ at each iterative point x_k is not required; and (iii) the associated workload and computational time are reduced, which is very important for medium-scale problems.


Step 1 Given x_0 ∈ R^n and the symmetric positive definite matrix B_0 = I ∈ R^{n×n}, let 0 < α < 1, 0 < c < 1, p := 0, ε > 0, and Δ_0 := ‖g_0‖. Set k := 0.

Step 2 If ‖g_k‖ < ε, then stop. Otherwise, go to Step 3.

Step 3 Solve the adaptive trust region subproblem (2.4) to obtain d_k.

Step 4 Determine the actual reduction Ared_k and the predicted reduction Pred_k using (2.6) and (2.7), respectively. Calculate r_k = Ared_k / Pred_k. If r_k < α, then let p := p + 1 and go to Step 3; otherwise, go to Step 5.

Step 5 Let x_{k+1} := x_k + d_k, and compute q_k. If q_k^T d_k > 0, then B_{k+1} is updated by the modified BFGS formula (2.5); otherwise, let B_{k+1} := B_k.

Step 6 Let k := k + 1, p := 0, and go to Step 2.
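The steps above can be sketched compactly in code. The version below is my own simplified reading, not the paper's implementation: the subproblem is solved by a Cauchy-point step instead of Steihaug's method, the adaptive radius is taken as Δ_k = c^p ‖g_k‖ ‖d_{k-1}‖ / ‖q_{k-1}‖ with Δ_0 = ‖g_0‖ as in Step 1, and the inner cycle is capped as a safeguard.

```python
import numpy as np

def cauchy_step(g, B, delta):
    """Minimize g^T d + 0.5 d^T B d along -g inside the ball ||d|| <= delta."""
    gBg = g @ B @ g
    gnorm = np.linalg.norm(g)
    t = delta / gnorm if gBg <= 0 else min(gnorm**2 / gBg, delta / gnorm)
    return -t * g

def nntr_sketch(f, grad, x0, c=0.5, alpha=0.1, eps=1e-6, max_iter=500):
    x, B = x0.astype(float), np.eye(len(x0))      # Step 1
    d_prev = q_prev = None
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:               # Step 2: stopping test
            break
        p = 0
        while True:                               # inner cycle: Step 3 - Step 4
            if d_prev is None:
                delta = c**p * np.linalg.norm(g)  # Delta_0 = ||g_0||
            else:
                delta = (c**p * np.linalg.norm(g)
                         * np.linalg.norm(d_prev) / np.linalg.norm(q_prev))
            d = cauchy_step(g, B, delta)
            ared = f(x) - f(x + d)                # (2.6)
            pred = -(g @ d + 0.5 * d @ B @ d)     # (2.7)
            if ared >= alpha * pred or p > 30:
                break
            p += 1                                # r_k < alpha: shrink and retry
        x_new = x + d                             # Step 5
        g_new = grad(x_new)
        theta = 2.0 * (f(x) - f(x_new)) + (g_new + g) @ d
        q = (g_new - g) + (theta / (d @ d)) * d   # modified secant vector
        if q @ d > 0:                             # modified BFGS update (2.5)
            Bd = B @ d
            B = B - np.outer(Bd, Bd) / (d @ Bd) + np.outer(q, q) / (q @ d)
        x, d_prev, q_prev = x_new, d, q           # Step 6
    return x

# Usage on a convex quadratic whose minimizer is (1, -2).
f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 2.0)**2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
xstar = nntr_sketch(f, grad, np.array([5.0, 5.0]))
```

On this quadratic the early large steps are rejected, p grows until the shrunken radius yields an acceptable ratio, and the iterates then converge to the minimizer.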

Remark 2.1 (i) The procedure "Step 3–Step 4–Step 3" of Algorithm 2.1 is called the inner cycle.

where p_k is the largest value of p obtained in "Step 3–Step 4–Step 3" at iterate x_k. Then, we have

Similar to the famous result of Powell [38], we provide the following lemma.


We now give the following lemma.

Lemma 2.1 If B_{k+1} is updated by the modified BFGS formula (2.5), then q_k^T d_k > 0 holds if and only if B_{k+1} inherits the positive definiteness of B_k.

Proof The result follows similarly to the proofs in [32, 37]; we omit the details.

3 Convergence Analysis

To establish the global convergence of Algorithm 2.1, we make the following common assumption.

Assumption 3.1

(i) The level set Ω = {x ∈ R^n | f(x) ≤ f(x_0)} is bounded.

(ii) {B_k} is uniformly bounded; that is, there exists a positive constant M > 0 such that ‖B_k‖ ≤ M for all k.


and the trust region radius in subproblem (2.4) is taken as Δ_k = c^p ‖g_k‖ ‖d_{k-1}‖ / ‖y_{k-1}‖. The remaining steps of Algorithm 2.1 remain unchanged, and we thus also obtain an adaptive trust region algorithm using the BFGS formula (2.9). The trust region radius of this algorithm uses only gradient information. However, when we use the modified BFGS formula (2.5), our algorithm uses not only the function value information but also the gradient value information.

Lemma 3.1 If d_k is the solution of (2.4), then

Pred_k ≥ (1/2) ‖g_k‖ min { Δ_k, ‖g_k‖ / ‖B_k‖ }.
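The bound in Lemma 3.1 is not fully legible in this copy; it is presumably the classical Powell-type estimate Pred_k ≥ (1/2) ‖g_k‖ min{Δ_k, ‖g_k‖/‖B_k‖}, which holds in particular for the Cauchy point. The sketch below spot-checks that assumed inequality numerically; it is my own check, not the paper's proof.

```python
import numpy as np

# Spot-check of the assumed Powell-type bound with a Cauchy-point step:
#   Pred >= 0.5 * ||g|| * min(Delta, ||g|| / ||B||_2).
rng = np.random.default_rng(0)
margins = []
for _ in range(100):
    n = 4
    A = rng.standard_normal((n, n))
    B = A @ A.T + n * np.eye(n)                  # SPD model Hessian
    g = rng.standard_normal(n)
    delta = rng.uniform(0.1, 2.0)
    gnorm = np.linalg.norm(g)
    t = min(gnorm**2 / (g @ B @ g), delta / gnorm)
    d = -t * g                                   # Cauchy point of the subproblem
    pred = -(g @ d + 0.5 * d @ B @ d)
    bound = 0.5 * gnorm * min(delta, gnorm / np.linalg.norm(B, 2))
    margins.append(pred - bound)
worst_margin = min(margins)   # nonnegative if the assumed bound holds
```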

By the definitions of Algorithm 2.1 and Lemma 3.1, one can obtain the following result.

Lemma 3.2 Suppose that Assumption 3.1 (i) holds, and {x_k} is a sequence generated by Algorithm 2.1. Then, {x_k} ⊂ Ω for all k.


and the predicted reduction is defined as in (2.7).

Lemma 3.3 Suppose that Assumption 3.1 holds; then,

where p_k is the largest value of p obtained in the internal iterates at the current iterate k.

Proof Let d_k be the optimal solution of the trust region subproblem (2.4). From Step 4 of Algorithm 2.1, the trial step accepted at the current iteration point is a feasible solution of (2.4); then,


This completes the proof.

Lemma 3.4 Suppose that Ared_k and Pred_k are given by (2.6) and (2.7), respectively. Then, we have

Proof By the definitions of Ared_k and Pred_k and the Taylor expansion, the result is straightforward.

The following lemma indicates that the inner cycle "Step 3–Step 4–Step 3" of Algorithm 2.1 does not loop infinitely.

Lemma 3.5 Suppose that Assumption 3.1 holds and that the sequence {x_k} is generated by Algorithm 2.1. Then, Algorithm 2.1 is well defined; that is, the inner cycle of Algorithm 2.1 terminates after a finite number of internal iterates.

Proof By contradiction, suppose that the inner cycle cannot be left after a finite number of internal iterates; that is, Algorithm 2.1 cycles infinitely in "Step 3–Step 4–Step 3" at iterate k. Then, we have

and thus, c^p → 0 as p → ∞.

Therefore,we get

where d_k^p is a solution of the adaptive trust region subproblem (2.4) corresponding to p in the kth iterate,

Note that d_k is not the optimal solution of (2.4), and it is not difficult to conclude that ‖g_k‖ ≥ ε.

Now,Lemmas 3.3 and 3.4 lead to

which implies that there exists a sufficiently large p such that

This contradicts the fact that r_k < α, which means that Algorithm 2.1 is well defined; therefore, the proof is completed.

On the basis of the above lemmas and analysis, we can obtain the global convergence result of Algorithm 2.1 as follows.


Theorem 3.6 (Global Convergence) Suppose that Assumption 3.1 holds, and {x_k} is a sequence generated by Algorithm 2.1. If ε = 0, then Algorithm 2.1 either stops finitely or satisfies

liminf_{k→∞} ‖g_k‖ = 0.    (3.4)

Proof Suppose, on the contrary, that Algorithm 2.1 does not stop. If (3.4) is not true, then there exist a positive constant ε_0 and an infinite set Γ such that ‖g_k‖ ≥ ε_0 for all k ∈ Γ.


Because the sequence {f(x_k)} is bounded below, we obtain

By Assumption 3.1 (ii), there exists a positive constant M such that ‖B_k‖ ≤ M for all k, and from Lemma 3.3, we get

(ii) Suppose that in Step 5 of Algorithm 2.1, B_{k+1} is updated by the following rule: if y_k^T d_k > 0 holds, then B_{k+1} is updated by the standard BFGS formula

B_{k+1} = B_k − (B_k d_k d_k^T B_k) / (d_k^T B_k d_k) + (y_k y_k^T) / (y_k^T d_k),    (2.9)

Hence, we can assume that p_k ≥ 1 for all k ∈ Γ.

where q_k = y_k + (ϑ_k / ‖d_k‖²) d_k and ϑ_k = 2(f_k − f_{k+1}) + (g_{k+1} + g_k)^T d_k. When f is twice continuously differentiable and B_{k+1} is generated by the BFGS formula, with B_k = I if k = 0, this modified secant equation (2.2) possesses the following nice property:

d_k^T q_k = d_k^T ∇²f(x_{k+1}) d_k + O(‖d_k‖⁴).

is unacceptable.

Letwe obtain

Note that from Lemmas 3.1 and 3.4, we have


Therefore, we obtain

Because p_k → +∞ as k → +∞ with k ∈ Γ, we have

This contradicts inequality (3.9), and the contradiction shows that (3.4) holds. This completes the proof.


4 Local Convergence

In this section, we will prove the superlinear convergence of Algorithm 2.1 under suitable conditions.

Theorem 4.1 (Superlinear Convergence) Suppose that Assumption 3.1 holds and that {x_k}, generated by Algorithm 2.1, converges to x∗, where d_k is the solution of (2.4). Assume that ∇²f(x∗) is positive definite and that ∇²f(x) is Lipschitz continuous in a neighborhood of x∗. If the condition

lim_{k→∞} ‖(B_k − ∇²f(x∗)) d_k‖ / ‖d_k‖ = 0

holds, then Algorithm 2.1 is superlinearly convergent; that is,

‖x_{k+1} − x∗‖ = o(‖x_k − x∗‖).

Proof By the definition of d_k and the results in [37], we have

Note that Theorem 3.6 implies that ‖g_k‖ → 0 as k → ∞,

and thus, we have d_k → 0.

Consider

where t_k ∈ (0, 1). Then, by the properties of the norm, we have

Dividing both sides of the above inequality (4.5) by ‖d_k‖, we obtain

Because ∇²f(x) is Lipschitz continuous in a neighborhood of x∗, we have

Moreover, because d_k → 0, we have

and note that ∇²f(x∗) is positive definite. Then, there exist η > 0 and k_0 ≥ 0 such that

Hence, we have

where

Thus, (4.7) implies that

that is,

which means that the sequence {x_k} generated by Algorithm 2.1 converges to x∗ superlinearly. This completes the proof.

5 Numerical Results

In this section, the numerical results obtained using the proposed method are reported. We call our modified adaptive trust region algorithm NNTR; Zhang's adaptive trust region algorithm [19] is denoted by ZTR, and Li's trust region algorithm [26] is denoted by LTR. In the experiments, all parameters were chosen as follows: c = 0.5, α = 0.1, ε = 10^{-5}, and B_0 = I ∈ R^{n×n}. The parameter for LTR was chosen as c_6 = 0.25. In the tests for NNTR, ZTR, and LTR, the trial step d_k was obtained by Steihaug's algorithm [39] for solving the trust region subproblem (1.2); to compare the methods fairly, we used the same subroutine to solve the related trust region subproblems. If p ≤ 15 holds, then the trial step d_k is accepted in the inner loop "Step 3–Step 4–Step 3" of NNTR and ZTR. We also terminate the algorithms and consider them to have failed if the number of iterations exceeds 1,200 for small-scale problems or 2,500 for medium-scale problems. All algorithms were implemented in MATLAB R2010b. The numerical experiments were performed on a PC with an Intel Pentium(R) Dual-Core CPU at 3.20 GHz, 2.00 GB of RAM, and the Windows 7 operating system. We test NNTR on both small-scale and medium-scale problems and compare it with ZTR [19] and LTR [26].

5.1 Specific dimension problems

In this subsection, we list our numerical results for some problems of specific dimension (so-called small-scale problems). These problems are listed in Table 5.1, and the test problems can be found in Moré et al [40]. In Table 5.1, "No." denotes the test problem, "Dim" denotes the dimension of the problem, "Function" denotes the name of the problem, "x0" denotes the initial point, "fopt(x)" refers to the optimal function value, and "–" denotes that the optimal function value varies. To demonstrate the performance of NNTR for the problems in Table 5.1, we also list the numerical results of ZTR and LTR. The detailed numerical results are reported in Table 5.2, in which the following notation is used: "NI" is the total number of iterations, "NF" is the number of function evaluations, "f(x)" is the function value at the final iteration, and "fail" indicates that the number of iterations exceeded 1,200 or that the algorithm failed to solve the problem.

Table 5.1 Problem descriptions for the small-scale testing problems


Remark 5.1 For problems 13 and 14 in Table 5.1, we use n = m = 30 in all experiments. From Moré [40], the optimal function values of functions 13 and 14 are known.

Table 5.2 Test results for the small-scale problems


From Table 5.2, we see that the performance of NNTR is better than that of ZTR and LTR on the problems in Table 5.1. Dolan and Moré [41] provided a tool to analyze the efficiency of these algorithms, where P(t) is the (cumulative) distribution function of the performance ratio. Figures 1 and 2 show the performance profiles of these algorithms corresponding to NI and NF, respectively, in Table 5.2 for the small-scale problems in Table 5.1. These two figures show that NNTR performs well on all of the problems compared with ZTR and LTR.

Figure 1 Performance profiles of the methods for these small-scale problems(NI)

Figure 2 Performance profiles of the methods for these small-scale problems(NF)
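The performance profile used in these figures is easy to compute. The sketch below follows the standard Dolan–Moré definition: for each solver s and problem p, the ratio r_{p,s} = t_{p,s} / min_s t_{p,s}, and P_s(t) is the fraction of problems with r_{p,s} ≤ t. The cost matrix here is made-up illustrative data, not the paper's results.

```python
import numpy as np

def performance_profile(T, t):
    """T: (n_problems, n_solvers) cost matrix; returns P_s(t) per solver."""
    ratios = T / T.min(axis=1, keepdims=True)   # r_{p,s} relative to best solver
    return (ratios <= t).mean(axis=0)           # fraction solved within factor t

T = np.array([[10.0, 12.0, 30.0],
              [20.0, 18.0, 25.0],
              [15.0, 45.0, 15.0],
              [40.0, 50.0, 80.0]])

P_at_1 = performance_profile(T, 1.0)   # fraction of problems each solver wins
P_at_3 = performance_profile(T, 3.0)   # fraction solved within 3x the best
```

P_s(1) is the win rate of each solver, while the value of t at which P_s(t) reaches 1 measures robustness; plotting P_s(t) over a range of t gives figures like Figures 1 to 5.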

5.2 Variable dimension problems

In this subsection, numerical results are reported for some variable-dimension problems whose dimension n is chosen in the range [30, 1500] (so-called medium-scale problems). We list these problems in Table 5.3; these test problems can be found in Andrei [42]. The columns in Table 5.3 are similar to those in Table 5.1. As with the small-scale problems, to demonstrate the performance of NNTR on the problems in Table 5.3, the detailed numerical results of NNTR, LTR, and ZTR are listed in Tables 5.4, 5.5, and 5.6. The notation in Tables 5.4, 5.5, and 5.6 is also similar to that in Table 5.2. It is worth noting that "fail" indicates either that the number of iterations exceeded 2,500 or that the algorithm failed to solve the problem, "CPU time" represents the CPU time in seconds required to solve the problem, and "‖g(x)‖" indicates the norm of the final gradient.

Table 5.3 Problem descriptions for some testing problems


Table 5.4 Test results for these problems with NNTR


Table 5.5 Test results for these problems with LTR


Table 5.6 Test results for these problems with ZTR


Table 5.7 The sum of NI,NF,and CPU time in Tables 5.4,5.5,and 5.6


Figure 3 Performance profiles of the methods for these problems(NI)

Figure 4 Performance profiles of the methods for these problems(NF)

Figure 5 Performance profiles of the methods for these problems(CPU time)

For these problems, from Tables 5.4, 5.5, and 5.6, we see that the performance of NNTR is overall the best among the three algorithms. The NI and NF of ZTR are less than those of NNTR on problems 6, 7, 31, 37–39, 44, 46, 47, and 68–70; however, the CPU time required is larger than that of NNTR. In particular, from Table 5.7, the total values of NI, NF, and CPU time of NNTR are less than those of LTR and ZTR, and the difference in total CPU time is significant: about 6332.78 s and 52664.86 s, respectively, for solving all of the test problems in Table 5.3. Moreover, under the CPU time metric, NNTR is the top performer. We again used the performance profiles of Dolan and Moré [41] to compare these algorithms. Figures 3, 4, and 5 show the performance profiles of these algorithms corresponding to NI, NF, and CPU time, respectively, in Tables 5.4, 5.5, and 5.6 for the medium-scale problems in Table 5.3. These three figures show that NNTR performs well on all problems compared with LTR and ZTR.

6 Conclusions

In this article, using a modified secant equation with the BFGS update formula and a new trust region radius obtained by replacing Δ_k with c^p ‖g_k‖ ‖d_{k-1}‖ / ‖q_{k-1}‖, we present a new adaptive trust region method for solving optimization problems.

Our method possesses the following attractive properties: (i) the trust region radius uses not only the gradient information but also the function value information; (ii) the adaptive trust region radius is determined automatically from d_{k-1}, q_{k-1}, and g_k; (iii) the computation of the inverse matrix B̂_k^{-1} and the value of ‖B̂_k^{-1}‖ is not required at each iterative point x_k, which reduces the workload and time involved; and (iv) global convergence is established under some mild conditions, and we show that our method is locally superlinearly convergent.

It is well known that trust region methods are efficient techniques for optimization problems. The numerical results show that the proposed method is competitive with ZTR and LTR for small-scale problems and for medium-scale problems with a maximum dimension of 1,500, in the sense of the performance profile introduced by Dolan and Moré [41]. We therefore consider the numerical performance of the algorithm to be quite attractive. However, NNTR, ZTR, and LTR fail to solve the test problems at dimensions larger than 2,000 owing to memory limitations in MATLAB.

It would be interesting to test our method's performance on nonsmooth optimization problems and on some large-scale problems. It would also be interesting to see how the choice of trust region radius affects the numerical results obtained when solving optimization problems and constrained nonlinear equations. These topics will be the focus of our future research.

References

[1] Andrei N. An adaptive conjugate gradient algorithm for large-scale unconstrained optimization. J Comput Appl Math, 2016, 292: 83–91

[2]Yuan G.Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems.Optim Lett,2009,3:11–21

[3] Yuan G, Lu X. A modified PRP conjugate gradient method. Ann Oper Res, 2009, 166: 73–90

[4] Yuan G, Lu X, Wei Z. A conjugate gradient method with descent direction for unconstrained optimization. J Comput Appl Math, 2009, 233: 519–530

[5]Yuan G,Wei Z,Zhao Q.A modified Polak-Ribière-Polyak conjugate gradient algorithm for large-scale optimization problems.IEEE Tran,2014,46:397–413

[6]Yuan G,Meng Z,Li Y.A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations.J Optimiz Theory App,2016,168:129–152

[7]Yuan G,Zhang M.A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations.J Comput Appl Math,2015,286:186–195

[8]Yuan G,Zhang M.A modified Hestenes-Stiefel conjugate gradient algorithm for large-scale optimization.Numer Func Anal Opt,2013,34:914–937

[9]Zhang H,Ni Q.A new regularized quasi-Newton algorithm for unconstrained optimization.Appl Math Comput,2015,259:460–469

[10]Lu X,Ni Q.A quasi-Newton trust region method with a new conic model for the unconstrained optimization.Appl Math Comput,2008,204:373–384

[11]Wei Z,Li G,Qi L.New quasi-Newton methods for unconstrained optimization problems.Appl Math Comput,2006,175:1156–1188

[12]Yuan G,Wei Z.The superlinear convergence analysis of a nonmonotone BFGS algorithm on convex objective functions.Acta Math Sci,2008,24B:35–42

[13]Yuan G,Wei Z,Lu X.Global convergence of the BFGS method and the PRP method for general functions under a modified weak Wolfe-Powell line search.Appl Math Model,2017,47:811–825

[14] Yuan G, Sheng Z, Wang B, et al. The global convergence of a modified BFGS method for nonconvex functions. J Comput Appl Math, 2018, 327: 274–294

[15] Fan J, Yuan Y. A new trust region algorithm with trust region radius converging to zero//Li D. Proceedings of the 5th International Conference on Optimization: Techniques and Applications (December 2001, Hong Kong), 2001: 786–794

[16]Hei L.A self-adaptive trust region algorithm.J Comput Math,2003,21:229–236

[17]Shi Z,Guo J.A new trust region method for unconstrained optimization.J Comput Appl Math,2008,213:509–520

[18] Shi Z, Wang S. Nonmonotone adaptive trust region method. Eur J Oper Res, 2011, 208: 28–36

[19]Zhang X,Zhang J,Liao L.An adaptive trust region method and its convergence.Sci in China,2002,45A:620–631

[20]Zhou Q,Zhang Y,Xu F,et al.An improved trust region method for unconstrained optimization.Sci China Math,2013,56:425–434

[21]Ahookhosh M,Amini K.A nonmonotone trust region method with adaptive radius for unconstrained optimization problems.Comput Math Appl,2010,60:411–422

[22]Amini K,Ahookhosh M.A hybrid of adjustable trust-region and nonmonotone algorithms for unconstrained optimization.Appl Math Model,2014,38:2601–2612

[23]Sang Z,Sun Q.A new non-monotone self-adaptive trust region method for unconstrained optimization.J Appl Math Comput,2011,35:53–62

[24]Cui Z,Wu B.A new modified nonmonotone adaptive trust region method for unconstrained optimization.Comput Optim Appl,2012,53:795–806

[25]Zhang X.NN models for general nonlinear programming,in Neural Networks in optimization.Dordrecht/Boston/London:Kluwer Academic Publishers,2000

[26]Li G.A trust region method with automatic determination of the trust region radius.Chinese J Eng Math(in Chinese),2006,23:843–848

[27]Yuan G,Wei Z.A trust region algorithm with conjugate gradient technique for optimization problems.Numer Func Anal Opt,2011,32:212–232

[28]Yuan Y.Recent advances in trust region algorithms.Math Program,2015,151:249-281

[29]Powell M J D.A new algorithm for unconstrained optimization//Rosen J B,Mangasarian O L,Ritter K.Nonlinear Programming.New York:Academic Press,1970:31–65

[30]Schnabel R B,Eskow E.A new modified Cholesky factorization.SIAM J Sci Comput,1990,11:1136–1158

[31]Zhang J,Wang Y.A new trust region method for nonlinear equations.Math Method Oper Res,2003,58:283–298

[32]Yuan G,Wei Z,Lu X.A BFGS trust-region method for nonlinear equations.Computing,2011,92:317–333

[33] Fan J, Pan J. An improved trust region algorithm for nonlinear equations. Comput Optim Appl, 2011, 48: 59–70

[34]Yuan G,Wei Z,Li G.A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs.J Comput Appl Math,2014,255:86–96

[35]Yuan G,Wei Z.Convergence analysis of a modified BFGS method on convex minimizations.Comput Optim Appl,2010,47:237–255

[36]Xiao Y,Wei Z,Wang Z.A limited memory BFGS-type method for large-scale unconstrained optimization.Comput Math Appl,2008,56:1001–1009

[37]Yuan Y,Sun W.Optimization Theory and Methods.Beijing:Science Press,1997

[38]Powell M J D.Convergence properties of a class of minimization algorithms//Mangasarian Q L,Meyer R R,Robinson S M.Nonlinear Programming.Vol 2.New York:Academic Press,1975

[39]Steihaug T.The conjugate gradient method and trust regions in large scale optimization.SIAM J Numer Anal,1983,20:626–637

[40]Moré J J,Garbow B S,Hillstrom K H.Testing unconstrained optimization software.ACM Tran Math Software,1981,7:17–41

[41]Dolan E D,Moré J J.Benchmarking optimization software with performance profiles.Math Program,2002,91:201–213

[42]Andrei N.An unconstrained optimization test function collection.Advan Model Optim,2008,10:147–161

Zhou SHENG (洲), Gonglin YUAN (袁功林), Zengru CUI (崔曾如)
Acta Mathematica Scientia (English Series), 2018, No. 2
