Modeling and game strategy analysis of suppressing IADS for multiple fighters' cooperation

1.Introduction

Modern warfare is full of uncertainty, and the battlefield situation changes rapidly. For a single fighter in a campaign, it is hard both to improve its own safety and to weaken the fighting capability of the enemy as much as possible, since it always faces the problems of limited weapon and information resources. In contrast, a campaign by multiple fighters has many advantages, such as information sharing, resource optimization, action coordination and complementary capabilities, which are beneficial for improving operational efficiency.

In the research on coordination missions for multiple fighters, theoretical breakthroughs and great engineering progress have been achieved. An adaptive control system for multi-agents was proposed by Honeywell in [1], which supports the real-time control and coordination of the unmanned combat air vehicle (UCAV). A hierarchical distributed architecture for coordinated control was proposed by the U.S. Air Force Research Laboratory (AFRL) and the Air Force Institute of Technology in [2] and [3]. The problem of real-time coordination for heterogeneous platforms with many kinds of unmanned air vehicles (UAVs) was studied in [4]. The problem of target tracking and identification for multiple fighters was researched by Beard et al. [5] using the consistency dynamics method. In recent years, the study of coordination missions has become a hot topic, and both traditional methods and intelligent optimization methods are mostly used in theoretical research in this area. A path planning system structure with a collaborative management layer, a path planning layer and a trajectory control layer was presented in [6], based on the strategy of hierarchical decomposition. Collaborative functions were introduced into real-time path planning for combat in [7] by using the Voronoi graph method. The problem of cooperative reconnaissance was researched in [8] by employing evolutionary algorithms. In addition, the solution to the multiple traveling salesman problem with time windows and the simulated annealing method have also been used in researching cooperative reconnaissance for UAVs [9,10].

At present, the research on suppression of enemy air defense (SEAD) missions for multiple fighters [11–17] mostly employs network flow optimization and intelligent optimization methods such as particle swarm optimization. The problem of operational control for the UAV is researched based on multi-agent and complex-system theories in [14]. The UAV's offensive and defensive problems under the condition of interval-number information were considered in [17]. SEAD missions are typical dynamic game problems. Thus far, there has been no study under the condition that the number of nodes changes during the battle evolution process. Therefore, by modeling the combat resources as multi-agent network nodes [18], this paper studies a complicated operational process that integrates different kinds of combat resources with the accompanying detecting, jamming and attacking, and overcomes the problem of the dynamically changing numbers and positions of the nodes in the operational process. A profit model is developed for both the offensive and defensive sides under the confrontation game, and a distributed virtual learning game strategy is proposed for solving the mixed strategy Nash equilibrium (MSNE) of the n-person, n-strategy system countermeasure game using this model.

2.Modeling process of the multi-agent game

The integrated air defense system (IADS) is an integrated operational process based on the network, which consists of three communication subnets, namely, the early warning subnet, the command and control subnet, and the intercepting operation subnet. Therefore, it has the characteristics of interconnection and interoperability.

Aiming at solving the complicated problem of suppressing the IADS, we investigate this issue by developing a multi-agent countermeasure network system model.

In a game, there are three essential factors: participants, actions (or strategies) and payoffs. Based on these three factors, the game is established as follows.

2.1 Participants and strategies of the game

We consider two multi-agent countermeasure network systems which participate in the dynamic game and consist of n nodes and m nodes, respectively.

As shown in Fig. 1, these two systems are denoted as N and M, which are the attacking-side fighters and the IADS defending-side fighters, respectively. The countermeasure purpose of this game is summarized as follows: the attacking individuals will always try to suppress the defending side, for example by weakening its scope of counteraction or its damage capacity, striking the defending individuals, and making them lose their confrontation capability thoroughly, while ensuring their own safety at the same time. In order to protect their own safety and minimize their losses, the defending individuals will try to counter-attack, jam or detect the attacking side.

Fig.1 Two multi-agent countermeasure network systems

In this game, each side has n or m agent participants. Denote the agent entity nodes X={X1,X2,...,Xn} as the attacking side, and R={R1,R2,...,Rr}, S={S1,S2,...,Ss}, T={T1,T2,...,Tz} as the early warning nodes, the command and control nodes, and the intercepting operation nodes of the defending side, respectively, where r, s and z are the numbers of early warning nodes, command and control nodes, and intercepting operation nodes, and r+s+z=m. Let the attributes of the entity nodes be: the detecting scope with radius A, the offensive scope with radius B, the damage capacity C, the communication capacity D, the speed V, the value of the entity node P, the detecting probability $P_{dj}$, and the attacking probability $P_{kj}$. Assume that the information about the nodes of both sides in this game is completely known by the opposition, which means the numbers and properties of all nodes are known by the opposition.
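To make the node model concrete, here is a minimal sketch that encodes the entity-node attributes listed above as a Python data class; the class name, field layout and the destroy helper are illustrative assumptions rather than part of the paper.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One entity node of either countermeasure network (illustrative)."""
    A: float   # detecting scope (radius), km
    B: float   # offensive scope (radius), km
    C: float   # damage capacity
    D: float   # communication capacity
    V: float   # speed, km/h
    P: float   # economic/strategic value of the node
    Pd: float  # detecting probability
    Pk: float  # attacking probability
    x: float = 0.0  # position, used to compute the distance d_ij
    y: float = 0.0

    def destroy(self) -> None:
        """A destroyed node loses its opposing function (Section 2.2)."""
        self.A = self.B = self.C = self.D = 0.0
```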

Let $G=\{O,E,F\}$ be the strategy action space of the attacking side and $G'=\{O',E',F'\}$ be the strategy action space of the defending side, where $E$, $F$ and $O$ represent three kinds of strategic actions: jamming, attacking and no action, respectively. In each round of the game, every multi-agent node chooses one kind of strategic action $g_i\in G$ or $g_j'\in G'$, so as to increase the threat to the opposition and preserve the value of the actor, as well as to maximize the actor's expected payoff. In addition, the probabilities of the strategies satisfy the constraints $g_{G,O}+g_{G,E}+g_{G,F}=1$ and $g'_{G',O'}+g'_{G',E'}+g'_{G',F'}=1$.

2.2 Payoffs of the game

Referring to [13], the nodes of the multi-agent countermeasure systems take three performance indicators into account when choosing a strategy: (i) the estimated value of the comprehensive threat to the opposition; (ii) the estimated value of the nodes; (iii) the estimated influence of the mission's cost on the whole system.

(i) Considering the estimated value of the comprehensive threat to the opposition

We estimate the comprehensive threat to the opposition according to four aspects: the detecting scope $A_{ji}$, the offensive scope $B_j$ of node j, the damage capacity $C_j$, and the communication ability $D_j$. The comprehensive threat value is calculated node by node.

Because the units differ, we normalize $A_{ji}$, $B_j$, $C_j$ and $D_j$; the normalized values, denoted $\bar{A}_{ji}$, $\bar{B}_j$, $\bar{C}_j$ and $\bar{D}_j$ respectively, all lie in the interval $[0,1]$.

Suppose that the detecting domain and the offensive domain are sectorial or circular areas with radii $A_j$ and $B_j$, respectively. For the sake of brevity, suppose that each multi-agent is capable of omni-directional and multi-frequency jamming. When the node does not encounter jamming, the detect threat $A_{ji}$ is as shown in Fig. 2, and it is calculated from the detection radius $A_j$ and the detection angle $\alpha_j$. Otherwise, let $d_{ij}$ denote the distance between node i and node j, computed from their coordinates. The detect threat decreases as the jamming effect increases. Note that, empirically, the jamming effect is in inverse proportion to $d_{ij}$; as a result, the detect threat decreases as $d_{ij}$ is reduced. Therefore, $A_{ji}$ can be modeled as a function of $d_{ij}$, as shown in Fig. 3. If $d_{ij}>A_i+A_j$, the node will be neither detected nor jammed, and the detecting threat achieves its maximum. As the distance $d_{ij}$ decreases, the jamming effect increases and the detecting threat decreases.

Fig.2 Detect threat A_ji without encountering jamming

Fig.3 The relationship curve of A_ji and d_ij
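As a rough illustration of the behavior in Fig. 2 and Fig. 3, the sketch below treats the detect threat as maximal once $d_{ij}>A_i+A_j$ and decaying as the distance shrinks and jamming strengthens; since the paper gives the curve only graphically, the linear decay profile and the min-max normalization helper are assumptions made for illustration.

```python
def normalize(x: float, x_min: float, x_max: float) -> float:
    """Min-max normalization to [0, 1]; the paper only states that the
    normalized quantities lie in [0, 1], so this is one common choice."""
    return (x - x_min) / (x_max - x_min) if x_max > x_min else 1.0

def detect_threat(d_ij: float, A_i: float, A_j: float, a_max: float) -> float:
    """Detect threat A_ji of node j against node i (illustrative).

    Beyond A_i + A_j the node is neither detected nor jammed, so the
    threat stays at its maximum (Fig. 3); inside that range the jamming
    effect grows as d_ij shrinks, so the threat decays. A linear decay
    profile is assumed here purely for illustration.
    """
    if d_ij > A_i + A_j:
        return a_max
    return a_max * d_ij / (A_i + A_j)
```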

Similarly, if $d_{ij}>B_i$ or $d_{ij}>B_j$, the node will not be attacked. The communication ability is defined as $D_j=\sum_m d_{mj}w_{mj}$, where the element $d_{mj}$ of the adjacency matrix represents the link status between nodes, and $w_{mj}$ represents the weight of the connection between two neighboring nodes; the larger $w_{mj}$ is, the more important the communication link is. $P_{dj}$ and $P_{kj}$ are the detecting probability and the attacking probability, respectively. In a round of the game, a node may be attacked by one or more nodes, and the total attacking probability it suffers is $1-\prod_j(1-P_{kj})$. If this probability is larger than $C_{\text{threshold}}$, the node is regarded as a destroyed node. A destroyed node loses its opposing function: $A_j=B_j=C_j=D_j=0$, $F_{ji}=0$. $\lambda_1,\lambda_2,\lambda_3,\lambda_4$ are the weights of the four threat components and satisfy $\lambda_1+\lambda_2+\lambda_3+\lambda_4=1$, so that the comprehensive threat of node j to node i is $F_{ji}=\lambda_1\bar{A}_{ji}+\lambda_2\bar{B}_j+\lambda_3\bar{C}_j+\lambda_4\bar{D}_j$.

In addition, three-dimensional vectors are set for the nodes of N and M to store the decision-making vector at a certain time. When a multi-agent node chooses jamming, attacking or no action, its decision-making vector is [0,1,0], [0,0,1] or [1,0,0], respectively.
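A minimal sketch tying these pieces of Section 2.2 together: the one-hot decision vectors for the three strategic actions, the weighted comprehensive threat, and the destruction test against $C_{\text{threshold}}$. The weighted-sum and product forms follow the definitions above; the function and variable names are assumptions.

```python
import numpy as np

ACTION_VECTOR = {            # decision-making vectors of Section 2.2
    "no_action": np.array([1, 0, 0]),
    "jamming":   np.array([0, 1, 0]),
    "attacking": np.array([0, 0, 1]),
}

def comprehensive_threat(a_ji: float, b_j: float, c_j: float, d_j: float,
                         lam: tuple) -> float:
    """Weighted comprehensive threat F_ji of node j to node i, with all
    four inputs already normalized to [0, 1] and sum(lam) == 1."""
    return lam[0]*a_ji + lam[1]*b_j + lam[2]*c_j + lam[3]*d_j

def total_attack_probability(pk_list) -> float:
    """1 - prod(1 - Pk) over all nodes attacking this node in a round."""
    p = 1.0
    for pk in pk_list:
        p *= 1.0 - pk
    return 1.0 - p

def is_destroyed(pk_list, c_threshold: float) -> bool:
    """A node is regarded as destroyed when the total attacking
    probability it suffers exceeds C_threshold."""
    return total_attack_probability(pk_list) > c_threshold
```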

(ii) Considering the value of nodes

Suppose that the economic and strategic value of node i is estimated as $P_i$ and that of node j as $P_j$. This type of value is determined by the node's own economic value and strategic position. Then the total value of the whole system is the sum of its node values, $\sum_i P_i$ for N and $\sum_j P_j$ for M, where $P_i\in[0,1]$ and $P_j\in[0,1]$ are the economic and strategic values of node i and node j, respectively.

(iii) Considering the influence of the mission's cost on the whole system

Here, we mainly consider the influence of a destroyed node on the mission's cost for the whole system. Following [13], (2) calculates the probability of damage from the opposition nodes:

$$P(X)=1-\prod_{j\in M}\bigl(1-P_{dj}(X)P_{kj}(X)\bigr).\qquad(2)$$

It is worth mentioning that this damage probability will be reduced if some opposition nodes are destroyed. The reduction of this probability caused by the destroyed nodes is viewed as the total influence on the mission's cost for the whole system, as shown below.

Suppose that node l is destroyed; then $P_{dl}(X)=P_{kl}(X)=0$. According to [13], the above probability is then calculated by

$$P'(X)=1-\prod_{j\in M\setminus\{l\}}\bigl(1-P_{dj}(X)P_{kj}(X)\bigr).\qquad(3)$$

To sum up, because node l is destroyed, letting $\Delta$ represent the variation value, the total influence on the whole system is regarded as $\Delta P=P(X)-P'(X)$.
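A small sketch of the computation just described, assuming the product-form damage probability reconstructed in (2) and (3): destroying node l zeroes its $P_{dl}$ and $P_{kl}$, and the resulting drop in the damage probability is taken as the influence $\Delta$.

```python
def damage_probability(pd: list, pk: list) -> float:
    """P(X) = 1 - prod_j (1 - Pd_j * Pk_j): probability that target X
    is damaged by the opposition system, cf. the reconstructed (2)."""
    p = 1.0
    for pd_j, pk_j in zip(pd, pk):
        p *= 1.0 - pd_j * pk_j
    return 1.0 - p

def influence_of_destroying(pd: list, pk: list, l: int) -> float:
    """Total influence of destroying node l on the whole system: the
    drop in damage probability after setting Pd_l = Pk_l = 0, cf. (3)."""
    pd2, pk2 = list(pd), list(pk)
    pd2[l] = pk2[l] = 0.0
    return damage_probability(pd, pk) - damage_probability(pd2, pk2)
```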

(iv) Considering the payoffs function

Based on the above three indicators, the payoff function of each node is constructed from a scenario analysis of the confrontation. The scenario analysis and assumptions for this game are summarized as follows:

(i) The jamming strategy can be selected only under the condition $d_{ij}\le A_i+A_j$.

(ii) The attacking strategy of the attacking side can be selected only under the conditions $\exists\, d_{nj}<A_n\ (n\in N)$ and $d_{ij}\le B_i$, since the target node must be detected by some node in N and must lie within the offensive scope of node i.

(iii) The attacking strategy of the defending side can be selected only under the conditions $\exists\, d_{im}<A_m\ (m\in M)$ and $d_{ij}\le B_j$.

(iv) When the strategy action is F, some nodes of the opposition may be destroyed, provided the total attacking probability they suffer is larger than $C_{\text{threshold}}$. Accordingly, the total influence of the destroyed nodes on the whole system should be considered, as should the cost of conducting the attack action and the resulting decline in damage capacity. When a player chooses no action or the jamming strategy, we suppose the cost is so small that it can be ignored in this game. Define the cost of performing an attacking action as $c_j$ and denote by $\bar{c}_j$ its normalized value, where $c_j$ is the amount by which the damage capacity declines in each attack; in practice, $c_j$ reflects the weapons and firepower expended in attacking. Let $k_j$ be the number of attacks that have occurred. If an attack happens, the declined damage capacity is calculated as $p_j=k_j\bar{c}_j$, and the reduction of the total damage capacity is $\sum_m p_m$.

(v) The variation of the comprehensive threat $\Delta F_j$ and the variation of the value of the opposition nodes $\Delta P_j$ should be considered under any strategy action. When the strategy action is O, both $\Delta F_j$ and $\Delta P_j$ are 0. When the strategy action is E, $\Delta P_j$ is 0.

(vi) If the distance between two nodes is smaller than the offensive scope of some node of the opposition, the node will be attacked by the opposition, and the total attacking probability it suffers is $1-\prod_j(1-P_{kj})$. Thus the node's comprehensive threat and value are both reduced, and the reduction is $\bigl(1-\prod_j(1-P_{kj})\bigr)(P_i+F_i)$.

Define $V_i(g_i,\Pi_{-i},\Phi)$ as the payoff of node i in N. Let $\Delta$ represent the variation value; thus $\Delta F_j$ denotes the variation of the estimated comprehensive threat of node j, and so on. Then, based on the above scenario analysis and assumptions, there are four cases for the payoff of each node of N. Take node i of N as an example.

Case 1 When the strategy action of node i is selected as F and the distance between nodes i and j is larger than the offensive scope of every node of M, the payoff of node i is $V_i(g_i,\Pi_{-i},\Phi)=q_1(\Delta F_j+P_{aj}F_{M\setminus\{j\}})+q_2\Delta P_j-q_3p_i$, which consists of three parts: the variation of the estimated comprehensive threat $q_1(\Delta F_j+P_{aj}F_{M\setminus\{j\}})$, the variation of the node value $q_2\Delta P_j$, and the declined damage capacity $q_3p_i$, with $q_1$, $q_2$, $q_3$ being weight coefficients.

Case 2 When the strategy action of node i is not F and the distance between nodes i and j is larger than the offensive scope of every node of M, the payoff of node i is $V_i(g_i,\Pi_{-i},\Phi)=q_1\Delta F_j+q_2\Delta P_j$, which consists of two parts: the variation of the estimated comprehensive threat $q_1\Delta F_j$ and the variation of the node value $q_2\Delta P_j$.

Case 3 When the strategy action of node i is selected as F and there exists a node j in M whose offensive scope is larger than the distance between nodes i and j, the payoff of node i is

$$V_i(g_i,\Pi_{-i},\Phi)=q_1(\Delta F_j+P_{aj}F_{M\setminus\{j\}})+q_2\Delta P_j-q_3p_i-q_4\Bigl(1-\prod_{j}(1-P_{kj})\Bigr)(P_i+F_i),$$

which consists of the following parts: the variation of the estimated comprehensive threat $q_1(\Delta F_j+P_{aj}F_{M\setminus\{j\}})$, the variation of the node value $q_2\Delta P_j$, the declined damage capacity $q_3p_i$, and the reduction of the comprehensive threat and value of node i caused by the total attacking probability it suffers, $q_4\bigl(1-\prod_j(1-P_{kj})\bigr)(P_i+F_i)$.

Case 4 When the strategy action of node i is not F and there exists a node j in M whose offensive scope is larger than the distance between nodes i and j, the payoff of node i is

$$V_i(g_i,\Pi_{-i},\Phi)=q_1\Delta F_j+q_2\Delta P_j-q_4\Bigl(1-\prod_{j}(1-P_{kj})\Bigr)(P_i+F_i),$$

which consists of three parts: the variation of the estimated comprehensive threat $q_1\Delta F_j$, the variation of the node value $q_2\Delta P_j$, and the reduction of the comprehensive threat and value of node i caused by the total attacking probability it suffers, $q_4\bigl(1-\prod_j(1-P_{kj})\bigr)(P_i+F_i)$.

In conclusion, we construct the payoff function of node i in N as (6), and each node i tries to optimize this payoff function.
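The four payoff cases above collapse into two independent tests (did node i attack, and is it inside some defender's offensive scope?). The sketch below mirrors that structure for a node of N; the signature is hypothetical, the inputs $\Delta F_j$, $\Delta P_j$, $p_i$ and the hit probability are assumed precomputed, and the default weights are the Section 4 simulation settings for the attacking side.

```python
def payoff_attacker(attacked: bool, in_enemy_range: bool,
                    d_F_j: float, d_P_j: float, F_rest: float,
                    P_aj: float, p_i: float, p_hit: float,
                    P_i: float, F_i: float,
                    q=(0.35, 0.40, 0.10, 0.15)) -> float:
    """Payoff V_i of an attacking-side node, mirroring Cases 1-4.

    attacked       -- node i selected the attacking action F
    in_enemy_range -- d_ij lies within the offensive scope of a node of M
    p_hit          -- total attacking probability suffered by node i,
                      i.e. 1 - prod_j (1 - Pk_j)
    """
    q1, q2, q3, q4 = q
    v = q2 * d_P_j
    if attacked:                                   # Cases 1 and 3
        v += q1 * (d_F_j + P_aj * F_rest) - q3 * p_i
    else:                                          # Cases 2 and 4
        v += q1 * d_F_j
    if in_enemy_range:                             # Cases 3 and 4
        v -= q4 * p_hit * (P_i + F_i)
    return v
```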

Define $U_j(g_j',\Phi_{-j},\Pi)$ as the payoff of node j in M. There are likewise four cases for the payoff of each node of M. By an analysis similar to that for node i in N, we can construct the payoff function of node j in M as (7), where $q_1,q_2,q_3,q_4$ ($q_1+q_2+q_3+q_4=1$) are the weight coefficients, and each node j tries to optimize this utility function.

3.Analyzing the evolution of the game

As is known from game theory, this game model is a finite non-zero-sum mixed game. According to the MSNE existence theorem, an MSNE exists for any finite game [19]. In order to reach the MSNE, a distributed virtual policy learning algorithm is proposed. At the MSNE of the game, the attacking side and the defending side each choose their own strategies with certain stable probabilities.

When a game reaches an MSNE, no player can gain by unilaterally changing its strategy; in other words, no player has an incentive to change its strategy actions.

3.1 MSNE

In this game, the strategy is selected from the action strategy space, and it is a selection among multiple choices rather than the usual binary choice. Thus, in this paper, we give an extended definition of the MSNE as follows.

The mixed strategy vectors are described as follows. Consider that node $i\in N$ chooses strategy $g_i\in G$ with probability $\pi_{i,g_i}$. Let $\Pi_i=\{\pi_{i,g_i}\mid g_i\in G\}$ be the mixed strategy vector over all possible strategies, with $\pi_{i,g_i}\in\mathbb{R}$ and $\sum_{g_i\in G}\pi_{i,g_i}=1$, and denote $\Pi_{-i}=\{\Pi_{i'},i'\in N\setminus\{i\}\}$. Similarly, node $j\in M$ chooses strategy $g_j'\in G'$ with probability $\varphi_{j,g_j'}$; let $\Phi_j=\{\varphi_{j,g_j'}\mid g_j'\in G'\}$ be the mixed strategy vector over all possible strategies, with $\sum_{g_j'\in G'}\varphi_{j,g_j'}=1$, and denote $\Phi_{-j}=\{\Phi_{j'},j'\in M\setminus\{j\}\}$.

Definition (MSNE for the n-person and n-strategy game) Suppose the mixed game has N+M players and $\{G,G'\}$ is the strategy space. Then $(\Pi^*,\Phi^*)$ is an MSNE if and only if, for every node $i\in N$ and every node $j\in M$,

$$V_i(\Pi_i^*,\Pi_{-i}^*,\Phi^*)\ge V_i(\Pi_i,\Pi_{-i}^*,\Phi^*)\quad\text{for all }\Pi_i,$$
$$U_j(\Phi_j^*,\Phi_{-j}^*,\Pi^*)\ge U_j(\Phi_j,\Phi_{-j}^*,\Pi^*)\quad\text{for all }\Phi_j.$$

Take node $i\in N$ into consideration: its expected payoff is calculated by (8). Analogously, for node $j\in M$, the expected payoff is calculated by (9), where $E_{\pi,\varphi}$ denotes the expectation with respect to the probability distribution $\{\Pi,\Phi\}$.

3.2 Distributed virtual policy learning algorithm

To optimize the expected payoffs, at each learning time t the attacking node i and the defending node j choose a pure strategy $g_i^t\in G$ or $g_j'^t\in G'$, respectively, as the best response to the other players' mixed strategies. Let $\Pi_{-i}^{t-1}$ denote the mixed strategies of the nodes $i'\in N\setminus\{i\}$ at time $t-1$, and $\Phi^{t-1}$ denote the mixed strategies of the nodes $j\in M$ at time $t-1$; define $\Phi_{-j}^{t-1}$ and $\Pi^{t-1}$ similarly. As a consequence, $g_i^t$ and $g_j'^t$ can be expressed as

$$g_i^t=\arg\max_{g_i\in G}V_i(g_i,\Pi_{-i}^{t-1},\Phi^{t-1}),\qquad g_j'^t=\arg\max_{g_j'\in G'}U_j(g_j',\Phi_{-j}^{t-1},\Pi^{t-1}).$$

If the attacking node is in motion, then the faster it moves, the smaller the threat it suffers. Set $V_i\in[V_{\min},V_{\max}]$, where the direction of $V_i$ is determined by the strategy target. The threat coefficient associated with $V_i$ is taken as $\lambda_{v_i}=(V_{\max}-V_i)/(V_{\max}-V_{\min})$ for $V_{\max}\ne V_{\min}$ and $\lambda_{v_i}=1$ for $V_{\max}=V_{\min}$. Overall, the total threat encountered by the whole system of the attacking side is $F_N=\sum_{i\in N}\lambda_{v_i}F_i$. Since speed has little effect on the defending side, the total threat encountered by the whole system of the defending side is $F_M=\sum_{j\in M}F_j$.

According to [20], the strategies are updated by (12) and (13): each step's update is a linear combination of the last mixed strategy and the deviation toward the cumulative best-response strategy. The learning process keeps iterating until the change falls below the convergence precision. Then the MSNE is found, and both sides take actions with stable probabilities.
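To make the update rule concrete, here is a minimal fictitious-play-style sketch of (12) and (13): each node best-responds to the others' current mixed strategies and then moves its own mixed strategy a step toward that pure best response. The step size $1/(t+1)$, the convergence test and the `payoff_fn` interface are common choices assumed here, since only the qualitative form of the update is stated above.

```python
import numpy as np

def one_hot(k: int, n: int = 3) -> np.ndarray:
    v = np.zeros(n)
    v[k] = 1.0
    return v

def learn(payoff_fn, Pi, Phi, eps: float = 1e-4, t_max: int = 10_000):
    """Distributed virtual policy learning (fictitious-play sketch).

    Pi, Phi   -- numpy arrays of mixed strategies, one row of length 3
                 per node of N and M, respectively
    payoff_fn -- payoff_fn(side, i, action, Pi, Phi) -> expected payoff
                 of pure `action` for node i against the others' mixes
    """
    for t in range(1, t_max + 1):
        delta = 0.0
        for side, S in (("N", Pi), ("M", Phi)):
            for i in range(len(S)):
                # pure-strategy best response to the others' mixes
                best = max(range(3),
                           key=lambda a: payoff_fn(side, i, a, Pi, Phi))
                new = S[i] + (one_hot(best) - S[i]) / (t + 1)  # (12)/(13)
                delta = max(delta, np.abs(new - S[i]).max())
                S[i] = new
        if delta < eps:   # convergence precision reached: MSNE found
            break
    return Pi, Phi
```

At the fixed point of this iteration the empirical strategy frequencies stabilize, which is the stable-probability behaviour described above.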

The distributed virtual policy learning algorithm for the n-person and n-strategy game is described in Fig. 4.

Fig.4 The flow diagram of the algorithm

4.Experiment and result

In this simulation, the nodes N and M of the two multi-agent countermeasure systems are distributed randomly in a space of 1 000 km×1 000 km. Assume that some nodes are marked as important command and control nodes and are protected by other nodes. The initial strategy probability vectors are assumed to be [0.33, 0.33, 0.34] and [0.5, 0, 0.5]; the weight coefficients are set as q1=0.35, q2=0.4, q3=0.1, q4=0.15 for the nodes of N and q1=0.4, q2=0.35, q3=0.1, q4=0.15 for the nodes of M; the convergence precision is assumed to be 0.000 1; the threshold value C_threshold is set as 0.7. α_j is randomly set as 45°, 60°, 90°, 180° or 360°. The adjacency matrix (d_mj) is set as an all-ones matrix, i.e., the initial communication network is fully connected. The weight of connection w_mj is assumed to be generated randomly between 0 and 1.

To further demonstrate the effectiveness of the proposed method and algorithm, two groups of experimental data are considered as follows.

The first group of experimental data for the attack-defense game consists of three nodes of suppressing fighters and ten nodes of the IADS, namely, two command and control nodes, three intercepting operation nodes and five early warning nodes.

The second group of experimental data for the attack-defense game consists of five nodes of suppressing fighters and fifteen nodes of the IADS, namely, three command and control nodes, five intercepting operation nodes and seven early warning nodes.

Other parameters used in the simulation are generated randomly from the ranges shown in Table 1. With the same parameters, we compare the distributed virtual policy learning algorithm designed in this paper with the traditional adjacent algorithm, and analyze the evolution process of the offensive and defensive game (the experimental data are sampled every ten iterations to draw the results).

Table 1 Stochastic assignment of related parameters

Parameter | Range
Detecting domain A/km | (0, 100)
Offensive domain B/km | (5, 30)
Damage capacity C | (10, 100)
Value Pc/(10^4 CNY) | (10, 1 000)
Speed V/(km/h) | (100, 1 000)
Weight of connection wmj | (0, 1)
Detecting probability Pd | (0.5, 0.99)
Attacking probability Pk | (0.5, 0.99)
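For reproducibility, here is a sketch of how the stochastic parameters of Table 1 can be drawn; uniform sampling over each range is an assumption, since the paper says only that the parameters are generated randomly from the ranges.

```python
import random

RANGES = {                 # Table 1
    "A":  (0, 100),        # detecting domain, km
    "B":  (5, 30),         # offensive domain, km
    "C":  (10, 100),       # damage capacity
    "P":  (10, 1000),      # value, 10^4 CNY
    "V":  (100, 1000),     # speed, km/h
    "w":  (0, 1),          # weight of connection
    "Pd": (0.5, 0.99),     # detecting probability
    "Pk": (0.5, 0.99),     # attacking probability
}

def sample_parameters(rng=random) -> dict:
    """Draw one node's parameters uniformly from the Table 1 ranges."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
```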

Fig. 5 and Fig. 6 show that the nodes adaptively adjust their strategies and optimize the payoff function by using the distributed virtual policy learning algorithm in this complex interaction of the attack-defense countermeasure game. Repeated experiments show that the presented algorithm has a clear advantage for the problem of suppressing the IADS with multiple fighters as the number of nodes and iterations increases, even though it does not show an obvious improvement over the traditional adjacent algorithm when the number of nodes and iterations is small. In Fig. 5(c), there is a jump in the simulated average expected payoffs because a suppressing fighter successfully destroys an important command and control node of the IADS between t=18 and t=19. Moreover, the number of iterations for the two game algorithms to achieve the MSNE may not be the same. Table 2 shows the average of the nodes' average expected payoffs and of the nodes' total expected payoffs at the MSNE point over 30 experiments; in each experiment, the parameter values are generated randomly for both groups of experiments as shown in Table 1. Taking the attacking side as an example, it can be seen from Fig. 5 and Table 2 that the presented algorithm outperforms the traditional adjacent algorithm in terms of both the nodes' average expected payoffs and the nodes' total expected payoffs when the total number of nodes is either 13 or 20, since the target node is always selected according to the maximum expected payoff. For 13 and 20 total nodes, the average of the nodes' average expected payoffs is increased by 21.94% and 35.84%, respectively, and the average of the nodes' total expected payoffs is increased by 20.68% and 27.13%, respectively. The simulation results indicate that the proposed distributed virtual policy learning algorithm can significantly improve battle effectiveness in suppressing an IADS with multiple fighters.

Table 2 Average of the nodes' average expected payoffs and the nodes' total expected payoffs at the MSNE point over 30 experiments

Number of nodes | Nodes' average expected payoffs (adjacent algorithm, virtual learning) | Nodes' total expected payoffs (adjacent algorithm, virtual learning)
n=3, m=10 | (0.023 7, 0.028 9) | (17.99, 21.71)
n=5, m=15 | (0.017 3, 0.023 5) | (21.34, 27.13)

Fig.5 Mean expected payoff and total expected payoff of the nodes of the suppressing side

Fig.6 Track of the third node of the suppressing side when n=3,m=10

Furthermore, employing the presented algorithm, the fighter fleet has adaptive and self-optimization abilities on the dynamic battlefield, and it can automatically carry out target assignment, route planning and strategy selection, taking advantage of the collaborative function in executing the task.

5.Conclusions

By modeling the combat resources as multi-agent network nodes, a complicated operational process integrating different kinds of combat resources with the accompanying detecting, jamming and attacking is studied. The dimension of the payoff matrix of every node is n×3 or m×3 at a certain time slot t, the dimension of the payoff matrix space of all nodes is n×m×3, and the dimension of the payoff matrix space of all nodes over all time slots is 3t×n×m. Employing the distributed virtual policy learning algorithm to simulate the evolution of this game, the appropriate strategy is successfully chosen from this large payoff matrix space for playing the game. The experimental results show that the designed distributed virtual policy learning algorithm solves the problem of suppressing the IADS by multiple fighters' cooperation very well, and the fighter fleet can plan its missions dynamically according to the battlefield situation. For actual combat, the designed algorithm is more effective than the traditional adjacent algorithm, and hence it can considerably decrease the damage to combat fighters in offensive operations. To some extent, designing the payoff functions appropriately from reconnaissance information, together with the stability of the equilibrium solution, makes it possible to predict the strategies of the enemy and obtain the optimal combination strategy.

References

[1] Honeywell Technology Center. Multi-agent self-adaptive CIRCA. [2016-10-10]. http://www.htc.honeywell.com/projects/ants/6-00-quadcharts.ppt.

[2] CHANDLER P R, PACHTER M. Research issues in autonomous control of tactical UAVs. Proc. of the American Control Conference, 1998: 394–398.

[3] JOHNSON C L. Inverting the control ratio: human control of large autonomous teams. Proc. of the International Conference on Autonomous Agents and Multi-Agent Systems, 2003: 458–465.

[4] COMETS project official web page. [2016-10-10]. http://www.comets-uavs.org.

[5] BEARD R W, MCLAIN T W, NELSON D D, et al. Decentralized cooperative aerial surveillance using fixed-wing miniature UAVs. Proceedings of the IEEE, 2006, 94(7): 1306–1324.

[6] WANG G, GUO L, DUAN H. A hybrid metaheuristic DE/CS algorithm for UCAV three-dimension path planning. The Scientific World Journal, 2012: 583973.

[7] ZHANG L, SUN Z J, WANG D B. An improved Voronoi diagram for suppression of enemy air defense. Journal of National University of Defense Technology, 2010, 32(3): 121–125.

[8] RASMUSSEN S, CHANDLER P R. Optimal vs. heuristic assignment of cooperative autonomous unmanned air vehicles. Proc. of the AIAA Guidance, Navigation, and Control Conference, 2003: 5586–5597.

[9] CHEN J, ZHA W Z, PENG Z H, et al. Cooperative area reconnaissance for multi-UAV in dynamic environment. Proc. of the 9th IEEE Asian Control Conference, 2013: 1299–1304.

[10] WU Q P, ZHOU S L, YAN S. A cooperative region surveillance strategy for multiple UAVs. Proc. of the IEEE Chinese Guidance, Navigation and Control Conference, 2014: 1744–1748.

[11] HAQUE M, EGERSTEDT M. Multilevel coalition formation strategy for suppression of enemy air defenses missions. Journal of Aerospace Information Systems, 2013, 10(6): 287–296.

[12] POLAT C, IBRAHIM K, ABDULLAH S, et al. The small and silent force multiplier: a swarm UAV—electronic attack. Journal of Intelligent & Robotic Systems, 2013, 70(12): 595–608.

[13] SU F. Research on distributed online cooperative mission planning for multiple unmanned combat aerial vehicles in dynamic environment. Changsha, China: National University of Defense Technology, 2013. (in Chinese)

[14] DAS S K. Modeling intelligent decision-making command and control agents: an application to air defense. IEEE Intelligent Systems, 2014, 29(5).

[15] ERNEST N, COHEN K. Fuzzy logic based intelligent agents for unmanned combat aerial vehicle control. Journal of Defense Management, 2015, 6(1): 1–3.

[16] YOO D W, LEE C H, TAHK M J, et al. Optimal resource management algorithm for unmanned aerial vehicle missions in hostile territories. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, 2013, 228(12): 2157–2167.

[17] CHEN X, LIU M, HU Y X. Study on UAV offensive/defensive game strategy based on uncertain information. Acta Armamentarii, 2012, 33(12): 1510–1515.

[18] JIN Y, LIU J Y, LI H W, et al. The research on the autonomous power balance framework for distribution network based on multi-agent modeling. Proc. of the International Conference on Power System Technology, 2014: 20–22.

[19] MYERSON R B. Game theory: analysis of conflict. Cambridge, MA: Harvard University Press, 1997.

[20] SUN Y Q. The research on key technologies of jamming attacks in wireless sensor networks. Changsha, China: National University of Defense Technology, 2012. (in Chinese)

LI Qiuni, YANG Rennong, LI Haoliang, ZHANG Huan, FENG Chao
Journal of Systems Engineering and Electronics, 2018, No. 2
