01. Introduction
The Bald Eagle Search (BES) algorithm optimizes by mimicking the three-stage hunting behavior of bald eagles catching fish: in the select-space stage the eagle locates a prey-rich region, in the search-space stage it performs fine-grained exploration within that region, and finally it launches a directed swoop based on the best position found. The authors validated BES with a multi-faceted framework of benchmark tests, head-to-head performance comparisons, and statistical evaluation (mean analysis and the Wilcoxon test), showing that BES significantly outperforms classical methods and other metaheuristics in convergence speed and solution accuracy, with particularly strong global search ability and stability on complex optimization problems.
02. Workflow of the Optimization Algorithm
As a swarm-intelligence metaheuristic, the Bald Eagle Search (BES) algorithm follows the typical optimization framework: an initialization phase randomly generates a set of candidate solutions (the eagle population), which then approaches the optimum step by step under bio-inspired behavior rules. The core of the algorithm is a three-stage iterative mechanism. In the select-space stage, fitness evaluation identifies a high-potential search region, providing global exploration. In the search-space stage, the eagles carry out spiral-shaped local exploration within the selected region, balancing breadth and depth of search. In the swoop stage, the eagles accelerate toward the best solution found in the first two stages, completing the exploitation step. Mathematically, each iteration adjusts the candidate solutions' position vectors in the multi-dimensional search space; the position-update formulas combine random perturbation terms with guidance from the historical best solution, coordinating the exploration-exploitation trade-off. The algorithm terminates when a maximum iteration count or a convergence-accuracy threshold is reached, and the global best solution is returned. In short, by simulating the hunting ecology of the bald eagle, BES builds a progressive optimization mechanism that moves from coarse-grained region selection to fine-grained precision hunting.
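Transcribed from the code in Section 04 (with \(r\) a uniform random number in \([0,1]\), \(\vec{r}\) a random vector, \(P_{mean}\) the population mean position, and \(x(i), y(i)\) the normalized spiral coordinates), the three stage updates are:

```latex
% Stage 1: select space (\alpha = 2 in the code)
P_i^{new} = P_{best} + \alpha \, r \, (P_{mean} - P_i)

% Stage 2: search in space (spiral exploration)
P_i^{new} = P_i + y(i)\,(P_i - P_{i+1}) + x(i)\,(P_i - P_{mean})

% Stage 3: swoop
P_i^{new} = \vec{r} \odot P_{best} + x(i)\,(P_i - 2P_{mean}) + y(i)\,(P_i - 2P_{best})
```

In every stage the candidate is clipped to the bounds \([low, high]\) and accepted greedily, i.e. only if it improves the objective value.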
Its pseudocode is as follows:
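The same three-stage loop can also be sketched in pure Python (an illustrative port, not the reference implementation: the spiral weights are reused between stages 2 and 3, and the neighbour index wraps around for brevity):

```python
import math
import random

def bes_sketch(fobj, dim, low, high, n_pop=30, max_it=200, seed=0):
    """Simplified Bald Eagle Search loop (illustrative; update rules abridged)."""
    rnd = random.Random(seed)
    pop = [[low + (high - low) * rnd.random() for _ in range(dim)] for _ in range(n_pop)]
    cost = [fobj(p) for p in pop]
    b = min(range(n_pop), key=cost.__getitem__)
    best, best_cost = pop[b][:], cost[b]
    curve = []

    def greedy(cand):
        """Clip candidates to the bounds, accept only improvements."""
        nonlocal best, best_cost
        for i, p in enumerate(cand):
            p = [min(max(c, low), high) for c in p]
            c = fobj(p)
            if c < cost[i]:
                pop[i], cost[i] = p, c
                if c < best_cost:
                    best, best_cost = p[:], c

    for _ in range(max_it):
        mean = [sum(p[d] for p in pop) / n_pop for d in range(dim)]
        # 1) select space: move toward a promising area around the best solution
        greedy([[best[d] + 2 * rnd.random() * (mean[d] - pop[i][d]) for d in range(dim)]
                for i in range(n_pop)])
        # 2) search in space: spiral exploration relative to neighbours and the mean
        xs, ys = [], []
        for _ in range(n_pop):
            th = 10 * math.pi * rnd.random()
            r = th + 1.5 * rnd.random()
            xs.append(r * math.sin(th)); ys.append(r * math.cos(th))
        mx, my = max(map(abs, xs)), max(map(abs, ys))
        xs = [v / mx for v in xs]; ys = [v / my for v in ys]
        greedy([[pop[i][d] + ys[i] * (pop[i][d] - pop[(i + 1) % n_pop][d])
                 + xs[i] * (pop[i][d] - mean[d]) for d in range(dim)]
                for i in range(n_pop)])
        # 3) swoop: accelerate toward the current best
        greedy([[rnd.random() * best[d] + xs[i] * (pop[i][d] - 2 * mean[d])
                 + ys[i] * (pop[i][d] - 2 * best[d]) for d in range(dim)]
                for i in range(n_pop)])
        curve.append(best_cost)
    return best_cost, best, curve
```

Because acceptance is greedy at every stage, the convergence curve is monotonically non-increasing, just as in the MATLAB version.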
03. Algorithm Comparison Figures from the Paper
04. Code Excerpt
function [Best_score,Best_x,cg_curve]=BES(nPop,MaxIt,low,high,dim,fobj)
% nPop : size of population
% MaxIt : number of iterations
% low, high : bounds of the decision variables
% dim : number of decision variables
% fobj : objective function
st=cputime;
% Initialize Best Solution
BestSol.cost = inf;
for i=1:nPop
    pop.pos(i,:) = low + (high-low).*rand(1,dim);   % random initial positions
    pop.cost(i) = fobj(pop.pos(i,:));
    if pop.cost(i) < BestSol.cost
        BestSol.pos = pop.pos(i,:);
        BestSol.cost = pop.cost(i);
    end
end
disp(num2str([0 BestSol.cost]))
for t=1:MaxIt
    %% 1- select space
    [pop, BestSol, s1(t)] = select_space(fobj,pop,nPop,BestSol,low,high,dim);
    %% 2- search in space
    [pop, BestSol, s2(t)] = search_space(fobj,pop,BestSol,nPop,low,high);
    %% 3- swoop
    [pop, BestSol, s3(t)] = swoop(fobj,pop,BestSol,nPop,low,high);
    Convergence_curve(t) = BestSol.cost;
    disp(num2str([t BestSol.cost]))
end
timep = cputime - st;        % elapsed CPU time
Best_score = BestSol.cost;
Best_x = BestSol.pos;
cg_curve = Convergence_curve;
function [pop, BestSol, s1] = select_space(fobj,pop,npop,BestSol,low,high,dim)
Mean = mean(pop.pos);
% Empty structure for individuals
empty_individual.pos = [];
empty_individual.cost = [];
lm = 2;      % scaling factor
s1 = 0;      % number of improved solutions
for i=1:npop
    newsol = empty_individual;
    newsol.pos = BestSol.pos + lm*rand(1,dim).*(Mean - pop.pos(i,:));
    newsol.pos = max(newsol.pos, low);   % clip to bounds
    newsol.pos = min(newsol.pos, high);
    newsol.cost = fobj(newsol.pos);
    if newsol.cost < pop.cost(i)
        pop.pos(i,:) = newsol.pos;
        pop.cost(i) = newsol.cost;
        s1 = s1 + 1;
        if pop.cost(i) < BestSol.cost
            BestSol.pos = pop.pos(i,:);
            BestSol.cost = pop.cost(i);
        end
    end
end
end
function [pop, best, s1] = search_space(fobj,pop,best,npop,low,high)
Mean = mean(pop.pos);
a = 10;      % spiral angle parameter
R = 1.5;     % spiral radius parameter
% Empty structure for individuals
empty_individual.pos = [];
empty_individual.cost = [];
s1 = 0;
for i=1:npop-1
    A = randperm(npop);          % shuffle the population
    pop.pos = pop.pos(A,:);
    pop.cost = pop.cost(A);
    [x, y] = polr(a,R,npop);     % normalized spiral coordinates
    newsol = empty_individual;
    Step = pop.pos(i,:) - pop.pos(i+1,:);
    Step1 = pop.pos(i,:) - Mean;
    newsol.pos = pop.pos(i,:) + y(i)*Step + x(i)*Step1;
    newsol.pos = max(newsol.pos, low);
    newsol.pos = min(newsol.pos, high);
    newsol.cost = fobj(newsol.pos);
    if newsol.cost < pop.cost(i)
        pop.pos(i,:) = newsol.pos;
        pop.cost(i) = newsol.cost;
        s1 = s1 + 1;
        if pop.cost(i) < best.cost
            best.pos = pop.pos(i,:);
            best.cost = pop.cost(i);
        end
    end
end
end
function [pop, best, s1] = swoop(fobj,pop,best,npop,low,high)
Mean = mean(pop.pos);
a = 10;
R = 1.5;
% Empty structure for individuals
empty_individual.pos = [];
empty_individual.cost = [];
s1 = 0;
for i=1:npop
    A = randperm(npop);          % shuffle the population
    pop.pos = pop.pos(A,:);
    pop.cost = pop.cost(A);
    [x, y] = swoo_p(a,R,npop);   % normalized swoop coordinates
    newsol = empty_individual;
    Step = pop.pos(i,:) - 2*Mean;
    Step1 = pop.pos(i,:) - 2*best.pos;
    newsol.pos = rand(1,length(Mean)).*best.pos + x(i)*Step + y(i)*Step1;
    newsol.pos = max(newsol.pos, low);
    newsol.pos = min(newsol.pos, high);
    newsol.cost = fobj(newsol.pos);
    if newsol.cost < pop.cost(i)
        pop.pos(i,:) = newsol.pos;
        pop.cost(i) = newsol.cost;
        s1 = s1 + 1;
        if pop.cost(i) < best.cost
            best.pos = pop.pos(i,:);
            best.cost = pop.cost(i);
        end
    end
end
end
function [xR, yR] = swoo_p(a,R,N)
th = a*pi*exp(rand(N,1));
r = th;                  % R*rand(N,1);
xR = r.*sinh(th);
yR = r.*cosh(th);
xR = xR/max(abs(xR));    % scale into [-1, 1]
yR = yR/max(abs(yR));
end
function [xR, yR] = polr(a,R,N)
% Polar (spiral) point generation
th = a*pi*rand(N,1);
r = th + R*rand(N,1);
xR = r.*sin(th);
yR = r.*cos(th);
xR = xR/max(abs(xR));    % scale into [-1, 1]
yR = yR/max(abs(yR));
end
end
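The two helper generators polr and swoo_p are the only non-obvious pieces: each draws N spiral points and rescales both coordinates into [-1, 1]. A direct pure-Python translation (function names mirror the MATLAB ones; note that in swoo_p the radius is simply th and R is effectively unused, exactly as in the source):

```python
import math
import random

def polr(a, R, N, seed=0):
    """Polar-spiral points for the search stage, scaled into [-1, 1]."""
    rnd = random.Random(seed)
    th = [a * math.pi * rnd.random() for _ in range(N)]   # random angles on [0, a*pi)
    r = [t + R * rnd.random() for t in th]                # radius grows with the angle
    x = [ri * math.sin(ti) for ri, ti in zip(r, th)]
    y = [ri * math.cos(ti) for ri, ti in zip(r, th)]
    mx, my = max(map(abs, x)), max(map(abs, y))
    return [v / mx for v in x], [v / my for v in y]

def swoo_p(a, R, N, seed=0):
    """Hyperbolic-spiral points for the swoop stage, scaled into [-1, 1].

    As in the MATLAB source, r = th and the R argument is unused.
    """
    rnd = random.Random(seed)
    th = [a * math.pi * math.exp(rnd.random()) for _ in range(N)]
    x = [t * math.sinh(t) for t in th]
    y = [t * math.cosh(t) for t in th]
    mx, my = max(map(abs, x)), max(map(abs, y))
    return [v / mx for v in x], [v / my for v in y]
```

The normalization guarantees that x(i) and y(i) act as bounded step weights in the position updates, so the step sizes stay proportional to the distances between eagles rather than to the raw spiral radii.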
05. Output Figures of This Code
To get the code, follow the personal WeChat public account "MATLAB科研小白" (the QR code below the article) and reply "智能优化算". This account is dedicated to solving two problems: code that is hard to find and code that is daunting to write. If you urgently need a particular piece of code, feel free to leave a message in the backend. Research-skills posts are published from time to time; you are welcome to discuss research, writing, literature, code, and other academic questions here, so that we can all improve together.