Mealpy

Latest version: v3.0.1


1.0.1

Change models
+ Added Slime Mould Algorithm (SMA) to bio_based group:
+ OriginalSMA: the original version of SMA
+ BaseSMA: my modified version:
+ Selects 2 unique random solutions to create the new solution (instead of creating each variable separately) --> removes the third loop of the original version
+ Checks the bounds and updates the fitness after each individual moves, instead of after the whole population has moved as in the original version
+ My version is not only faster but also better (a minimal sketch follows below)
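A minimal sketch of the per-individual update described above, assuming minimization; the names (obj_func, lb, ub) and the exact move equation are illustrative placeholders, not mealpy's internals:

```python
import numpy as np

def evolve_per_individual(pop, fitness, lb, ub, obj_func, rng=np.random.default_rng()):
    """For each individual: build a new position from 2 unique random partners,
    clip it to the bounds, and re-evaluate immediately (instead of waiting until
    the whole population has moved)."""
    n, dim = pop.shape
    for i in range(n):
        # pick 2 unique partners different from i
        j, k = rng.choice([x for x in range(n) if x != i], size=2, replace=False)
        # whole-position update (no per-variable third loop)
        new_pos = pop[i] + rng.random(dim) * (pop[j] - pop[k])
        new_pos = np.clip(new_pos, lb, ub)   # bound check right after the move
        new_fit = obj_func(new_pos)          # fitness update right after the move
        if new_fit < fitness[i]:             # keep the better position (minimization)
            pop[i], fitness[i] = new_pos, new_fit
    return pop, fitness
```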

+ Added Spotted Hyena Optimizer (SHO) to swarm_based group:
+ OriginalSHO: my modified version


+ Added a category for questionable algorithms or papers (called fake):
+ Butterfly Optimization Algorithm (BOA) to swarm_based group:
+ OriginalBOA: this algorithm is a made-up one
+ AdaptiveBOA:
+ BaseBOA:
+ Look at the author of this algorithm
+ https://scholar.google.co.in/citations?hl=en&user=KvcHovcAAAAJ&view_op=list_works&sortby=pubdate
+ It is interesting to note that there have been many variant versions of BOA created since 2015, even though the inventor of BOA only published it in 2019. This raises some questions about the origins of these variant algorithms and how they came to be.

+ Sandpiper Optimization Algorithm (SOA) to swarm_based group:
+ OriginalSOA: the original version is a made-up one
+ This algorithm suffers from local optima and a low convergence rate.
+ It cannot update the position, so how can it converge without updating the position?
+ I am curious about the algorithm's publication history, as I have found it submitted to multiple journals.
+ A detailed explanation is in the comment section here:
(https://www.researchgate.net/publication/334897831_Sandpiper_optimization_algorithm_a_novel_approach_for_solving_real-life_engineering_problems/comments)
+ BaseSOA: my modified version which changed some equations and flow.

+ Sooty Tern Optimization Algorithm (STOA) is another name for the Sandpiper Optimization Algorithm (SOA)
+ If you read the papers, you will see the similarity between the two

+ Blue Monkey Optimization (BMO) to swarm_based group:
+ OriginalBMO:
+ It is a made-up algorithm with a similar idea to "Chicken Swarm Optimization," which raises questions about its originality.
+ The pseudo-code is confusing, particularly the "Rate equation," which starts as a random number and then becomes a vector after the first loop.
+ The movements of the blue monkeys and the children use the same equations???
+ The algorithm does not check the bound after updating the position, which can cause issues with the search space.
+ The algorithm does not provide guidance on how to find the global best from the blue monkey group or child group.
+ BaseBMO: my modified version, built using my knowledge of meta-heuristics.


Change others
+ models_history.csv: Update history of meta-heuristic algorithms
+ examples:
+ Updated and added examples for all algorithms
+ All examples tested with CEC benchmarks and large-scale problems (1000 - 2000 dimensions)

---------------------------------------------------------------------

1.0.0

Change models
+ Changed the root model, so all of the algorithms have changed:
+ domain_range -> lower bound and upper bound
+ log -> verbose
+ objective_func -> obj_func
+ batch-size training -> inspired by the idea of batch-size training in the gradient descent algorithm
+ The idea of batch-size training in meta-heuristics:
+ Some algorithms update the global best solution only after all of the individuals in the population have moved to a new position.
+ This is similar to training on the whole dataset in GD.
+ Other algorithms update it after each individual moves to a new position.
+ This is similar to SGD.
+ The point is: if the algorithm doesn't take advantage of the global best solution when updating an individual's
position, then GD and SGD give the same results.

+ So my idea of batch-size training here is very simple: after a batch of individuals has moved, we update
the global best solution (see the sketch after this list). So:
+ batch-size = 1 ==> SGD
+ batch-size = population-size ==> GD
+ batch-size should be set to 10% / 25% / 50% of your population size

+ Some algorithms can't apply the batch-size idea. For example:
+ If the original algorithm has already divided the population into m clans (m groups) --> no need for batch-size here
+ If the original algorithm contains multiple parts, each with several types of updating --> no need either
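A minimal sketch of the batch-size idea, assuming minimization; `move`, `obj_func` and the other names are hypothetical placeholders, not mealpy's internals:

```python
import numpy as np

def evolve_with_batch_updates(pop, fitness, g_best, g_best_fit, obj_func, move, batch_size):
    """Refresh the global best only after every `batch_size` individual moves."""
    for i in range(len(pop)):
        pop[i] = move(pop[i], g_best)        # the move may exploit the current g_best
        fitness[i] = obj_func(pop[i])
        if (i + 1) % batch_size == 0:        # end of a batch: refresh the global best
            j = int(np.argmin(fitness))
            if fitness[j] < g_best_fit:
                g_best, g_best_fit = pop[j].copy(), fitness[j]
    return pop, fitness, g_best, g_best_fit
```

With batch_size = 1 this behaves like SGD, with batch_size = len(pop) like GD, and 10% to 50% of the population size is the middle ground recommended above.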

+ For music_based:
+ BaseHS (HS): is the one that can't use the batch-size idea, but it doesn't fall under either reason above.

+ For math_based:
+ BaseSCA (SCA): Updated with batch-size idea. Keep the original version for reference.

+ For system_based:
+ BaseAEO (AEO): Updated with the batch-size ideas and some of my new ideas. Still keep the original version
+ BaseGCO (GCO): Updated with batch-size idea. Keep the original version

+ For bio_based:
+ BaseIWO (IWO):
+ OriginalWHO (WHO):
+ BaseBBO (BBO):
+ Removed all third loops, making the algorithm n times faster than the original
+ In the migration step, instead of selecting a solution with the roulette wheel for every variable of the position,
the wheel is spun once to select a single position and all variables are updated from that position (see the sketch after this bio_based list)
+ BaseVCS (VCS):
+ Removed all third loops, making the algorithm n times faster than the original
+ In the immune response process, the whole position is updated instead of each variable of the position
+ Applied the batch-size idea to the 3 main processes of this algorithm, making it more robust
+ BaseSBO (SBO):
+ Removed all third loops, n times faster than the original
+ No need for equations (1, 2) in the paper; the probability is calculated by roulette wheel, which can also handle negative values
+ Applied the batch-size idea
+ BaseBWO (BWO): This is my changed version, and it works.
+ Uses k-way tournament selection to select parents instead of random selection
+ Repeats crossover population_size / 2 times instead of n_var / 2 times
+ Mutates 50% of the position's variables instead of swapping only 2 variables in a single position
+ OriginalBWO: is a made-up algorithm and just a variant of the Genetic Algorithm
+ BaseAAA (AAA): This is my changed version but still not working
+ OriginalAAA: is a made-up algorithm taken from DE and CRO
+ I realized that in the original paper the parameters and equations are not clear.
+ In the Adaptation phase, what is the point of saving the starving value when it doesn't affect the solution at all?
+ The size of the solution is always 2/3, so the friction surface always stays at the same value.
+ The idea of the equation seems to be taken from DE, and the adaptation and reproduction processes seem to be taken from CRO.
+ The algorithm appeared in 2015, but as of 2020 there is still no Matlab or Python code for it.
+ EOA:
+ OriginalEOA: my modified version of the original Matlab version
+ The original version from the Matlab code above will not work well, even with small dimensions.
+ I changed the updating process
+ Changed the Cauchy process to use x_mean
+ Used the global best solution
+ Removed the third loop for speed
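A minimal sketch of the "remove the third loop" change described for BaseBBO's migration step, under one illustrative reading (mu/lambd are the usual BBO emigration/immigration rates; the blending rule is an assumption, not mealpy's exact code):

```python
import numpy as np

def roulette_wheel_index(weights, rng):
    """Pick one index with probability proportional to its (non-negative) weight."""
    p = weights / weights.sum()
    return int(rng.choice(len(weights), p=p))

def migrate_whole_position(pop, mu, lambd, rng=np.random.default_rng()):
    """BBO-style migration without the per-variable third loop: the wheel is
    spun once per individual, and all variables are blended with the chosen
    emigrating solution in a single vectorized step."""
    n, dim = pop.shape
    for i in range(n):
        if rng.random() < lambd[i]:                  # immigration test per individual
            j = roulette_wheel_index(mu, rng)        # one wheel spin, not one per variable
            mask = rng.random(dim) < lambd[i]        # which variables immigrate
            pop[i] = np.where(mask, pop[j], pop[i])  # whole-position update
    return pop
```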

+ For human_based:
+ BaseTLO (TLO):
+ Removed all third loops
+ Applied the batch-size idea
+ BSO:
+ OriginalBSO: This is the original version
+ ImprovedBSO: My improved version with levy-flight and removal of some parameters.
+ QSA: 4 variant versions that now run n times faster than the original version
+ BaseQSA: removed all third loops, applied the idea of the global best solution
+ OppoQSA: based on BaseQSA, applies the opposition-based learning technique (see the sketch after this human_based list)
+ LevyQSA: based on BaseQSA, applies levy-flight in business 2
+ ImprovedQSA: Combination of OppoQSA and LevyQSA
+ OriginalQSA: The original version of QSA. Not working well
+ SARO:
+ BaseSARO: my version; not better than the original version, just faster
+ OriginalSARO: convergence rate is better than the base version, but it is much slower in runtime.
+ LCBO:
+ BaseLCBO: is the original version
+ LevyLCBO: uses levy-flight and is the best among the 3 versions
+ ImprovedLCBO:
+ SSDO:
+ OriginalSSDO: This is the original version
+ LevySSDO: applies the idea of levy-flight
+ GSKA:
+ OriginalGSKA: this is the original version; very slow for large-scale problems, with slow convergence
+ BaseGSKA: removed all third loops, changed equations and ideas; faster than the original version
+ CHIO: this algorithm isn't finished yet. Don't use it yet
+ OriginalCHIO: can fail at any time
+ BaseCHIO: can't converge
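A minimal sketch of the opposition-based learning step used by the Oppo* variants above, assuming minimization and box bounds lb/ub (illustrative names, not mealpy's internals):

```python
import numpy as np

def opposition_based_refinement(pop, fitness, lb, ub, obj_func):
    """For each solution x, evaluate its opposite x_op = lb + ub - x and keep
    whichever of the two has the better (lower) fitness."""
    opposite = lb + ub - pop                 # elementwise opposite point
    for i in range(len(pop)):
        fit_op = obj_func(opposite[i])
        if fit_op < fitness[i]:              # minimization: keep the better one
            pop[i], fitness[i] = opposite[i], fit_op
    return pop, fitness
```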


+ For physics_based group:
+ WDO:
+ OriginalWDO: is the original version
+ MVO:
+ OriginalMVO: is a weak and slow algorithm
+ BaseMVO: can solve large-scale optimization problems
+ TWO:
+ OriginalTWO: is the original version
+ OppoTWO: uses opposition-based techniques (better than the original version)
+ LevyTWO: uses only levy-flight and is better than OppoTWO
+ ImprovedTWO: uses both opposition-based learning and levy-flight and is better than all the others
+ EFO:
+ OriginalEFO: is the original version; runs fast but converges slowly
+ BaseEFO: uses levy-flight for large-scale dimensions
+ NRO:
+ OriginalNRO: is the original version; efficient even at large scale thanks to levy-flight techniques,
but the running time is slow because of the third loop.
+ HGSO:
+ OriginalHGSO: is the original version
+ OppoHGSO: uses opposition-based technique
+ LevyHGSO: uses levy-flight technique
+ ASO:
+ OriginalASO: is the original version
+ EO:
+ OriginalEO: is the original version
+ LevyEO: uses the levy-flight technique for large-scale dimensions (see the sketch after this physics_based group)
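A minimal sketch of a levy-flight step (Mantegna's algorithm) as used conceptually by the Levy* variants; the step scale and the way it perturbs a solution relative to the global best are assumptions, not mealpy's exact equations:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    """Mantegna's algorithm for a heavy-tailed, levy-distributed step vector."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def levy_move(position, g_best, step_scale=0.01, rng=np.random.default_rng()):
    """Perturb a solution relative to the global best with a levy step; the heavy
    tail gives occasional long jumps, which helps on large-scale problems."""
    return position + step_scale * levy_step(len(position), rng=rng) * (position - g_best)
```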

+ For probabilistic_based group:
+ CEM:
+ OriginalCEM: is the original version
+ CEBaseSBO: is a hybrid of the Satin Bowerbird Optimizer (SBO) and CEM
+ CEBaseSSDO: is a hybrid of Social Ski-Driver Optimization (SSDO) and CEM
+ CEBaseLCBO and CEBaseLCBONew: are hybrids of Life Choice-Based Optimization (LCBO) and CEM

+ For evolutionary_based group: (Not good for large-scale problems)
+ EP:
+ OriginalEP: is the original version
+ LevyEP: applied levy-flight
+ ES:
+ OriginalES: is the original version
+ LevyES: applied levy-flight
+ MA:
+ OriginalMA: is the original version; the third loop can't be removed, so it is a very slow algorithm
+ GA:
+ BaseGA: is the original version
+ DE:
+ BaseDE: is the original version
+ FPA:
+ OriginalFPA: is the original version (levy-flight is already used in it)
+ CRO:
+ OriginalCRO: is the original version
+ OCRO: is the opposition-based version

+ For swarm_based group:
+ PSO:
+ OriginalPSO: is the original version
+ PPSO: Phasor particle swarm optimization: a simple and efficient variant of PSO
+ PSO_W: A modified particle swarm optimizer
+ HPSO_TVA: New self-organising hierarchical PSO with jumping time-varying acceleration coefficients
+ ABC:
+ OriginalABC: my version, taken from the Clever Algorithms book
+ FA:
+ OriginalFA: is the original version; it runs slowly even though all third loops have already been removed
+ BA:
+ OriginalBA: is the original version
+ BasicBA: is also the original version with improved parameters
+ AdaptiveBA: my modified version without A parameter
+ PIO:
+ This is a made-up algorithm; after changing almost everything, the algorithm works
+ BasePIO: My base version
+ LevyPIO: My version based on levy-flight for large-scale dimensions
+ GWO:
+ OriginalGWO: is the original version
+ ALO:
+ OriginalALO: is the original version; slow and less efficient
+ BaseALO: my modified version, which uses matrix multiplication for speed
+ MFO:
+ OriginalMFO: is the original version
+ BaseMFO: my modified version, which removes the third loop and changes the equations and flow
+ EHO:
+ OriginalEHO: is the original version
+ LevyEHO: my levy-flight version of EHO
+ WOA:
+ OriginalWOA: is the original version
+ BSA:
+ OriginalBSA: is the original version
+ SRSR:
+ OriginalSRSR: is the original version
+ GOA:
+ OriginalGOA: is the original version with some changes from me:
+ I added a normal() component to Eq. 2.7
+ Changed the way the distance between two locations is calculated
+ Used the batch-size idea
+ MSA:
+ OriginalMSA: is my modified version, with some changes from the original Matlab code version
+ RHO:
+ OriginalRHO: is the original version, not working
+ BaseRHO: my changed version
+ LevyRHO: levy-flight for large-scale dimensions
+ Changed the flow of the algorithm
+ Uses normal() in the equation instead of uniform
+ Uses levy-flight instead of the uniform equation
+ EPO:
+ OriginalEPO: is the original version, can't converge at all
+ BaseEPO: my modified version:
+ First: I changed the T_s equation, so T and the random R are no longer needed.
+ Second: the old position is replaced if the new fitness value is better; otherwise the old position is kept
+ Third: removed the third loop for speed
+ Fourth: applied the batch-size idea
+ Fifth: added a normal() component and changed the minus sign to a plus
+ NMRA:
+ OriginalNMRA: The original version
+ The Matlab code of paper's author here: https://github.com/rohitsalgotra/Naked-Mole-Rat-Algorithm
+ The Matlab code and the paper are very different.
+ LevyNMRA: My levy-flight version
+ ImprovedNMRA:
+ Using mutation probability
+ Using levy-flight
+ Using crossover operator
+ BES:
+ OriginalBES: the original version
+ PFA:
+ OriginalPFA: is the original version; I redesigned the distance-based equation.
+ The problem with using the distance is that when the bounds and dimensions increase
--> the distance grows very fast --> the new position will always be out of bounds
--> so we should divide the distance by the number of dimensions and by the range of the bounds (upper - lower)
to stabilize it (see the sketch after this swarm_based group)
+ The second problem is that the new solution is based on all other solutions --> we should also divide the new solution
by the population size to stabilize it.
+ OPFA: is an enhanced version of PFA based on Opposition-based Learning (better than OriginalPFA)
+ ImprovedPFA: (sometimes better than OPFA)
+ uses opposition-based learning
+ uses levy-flight 2 times
+ SFO:
+ OriginalSFO: is the original version
+ ImprovedSFO: my improved version, in which:
+ The energy equation is reformed
+ The parameters A and epsilon are no longer needed
+ The idea of opposition-based learning is used
+ SLO:
+ OriginalSLO: is the changed version from my student
+ ImprovedSLO: is the improved version
+ SpaSA:
+ BaseSpaSA: is my modified version, the original paper has several unclear parameters and equations
+ MRFO:
+ OriginalMRFO: is the original version
+ LevyMRFO: is my modified version based on levy-flight
+ HHO:
+ OriginalHHO: is the original version
+ SSA:
+ OriginalSSA: is the original version
+ BaseSSA: my modified version
+ CSO:
+ OriginalCSO: is the original version
+ BFO:
+ BaseBFO: is the adaptive version of BFO
+ OriginalBFO: is the original version taken from Clever Algorithms
+ SSO:
+ OriginalSSO: is the original version
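A minimal sketch of the distance stabilization described for OriginalPFA above, under one illustrative reading; the follower-update equation itself is an assumption, and only the division by the number of dimensions and by the bound range comes from the note above:

```python
import numpy as np

def stabilized_follower_update(pos_i, pos_j, lb, ub, rng=np.random.default_rng()):
    """Follower move with the distance term divided by the number of dimensions
    and by the bound range (ub - lb), so that larger bounds and dimensions do
    not push the new position out of the search space."""
    dim = len(pos_i)
    distance = np.linalg.norm(pos_i - pos_j)
    scaled_dist = distance / (dim * np.mean(ub - lb))      # stabilized distance
    eps = rng.random() * scaled_dist                       # small, bound-aware noise scale
    new_pos = pos_i + rng.random(dim) * (pos_j - pos_i) + eps * rng.normal(0.0, 1.0, dim)
    return np.clip(new_pos, lb, ub)
```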


Change others
+ models_history.csv: Update history of meta-heuristic algorithms
+ examples:
+ Updated and added examples for all algorithms
+ All examples tested with CEC benchmarks and large-scale problems (1000 - 2000 dimensions)

---------------------------------------------------------------------

0.8.6

Change models
+ Fixed a bug where the position was returned instead of the fitness value in:
+ TLO
+ SARO
+ Update some algorithms:
+ SLO
+ NRO
+ ABC

+ Added some variant versions of PSO:
+ PPSO (Phasor particle swarm optimization: a simple and efficient variant of PSO)
+ PSO_W (A modified particle swarm optimizer)
+ HPSO_TVA (New self-organising hierarchical PSO with jumping time-varying acceleration coefficients)

+ Added more algorithms to the Swarm-based group
+ SpaSA: Sparrow Search Algorithm (it has the same abbreviation, SSA, as the Social Spider Algorithm --> I changed it to SpaSA)

Change others
+ models_history.csv: Update history of meta-heuristic algorithms
+ examples: Added new examples of:
+ PSO and variant of PSO
+ Updated all examples, which now use CEC functions

---------------------------------------------------------------------

0.8.5

Change models
+ Fixed bugs related to division by 0 and sqrt(0) in several algorithms
+ Added more algorithms to the Probabilistic-based group
+ CEBaseSBO
+ Added roulette-wheel selection to the root model (this method can now handle negative fitness values; see the sketch below)
+ Changed GA to use roulette-wheel selection instead of the k-tournament method
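A minimal sketch of a roulette-wheel selection that tolerates negative fitness values, assuming minimization; this illustrates the idea only and is not mealpy's exact root-model code:

```python
import numpy as np

def roulette_wheel_select(fitness, rng=np.random.default_rng()):
    """Roulette-wheel selection for minimization that handles negative fitness:
    convert fitness to non-negative weights where smaller fitness gets a larger
    slice of the wheel, then sample one index proportionally."""
    fitness = np.asarray(fitness, dtype=float)
    weights = (fitness.max() - fitness) + 1e-10   # non-negative; best fitness -> largest weight
    probs = weights / weights.sum()
    return int(rng.choice(len(fitness), p=probs))
```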

Change others
+ models_history.csv: Update history of meta-heuristic algorithms
+ examples: Added new examples of:
+ CE_SSDO, CE_SBO
+ GA, SBO

---------------------------------------------------------------------

0.8.4

Change models
+ Fix bugs in Probabilistic-based algorithm
+ OriginalCEM
+ CEBaseLCBO
+ CEBaseLCBONew: No levy
+ CEBaseSSDO
+ Fix bugs in Physics-based algorithm
+ LevyEO
+ Fix bug in Human-based algorithm
+ LCBO

+ Added Coronavirus Herd Immunity Optimization (CHIO) in Human-based group
+ Original version: OriginalCHIO
+ This version gets stuck in local optima and stops early because the infected cases quickly become immune
+ In my version, when all infected cases have become immune, I make 1/3 of the population infected again so that the
optimization step keeps going (see the sketch below).
+ My version: BaseCHIO
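A minimal sketch of the re-infection rule described above; the status encoding is hypothetical, not mealpy's internal representation:

```python
import numpy as np

# Hypothetical status codes, for illustration only.
SUSCEPTIBLE, INFECTED, IMMUNE = 0, 1, 2

def reseed_infected(status, rng=np.random.default_rng()):
    """If no infected cases remain (everyone became immune too quickly),
    re-infect 1/3 of the population so the optimization can keep going."""
    status = np.asarray(status).copy()
    if not np.any(status == INFECTED):
        idx = rng.choice(len(status), size=len(status) // 3, replace=False)
        status[idx] = INFECTED
    return status
```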

---------------------------------------------------------------------

0.8.3

Change models
+ Probabilistic-based algorithm
+ Added Cross-Entropy Method (CEM)
+ Added CEM + LCBO
+ Added CEM + SSDO

---------------------------------------------------------------------
