Exporting Rain-Fall Optimization concepts to Artificial Bee Colony

Alberto de la Encina, Natalia López, and Fernando Rubio1

This work has been partially supported by project TIN2015-67522-C3-3-R, and by Comunidad de Madrid as part of the program S2018/TCS-4339 (BLOQUES-CM), co-funded by EIE Funds of the European Union.

1Dpto. Sistemas Informáticos y Computación, Facultad de Informática, Universidad Complutense de Madrid, 28040 Madrid, Spain. albertoe@sip.ucm.es, natalia@sip.ucm.es, fernando@sip.ucm.es

Abstract— In this paper we take inspiration from the Rain-Fall Optimization metaheuristic (RFO) to modify other metaheuristics. An interesting concept of RFO is its merit list. We argue that this concept can be successfully exported to other metaheuristics. As a concrete case study, we have implemented a modified version of Artificial Bee Colony (ABC) that includes merit lists. The experiments we have conducted with a well-known benchmark show that merit lists can help improve the quality of the solutions found by ABC.

Index Terms— Water-based metaheuristics; Bio-inspired algorithms; Swarm Intelligence; Optimization.

I. INTRODUCTION

Swarm Intelligence [1], [2], [3], [4] is an interesting branch of bio-inspired algorithms in which the behavior of simple agents that interact with the environment, as well as among themselves, leads to the emergence of an intelligent global behavior. Many interesting swarm intelligence metaheuristics have been introduced in recent years, both to deal with continuous domain optimization problems (see e.g. Particle Swarm Optimization [5], [6], Artificial Bee Colony [7], [8], or Differential Evolution [9], [10]) and with combinatorial optimization problems (see e.g. Ant Colony Optimization [11], [12], Modified Turbulent Particle Swarm Optimization [13], [14], or River Formation Dynamics [15], [16]). Although most of them take inspiration from Biology, there are also several water-based metaheuristics [17] whose inspiration is taken from the behavior of water (see e.g. River Formation Dynamics [15], [18], Intelligent Water Drops [19], Water-flow Algorithm [20], Water Cycle Algorithm [21], Water Wave Optimization [22], or Water Evaporation Optimization [23]).

Although the inspiration can be taken from different sources, what makes a metaheuristic interesting is not the nice analogy it is based on, but the mathematical characterization that makes it find good solutions. In this sense, we think that any swarm intelligence metaheuristic should be analyzed independently of its nature-based analogy, trying to extract useful technical ideas that can be applied in different contexts. In particular, in this paper we focus on a recent water-based metaheuristic called Rain-fall Optimization (RFO [24]). The aim of our work is to investigate whether basic concepts underlying RFO can be exported to improve other metaheuristics. As a concrete case study, we consider a very well-known metaheuristic, Artificial Bee Colony (ABC [7], [8], [25]), and we analyze whether it can be improved by using RFO ideas.

In order to assess the usefulness of this particular modification of ABC (based on RFO), we will consider a standard benchmark used to analyze continuous domain optimization heuristics. The original ABC algorithm and several versions of the modified ABC method will be executed to solve the given benchmark. Then, a non-parametric statistical test will be used to compare the obtained results.
Our implementation of the modified ABC method extends our library of parallel implementations of metaheuristics [26]. Thus, we are able to take advantage of both multicore and distributed parallel architectures to speed up the execution of ABC.

The rest of the paper is structured as follows. Sections II and III describe the Rain-Fall Optimization (RFO) and Artificial Bee Colony (ABC) metaheuristics. Then, in Section IV we describe how to modify ABC by introducing RFO concepts. Afterwards, in Section V we present the results obtained on a well-known standard benchmark. Finally, Section VI presents our conclusions and some lines for future work.

II. RAIN-FALL OPTIMIZATION

The Rain-fall Optimization metaheuristic was introduced in 2017 in [24]. This method is based on the behavior of raindrops flowing down from high positions to lower ones (e.g. going down from mountains to valleys), simulating the tendency to descend through the steepest slopes. Let us remark that the main idea is closely related to classic optimization methods like hill-climbing or gradient descent.

In the first stage, a random distribution is used to initialize the positions of the raindrops all around the search space. Then, each drop has to find the position in its neighborhood that has the lowest value. Hence, in each iteration of the algorithm, each drop tries to move to a new position among its neighbors with a lower value, so as to keep moving toward its goal. The neighborhood is defined as all points whose distance to the current position is lower than a given radius r. In order to intensify the local search during the last iterations of the algorithm, the value of r is slightly reduced after each iteration.

In a continuous environment, the number of points in the neighborhood is infinite, so it is not possible to check all of them. Hence, in each iteration each drop randomly selects np points in its neighborhood. If the best point of this set is better than the current position of the drop, then the drop moves to this new position. Otherwise, the same method is repeated up to Ne times, that is, the drop evaluates again points in its neighborhood (increasing in each repetition the number np of points to be checked), trying to find a better position. After Ne unsuccessful repetitions, the drop becomes inactive. Once the drop either reaches a better position or becomes inactive, the algorithm continues the search with the next active raindrop.

After each iteration, once all the raindrops have either improved or become inactive, the active drops are sorted according to their ranks. Each ranking position is computed by taking into account two values: the fitness of the drop position and the difference between its current fitness and its first fitness. Let us remark that most metaheuristics are only guided by the quality of the solution found by each element. However, in this case drops that have improved more during their life are also taken into account: they are considered more promising even when their current fitness is not so high. After creating the list ordered by merit, the drops having the lowest ranking are turned into inactive mode, while higher values of Ne are assigned to the individuals with the best ranking. By doing so, their neighborhoods will be explored more deeply, allowing more points to be checked. The algorithm finishes when the ending condition is reached; there are two options for this condition: when there are no active drops left in the system, or after a given number of iterations.
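Since the neighborhood of a drop is continuous, the sampling step just described is the core of each iteration. The following Haskell sketch (Haskell being the base language of the Eden implementation used later) illustrates one move attempt of a single drop; the name moveDrop and the per-coordinate hypercube neighborhood are illustrative simplifications of the Euclidean ball used by RFO, not part of the original formulation, and np is assumed to be at least 1:

    import Data.List (minimumBy)
    import Data.Ord (comparing)
    import System.Random (randomRIO)

    type Point = [Double]

    -- One move attempt of a drop located at x, assuming a minimization
    -- objective f: sample np random neighbors at distance at most r in
    -- each coordinate, and return the best one if it improves on x.
    moveDrop :: (Point -> Double) -> Double -> Int -> Point -> IO (Maybe Point)
    moveDrop f r np x = do
      neighbors <- mapM (const (mapM (\xi -> randomRIO (xi - r, xi + r)) x))
                        [1 .. np]
      let best = minimumBy (comparing f) neighbors
      return (if f best < f x then Just best else Nothing)

A Nothing result corresponds to a failed attempt: the drop would retry with a larger np, up to Ne times, before becoming inactive.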
The main characteristic of RFO is how the merit list is created: the rank in this list does not depend only on the individual's fitness, but also on its improvement over time. That is, individuals that are progressing more in their solutions are considered promising even if their current quality is not good enough. The pseudocode of RFO can be summarized as follows:

  initialization
  repeat
    for each active drop
      initialize np value
      repeat
        generate np neighbors
        select best neighbor
        if neighbor improves drop
          then move drop to neighbor
          else if Ne iterations reached
                 then turn drop inactive
                 else update np value
      until drop improves or drop inactive
    compute merit list
    turn inactive the worst drops
    increase Ne of the best drops
  until all drops are inactive or maximum iterations reached
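The merit list itself can be sketched as follows, where the weight w that blends current quality and improvement is an illustrative parameter of this sketch (the original RFO paper [24] defines its own order-based ranking):

    import Data.List (sortOn)
    import Data.Ord (Down (..))

    -- A drop is summarized by its current objective value and the value
    -- it had at its initial position (we minimize, so lower is better).
    data Drop = Drop { current :: Double, initial :: Double }

    -- Merit: a weighted blend of improvement so far and current quality.
    merit :: Double -> Drop -> Double
    merit w d = w * (initial d - current d) - (1 - w) * current d

    -- Drops sorted from best merit to worst: the tail of the list would
    -- be turned inactive, while the head would receive larger Ne budgets.
    meritList :: Double -> [Drop] -> [Drop]
    meritList w = sortOn (Down . merit w)

With w = 1 only the improvement matters, while with w = 0 the list degenerates into a plain fitness ranking.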
III. ARTIFICIAL BEE COLONY ALGORITHM

The Artificial Bee Colony algorithm [7], [27] was developed for solving numeric optimization problems by simulating the behavior of a swarm of bees looking for the best food source. ABC combines local and global search methods with the aim of balancing the exploration and exploitation processes. For that reason, two kinds of bees are used to explore the solution space. On the one hand, employed bees are initially randomly distributed over the solution space. They explore the areas in their neighborhood, searching for the best nectar source (that is, the best solution), depending on their own experience and the experience of their beehive mates. When an employed bee has consumed a given number of trials (defined by a scouting counter parameter) without finding a better solution in a certain area, the bee randomly changes its position (during this reinitialization process, the bee is said to be a scout bee). On the other hand, onlooker bees produce new solutions without using their own experience. Instead, they use the experience of the employed bees. In particular, in each iteration each onlooker bee selects an employed bee to help, choosing with higher probability those employed bees that are working on better solutions.

The optimization of a given function with ABC proceeds as follows. Let us consider a function f : R^n -> R to be minimized in the range [bot_1, up_1] \times \cdots \times [bot_n, up_n], denoted by [bot, up], where bot_i and up_i are the lower and upper bounds of dimension i of the search space. Each bee i of the colony has five attributes: x_i \in R^n, its position in the search space; f(x_i), the function value at that position; fit(x_i), the fitness value, that is, the quality of this solution; p_i, the normalized fitness; and L_i, the scouting counter, which controls when an exploration area has to be abandoned. In addition, three more values are stored: x_best, the best nectar source found; f(x_best), its function value; and fit(x_best), its fitness value.

An employed bee moves from x to a new position v if this new solution improves the previous one. The position v depends on the current position x and on the position x' of another employed bee that is randomly chosen. An onlooker bee takes into consideration the dances of the employed bees, that is, the positions of the employed bees and their normalized fitness, to choose the position x of a selected employed bee. After that, a new random solution v is generated depending on x, and both solutions are compared. The best one becomes the new position of the selected employed bee.

At the beginning we need to set some initial values: the colony size, CS; the number of employed bees, CS/2; the number of onlooker bees, CS/2; and the scouting limit L, defined as L = (CS \cdot n)/2, where n is the dimension of the function f to be optimized. After setting the previous values, the main steps of the ABC algorithm are shown below:

  initializeFoodSources()
  Repeat
    moveEmployedBees()
    normalizeFitness()
    moveOnlookerBees()
    replaceAbandonedSolutions()
  Until endingCondition()

In the initializeFoodSources() phase, the position x_i of each employed bee i is randomly created in the range [bot, up], and the values of f(x_i) and fit(x_i) are calculated. The fitness function fit is defined as:

  fit(x_i) = \begin{cases} \frac{1}{1 + f(x_i)} & \text{if } f(x_i) \geq 0 \\ 1 + |f(x_i)| & \text{otherwise} \end{cases}

Once the first bee position is set, the best solution found is also initialized: x_best = x_1, f(x_best) = f(x_1), and fit(x_best) = fit(x_1). For the rest of the employed bees (for all 1 < i \leq CS/2), if fit(x_i) > fit(x_best) then x_best = x_i, f(x_best) = f(x_i), and fit(x_best) = fit(x_i).

After the initialization of the positions of the employed bees, the body of the Repeat loop is executed until endingCondition() is satisfied. This ending condition usually limits the number of iterations performed, the time consumed, or the fitness adequacy. The loop has four steps: the movement of the employed bees, the computation of the normalized fitness of each employed bee, the movement due to the onlooker bees' decisions, and the replacement of the abandoned solutions. Next, each step is explained in detail:

moveEmployedBees(): This step consists of moving all the employed bees. For each bee i at position x_i, a new solution v_i is produced by changing one randomly chosen dimension k of x_i. That is, if x_i = (x_{i1}, ..., x_{in}) is the current position, then v_i = (v_{i1}, ..., v_{in}) is obtained by applying the following formula:

  v_{im} = \begin{cases} x_{ik} + \Phi \cdot (x_{ik} - x_{jk}) & \text{if } m = k \\ x_{im} & \text{otherwise} \end{cases}

where \Phi is a random number in the range [-1, 1] and j \in [1, CS/2] is a randomly selected index (with j \neq i). After that, f(v_i) and fit(v_i) are computed, and if v_i improves the fitness value of x_i, that is, fit(v_i) > fit(x_i), the solution is replaced: x_i = v_i, and L_i is set to L. Otherwise, L_i is decreased by one unit. The best solution is also updated if fit(x_i) > fit(x_best).

normalizeFitness(): In this step the fitness of the position of each employed bee is normalized into [0, 1] with the following equation:

  p_i = \frac{fit(x_i)}{\sum_{k=1}^{CS/2} fit(x_k)}

This value represents the probability for an onlooker bee to choose the source of the employed bee i, and it is used in the following step.

moveOnlookerBees(): This step consists of the movement of the onlooker bees. Each onlooker bee selects the solution x_i (of an employed bee) depending on p_i. Once the selection is done, as in the case of the employed bees, a new solution v_i is created by changing one dimension of x_i with the same equation used for employed bees. Again, if fit(v_i) > fit(x_i), the solution of the corresponding employed bee is replaced: x_i = v_i, f(x_i) = f(v_i), fit(x_i) = fit(v_i), and L_i = L. Otherwise, L_i = L_i - 1. The best solution is also updated if fit(x_i) > fit(x_best).

replaceAbandonedSolutions(): The last step in each iteration is the treatment of the abandoned solutions. Once the scouting counter L_i becomes smaller than 0, the employed bee has to find a new solution to replace the current one. At this moment, the employed bee is said to be a scout bee. The new solution is randomly created in the range [bot, up], and its corresponding f(x_i) and fit(x_i) are calculated again. Then, the normal process of an employed bee is restarted at the new position.
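The employed-bee movement formula above can be read almost directly as code. The following Haskell fragment is an illustrative rendering, not the actual code of our Eden library: it assumes a fitness function fit to be maximized and omits the scouting counter bookkeeping.

    import System.Random (randomRIO)

    type Position = [Double]

    -- One move attempt of an employed bee at position x, where x' is the
    -- position of a randomly chosen mate. A single random dimension k is
    -- perturbed: v_k = x_k + phi * (x_k - x'_k), with phi uniform in [-1,1].
    moveEmployedBee :: (Position -> Double) -> Position -> Position -> IO Position
    moveEmployedBee fit x x' = do
      k   <- randomRIO (0, length x - 1)
      phi <- randomRIO (-1, 1)
      let v = [ if m == k then xm + phi * (xm - x'm) else xm
              | (m, (xm, x'm)) <- zip [0 ..] (zip x x') ]
      return (if fit v > fit x then v else x)  -- greedy replacement

The same routine also covers the onlooker-bee case, since onlookers apply the identical perturbation once they have selected an employed bee.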
IV. INTRODUCING MERIT LISTS IN ABC

Every metaheuristic has to find a balance between exploitation and exploration, so that the most promising regions are explored more deeply without abandoning the option of finding good candidates in the overall area. In the case of ABC, employed bees guarantee exploration, as they work on their own regions until the scouting counter is exhausted, and then they transform themselves into scout bees that search for new positions in the whole search space. Meanwhile, the onlooker bees deal with the exploitation part, as they work with higher probability in the areas where the employed bees have found better solutions so far.

The merit lists introduced in RFO define a new rule to decide how to perform exploitation, so that promising solutions are not only those where very good solutions have been found, but also those whose solutions are improving a lot (although perhaps they are not yet so good). Thus, in order to include merit lists in ABC, we modify the rule used by onlooker bees to select their food source. As we need to compute the improvement achieved at each food source, each employed bee has to record one additional datum: the value obtained in its first trial. Its improvement is then computed as a simple subtraction between the current and the initial value of its position. Let us remark that, in fact, we have two options to perform this computation: we can record the difference with respect to the original value of the food source, or we can compute the improvement with respect to the value obtained n steps before, so that we concentrate on the improvement achieved in the last movements. Both strategies can be implemented with the same information, because the only difference between them is the moment when the value representing the initial value of the employed bee is updated.

Once we have such parameters, we can redefine the probability of an onlooker bee to select a concrete food source i as:

  p_i = \frac{improvement(x_i)}{\sum_{k=1}^{CS/2} improvement(x_k)}

where improvement(x_i) = fit(x_i) - initialfit(x_i). By doing so, we only consider how much improvement has been achieved at each food source. Obviously, we can find a tradeoff between a pure merit-list approach and the classical ABC approach by introducing a new merit ratio parameter MR \in [0, 1], where 1 means considering only the quality improvement, 0 means using only the total quality of the solutions, and any intermediate value combines both criteria by using the following formula:

  p_i = MR \cdot \frac{improvement(x_i)}{\sum_{k=1}^{CS/2} improvement(x_k)} + (1 - MR) \cdot \frac{fit(x_i)}{\sum_{k=1}^{CS/2} fit(x_k)}
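A minimal Haskell sketch of this blended selection probability follows; the name selectionProbs and its argument names are illustrative, and nonnegative improvements with a nonzero sum are assumed:

    -- Blended onlooker selection probabilities. fits holds the current
    -- fitness of each employed bee and imps its improvement (current
    -- fitness minus initial fitness); mr is the merit ratio MR in [0,1].
    selectionProbs :: Double -> [Double] -> [Double] -> [Double]
    selectionProbs mr fits imps =
        zipWith blend (normalize imps) (normalize fits)
      where
        blend pImp pFit = mr * pImp + (1 - mr) * pFit
        normalize vs = let s = sum vs in map (/ s) vs

For mr = 0 this reduces to the classical ABC probabilities, while for mr = 1 only the merit-list criterion is used.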
In order to provide a concrete implementation, we have used the parallel functional language Eden (see e.g. [28], [29], [30]). The higher-order nature of this language simplifies the task of developing generic parallel skeletons for different metaheuristics. Moreover, it also simplifies the task of comparing different methods on a common set of benchmarks. In fact, we have extended the skeleton library presented in [26], so that we can reuse the tools provided by that library, as well as its capability to provide generic parallel versions of our implementations. The method we have implemented can thus be executed on any multicore computer, but also on any distributed system, provided that a suitable MPI library (see e.g. [31]) is available.

V. EXPERIMENTS

A. Experimental Setup

In order to perform the comparison between the original and the modified versions of ABC, we use some well-known benchmark functions. In particular, we consider several functions studied in [32]. Let us remark that the aim of this work is not to compare ABC with other methods, but to analyze whether we can improve ABC itself. The reader interested in comparisons with other methods can check [26], where ABC is compared against Particle Swarm Optimization and Differential Evolution. The mathematical description of each benchmark function is the following:

• Sphere Model: f_1(x) = \sum_{i=1}^{n} x_i^2

• Schwefel's Problem 2.22: f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|

• Schwefel's Problem 1.2: f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2

• Schwefel's Problem 2.21: f_4(x) = \max_i \{ |x_i|, 1 \leq i \leq n \}

• Generalized Rosenbrock's Function: f_5(x) = \sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]

• Step Function: f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2

• Generalized Schwefel's Problem 2.26: f_7(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|})

• Generalized Rastrigin's Function: f_8(x) = \sum_{i=1}^{n} [x_i^2 - 10 \cos(2\pi x_i) + 10]

• Ackley's Function: f_9(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e

• Generalized Griewank Function: f_{10}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1

• Generalized Penalized Function I: f_{11}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + \frac{1}{4}(x_i + 1) and

  u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m & x_i > a \\ 0 & -a \leq x_i \leq a \\ k (-x_i - a)^m & x_i < -a \end{cases}

• Generalized Penalized Function II: f_{12}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 [1 + \sin^2(3\pi x_{i+1})] + (x_n - 1)^2 [1 + \sin^2(2\pi x_n)] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)

• Shekel's Foxholes Function: f_{13}(x) = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right]^{-1}

• Kowalik's Function: f_{14}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2

• Six-Hump Camel-Back Function: f_{15}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4

• Branin Function: f_{16}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos(x_1) + 10

The most relevant features of these functions are depicted in Table I. For each function, the number of dimensions and the boundaries of the solution space are shown.

TABLE I
BENCHMARK FUNCTIONS: DIMENSIONS AND BOUNDS

  Function   Dimensions (n)   Bounds
  f1(x)      30               [-100, 100]^n
  f2(x)      30               [-10, 10]^n
  f3(x)      30               [-100, 100]^n
  f4(x)      30               [-100, 100]^n
  f5(x)      30               [-30, 30]^n
  f6(x)      30               [-100, 100]^n
  f7(x)      30               [-500, 500]^n
  f8(x)      30               [-5.12, 5.12]^n
  f9(x)      30               [-32, 32]^n
  f10(x)     30               [-600, 600]^n
  f11(x)     30               [-50, 50]^n
  f12(x)     30               [-50, 50]^n
  f13(x)     2                [-65.536, 65.536]^n
  f14(x)     4                [-5, 5]^n
  f15(x)     2                [-5, 5]^n
  f16(x)     2                [-5, 10] x [0, 15]

Taking into account these features and the functions themselves, they can be divided into two groups. The first group consists of the functions with a high number of dimensions, namely the first twelve. This group can in turn be divided into two subgroups: the first six functions, which are relatively simple in spite of their number of dimensions, and functions f7 to f12, where finding the minimum value is harder. The reason for this hardness is that these functions have more local minima; in fact, their number of local minima grows exponentially with the number of dimensions. Finally, the second group is composed of the last four functions, f13 to f16. These are low-dimensional functions with only a few local minima.
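Since our implementation is written in a functional language, these mathematical definitions translate almost literally into code. As an illustration, the following Haskell fragment shows two of the sixteen functions written directly from the formulas above (plain Haskell is shown for readability; the concrete definitions in our library may differ):

    -- Two of the benchmark functions, written directly from their
    -- definitions; the dimension n is the length of the argument list.
    sphere :: [Double] -> Double          -- f1, Sphere Model
    sphere xs = sum [ x * x | x <- xs ]

    rastrigin :: [Double] -> Double       -- f8, Generalized Rastrigin
    rastrigin xs = sum [ x * x - 10 * cos (2 * pi * x) + 10 | x <- xs ]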
B. Results and Discussion

As explained in the previous section, we have introduced a new parameter MR that controls how much relevance the merit list has with respect to the quality of the solutions found by the employed bees. We have performed experiments with six different values of MR: 0, 0.1, 0.3, 0.7, 0.9, and 1, where 0 corresponds to classical ABC and 1 means that only the improvement obtained at each source is considered, not the overall quality of the solution. For each MR value, we have solved the complete benchmark, using 100 bees and 5000 iterations of the ABC algorithm for each problem. In order to reduce statistical perturbations, each experiment has been repeated 40 times, and the average value of these 40 runs has been used.

As a first issue, it is worth pointing out the computational efficiency of the modified ABC version: the extra cost required to handle the merit lists is negligible. Moreover, the degree of parallelism obtained with the alternative versions of ABC equals the original degree of parallelism, so the experiments do not show any difference in this regard.

To appropriately compare the quality of the results, we performed a statistical test; in particular, a Friedman test. Table II shows the results of applying a Friedman test considering the aforementioned six configurations and the sixteen case studies. Recall that in Friedman rankings, the lower the value, the better the algorithm.

TABLE II
FRIEDMAN RANKING RESULTS INCLUDING LOW-DIMENSIONAL FUNCTIONS

  MR value   Ranking
  0          3.2813
  0.1        3.3438
  0.3        3.3438
  1          3.4063
  0.9        3.7813
  0.7        3.8438

As can be seen, the best results are obtained by the original ABC method. Thus, a first view of the results seems to discourage taking the merit-list information into account. However, if we analyze the results for each of the problems in more detail, we discover an interesting characteristic: the advantage of the original ABC method is obtained in the low-dimensional functions (f13 to f16), while its behavior is worse in the high-dimensional problems. In fact, we have also computed a Friedman test analyzing the six configurations but considering only the twelve 30-dimensional problems; the results can be seen in Table III.

TABLE III
FRIEDMAN RANKING RESULTS INCLUDING ONLY HIGH-DIMENSIONAL FUNCTIONS

  MR value   Ranking
  1          3.0417
  0.1        3.4583
  0.3        3.4583
  0.9        3.625
  0          3.625
  0.7        3.7917

Let us remark that in this case the best results are obtained when we only consider the information of the merit list, while the original ABC method obtains the fifth position out of the six methods. Thus, although the difference is small, the results suggest that introducing the merit-list information can improve the results of ABC, but only when the problems under consideration have a high number of dimensions. When the search space has only two or four dimensions, the original ABC finds better solutions. That is, the improvement only takes place with harder problems. In any case, it is interesting to note that the differences in all the situations (with any number of dimensions) are small. Thus, we can conclude that the usefulness of ABC does not depend on using the current fitness value, as it works in a very similar way when guided by how much the fitness value has improved.
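For reference, the average ranks shown in Tables II and III can be obtained as in the following illustrative Haskell sketch, which ignores tie handling (the actual Friedman test assigns average ranks to tied values):

    import Data.List (sortOn, transpose)

    -- Average Friedman ranks. Each inner list holds the results of all
    -- configurations on one problem; lower values are better and thus
    -- receive lower ranks. The result contains one average rank per
    -- configuration.
    friedmanRanks :: [[Double]] -> [Double]
    friedmanRanks results = map average (transpose (map ranksOf results))
      where
        ranksOf row = map snd . sortOn fst $
          [ (idx, rank)
          | ((_, idx), rank) <- zip (sortOn fst (zip row [0 :: Int ..])) [1 ..] ]
        average xs = sum xs / fromIntegral (length xs)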
VI. CONCLUSIONS AND FUTURE WORK

In this paper we have presented how Artificial Bee Colony can be improved by including merit lists, a concept borrowed from Rain-Fall Optimization. The experimental results show that onlooker bees can improve their behavior by using merit lists when the search space under consideration has a high number of dimensions. In particular, better results have been found for the 30-dimensional problems, but not for the problems with only 2 or 4 dimensions.

This modification can also be extended to other metaheuristics. In fact, we are currently working on using merit lists with other well-known methods like Particle Swarm Optimization [33] and Differential Evolution [26]. In the first case, particles tend to move in the direction of the best particle of the swarm; however, we could also consider (partially) moving towards the particle that is improving its quality faster. Analogously, in the case of Differential Evolution we could select with higher probability those individuals whose quality has improved at a higher rate during the execution of the algorithm. Finally, we are also interested in using the modified versions of these metaheuristics to deal with a set of concrete practical NP-hard problems we are currently working on, including those presented in [34], [35], [36].

ACKNOWLEDGMENTS

The authors would like to thank Mercedes Hidalgo-Herrero for her valuable suggestions about the content of this work.

REFERENCES

[1] G. Beni and J. Wang, "Swarm intelligence in cellular robotic systems," in NATO Advanced Workshop on Robotics and Biological Systems, vol. 102, 1989.
[2] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, Inc., 1999.
[3] J. Kennedy and R. Eberhart, Swarm Intelligence. Morgan Kaufmann Publishers Inc., 2001.
[4] J. Kennedy, "Swarm intelligence," in Handbook of Nature-Inspired and Innovative Computing, A. Zomaya, Ed. Springer US, 2006, pp. 187–219.
[5] J. Kennedy and R. Eberhart, "Particle swarm optimization," in IEEE International Conference on Neural Networks, vol. 4. IEEE Computer Society Press, 1995, pp. 1942–1948.
[6] A. P. Engelbrecht, "Particle swarm optimization," in Genetic and Evolutionary Computation Conference, GECCO'15, Companion Material Proceedings. ACM, 2015, pp. 65–91.
[7] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
[8] D. Karaboga, B. Görkemli, C. Ozturk, and N. Karaboga, "A comprehensive survey: artificial bee colony (ABC) algorithm and applications," Artificial Intelligence Review, vol. 42, no. 1, pp. 21–57, 2014.
[9] R. Storn, "On the usage of differential evolution for function optimization," in Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS). IEEE, 1996, pp. 519–523.
[10] S. Das and P. N. Suganthan, "Differential evolution: a survey of the state-of-the-art," IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, 2011.
[11] M. Dorigo, V. Maniezzo, and A. Colorni, "Ant system: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 26, no. 1, pp. 29–41, 1996.
[12] M. Dorigo and M. Birattari, "Ant colony optimization," in Encyclopedia of Machine Learning. Springer, 2010, pp. 36–39.
[13] G. Cui, L. Qin, S. Liu, Y. Wang, X. Zhang, and X. Cao, "Modified PSO algorithm for solving planar graph coloring problem," Progress in Natural Science, vol. 18, no. 3, pp. 353–357, 2008.
[14] L.-Y. Hsu, S.-J. Horng, P. Fan, M. Khan, Y.-R. Wang, R.-S. Run, J.-L. Lai, and R.-J. Chen, "MTPSO algorithm for solving planar graph coloring problem," Expert Systems with Applications, vol. 38, no. 5, pp. 5525–5531, 2011.
[15] P. Rabanal, I. Rodríguez, and F. Rubio, "Using river formation dynamics to design heuristic algorithms," in Unconventional Computation, UC'07, LNCS 4618. Springer, 2007, pp. 163–177.
[16] ——, "Applications of river formation dynamics," Journal of Computational Science, vol. 22, pp. 26–35, 2017.
[17] F. Rubio and I. Rodríguez, "Water-based metaheuristics: How water dynamics can help us to solve NP-hard problems," Complexity, 2019.
[18] P. Rabanal, I. Rodríguez, and F. Rubio, "Towards applying river formation dynamics in continuous optimization problems," in International Work-Conference on Artificial Neural Networks. Springer, 2019, pp. 823–832.
[19] H. S. Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation, CEC'07. IEEE, 2007, pp. 3226–3231.
[20] F.-C. Yang and Y.-P. Wang, "Water flow-like algorithm for object grouping problems," Journal of the Chinese Institute of Industrial Engineers, vol. 24, no. 6, pp. 475–488, 2007.
[21] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110, pp. 151–166, 2012.
[22] Y.-J. Zheng, "Water wave optimization: a new nature-inspired metaheuristic," Computers & Operations Research, vol. 55, pp. 1–11, 2015.
[23] A. Kaveh and T. Bakhshpoori, "Water evaporation optimization: a novel physically inspired optimization algorithm," Computers & Structures, vol. 167, pp. 69–85, 2016.
[24] S. H. A. Kaboli, J. Selvaraj, and N. Rahim, "Rain-fall optimization algorithm: A population based algorithm for solving constrained optimization problems," Journal of Computational Science, vol. 19, pp. 31–42, 2017.
[25] F. Rubio, A. de la Encina, P. Rabanal, and I. Rodríguez, "Eden's bees: Parallelizing Artificial Bee Colony in a functional environment," in ICCS'13, 2013, pp. 661–670.
[26] F. Rubio, A. de la Encina, P. Rabanal, and I. Rodríguez, "A parallel swarm library based on functional programming," in International Work-Conference on Artificial Neural Networks. Springer, 2017, pp. 3–15.
[27] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1-4, pp. 61–85, 2009.
[28] U. Klusik, R. Peña, and F. Rubio, "Replicated workers in Eden," in Constructive Methods for Parallel Programming (CMPP'00). Nova Science, 2000.
[29] M. Hidalgo-Herrero, Y. Ortega-Mallén, and F. Rubio, "Analyzing the influence of mixed evaluation on the performance of Eden skeletons," Parallel Computing, vol. 32, no. 7-8, pp. 523–538, 2006.
[30] R. Loogen, "Eden: parallel functional programming with Haskell," in Central European Functional Programming School. Springer, 2011, pp. 142–206.
[31] M. Snir, W. Gropp, S. Otto, S. Huss-Lederman, J. Dongarra, and D. Walker, MPI: The Complete Reference. The MPI Core. MIT Press, 1998, vol. 1.
[32] X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
[33] P. Rabanal, I. Rodríguez, and F. Rubio, "Parallelizing particle swarm optimization in a functional programming environment," Algorithms, vol. 7, no. 4, pp. 554–581, 2014.
[34] I. Rodríguez, F. Rubio, and P. Rabanal, "Automatic media planning: optimal advertisement placement problems," in 2016 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2016, pp. 5170–5177.
[35] I. Rodríguez, P. Rabanal, and F. Rubio, "How to make a best-seller: Optimal product design problems," Applied Soft Computing, vol. 55, pp. 178–196, June 2017.
[36] A. de la Encina, N. López, I. Rodríguez, and F. Rubio, "The problems of selecting problems," in International Work-Conference on Artificial Neural Networks. Springer, 2019, pp. 760–772.