
THEORETICAL BACKGROUND

2.1 The HS Algorithm

HS (Geem et al., 2001) is a relatively new population-based metaheuristic optimization algorithm that imitates the music improvisation process, in which musicians improvise the pitches of their instruments in search of a perfect state of harmony. It has been successfully tailored to various scientific and engineering applications such as music composition (Geem and Choi, 2007), sudoku puzzle solving (Geem, 2007a), tour planning (Geem, Tseng and Park, 2005), web page clustering (Forsati et al., 2008; Mahdavi and Abolhassani, 2009), structural design (Lee and Geem, 2004; Geem, 2009a), water network design (Geem, 2009c), vehicle routing (Geem, Lee and Park, 2005), dam scheduling (Geem, 2007b), ground water modeling (Ayvaz, 2009, 2007), soil stability analysis (Cheng et al., 2008), ecological conservation (Geem and Williams, 2008), energy system dispatch (Vasebi et al., 2007), heat exchanger design (Fesanghary et al., 2009), transportation energy modeling (Ceylan et al., 2008), satellite heat pipe design (Geem and Hwangbo, 2006), medical physics (Panchal, 2009), timetabling (Al-Betar et al., 2008; Al-Betar, Khader and Liao, 2010), and RNA structure prediction (Mohsen et al., 2010); for further information on these applications, see (Ingram and Zhang, 2009) and the references therein. HS is a very successful metaheuristic algorithm that explores the search space of a given problem in a parallel optimization environment, where each solution (harmony) vector is generated by intelligently exploring and exploiting the search space (Geem, 2009c).

As mentioned above, HS mimics the improvisation process of musicians in an intelligent way, as can be seen in Figure 2.1. The analogy between improvisation and optimization is as follows (Geem, 2010):

1. Each musician corresponds to a decision variable.

2. Musical instrument’s pitch range corresponds to the decision variable’s value range.

3. Musical harmony at a certain time corresponds to the solution vector at a certain iteration.

4. Audience’s aesthetics corresponds to the objective function.

Just as musical harmony is improved time after time, the solution vector is improved iteration by iteration. In general, HS has five steps, which are described in (Geem, Tseng and Park, 2005) as follows:

Step 1. Initialize the Optimization Problem and HS Parameters

The optimization problem is defined as follows:

\[
\text{minimize/maximize } f(a), \quad \text{subject to } a_i \in A_i, \; i = 1, 2, \ldots, N, \tag{2.1}
\]

where $f(a)$ is the objective function; $a$ is the set of decision variables $(a_i)$; $A_i$ is the set of the possible range of values for each decision variable, $La_i \leq A_i \leq Ua_i$, where $La_i$ and $Ua_i$ are the lower and upper bounds of each decision variable $(a_i)$, respectively; and $N$ is the number of decision variables.

Figure 2.1: Analogy between Improvisation and Optimization, obtained from (Geem, 2010).

Then, the parameters of the HS are initialized. These parameters are:

(a) Harmony Memory Size (HMS), i.e., the number of solution vectors in harmony memory;

(b) Harmony Memory Considering Rate (HMCR), where HMCR ∈ [0, 1];

(c) Pitch Adjusting Rate (PAR), where PAR ∈ [0, 1];

(d) Stopping Criterion, i.e., the number of improvisations (NI).

More explanation of these parameters is provided in the next steps.
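
To make Step 1 concrete, the following is a minimal sketch in Python (not from the source); the sphere objective, the bounds, and all parameter values are illustrative assumptions only, chosen to show how the problem and the HS parameters might be encoded.

    import random

    # Hypothetical example problem: minimize the sphere function f(a) = sum(a_i^2)
    def f(a):
        return sum(x * x for x in a)

    N = 5                          # number of decision variables
    bounds = [(-10.0, 10.0)] * N   # A_i = [La_i, Ua_i] for each decision variable

    # HS parameters (Step 1); the values below are illustrative, not prescribed
    HMS = 10       # harmony memory size
    HMCR = 0.95    # harmony memory considering rate
    PAR = 0.3      # pitch adjusting rate
    NI = 10000     # number of improvisations (stopping criterion)
    bw = 0.01      # bandwidth (fret width) for pitch adjustment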

Step 2. Initialize Harmony Memory

The harmony memory (HM) is a matrix of solutions of size HMS, where each HM vector represents one solution, as can be seen in Eq. 2.2. In this step, the solutions are randomly constructed and optionally sorted into HM by their objective function values, such that $f(a^1) \leq f(a^2) \leq \cdots \leq f(a^{HMS})$, where $f(a^j)$ is the objective function value of the $j$th harmony memory vector.

\[
\mathrm{HM} =
\begin{bmatrix}
a_1^{1} & a_2^{1} & \cdots & a_N^{1} & f(a^{1}) \\
a_1^{2} & a_2^{2} & \cdots & a_N^{2} & f(a^{2}) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_1^{HMS} & a_2^{HMS} & \cdots & a_N^{HMS} & f(a^{HMS})
\end{bmatrix}
\tag{2.2}
\]
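
A possible sketch of Step 2, reusing the illustrative names from the previous snippet (not from the source): HM is filled with HMS random solutions and, optionally, kept sorted by objective value.

    # Build the harmony memory: HMS random solutions, each stored with its f value.
    def initialize_hm():
        hm = []
        for _ in range(HMS):
            a = [random.uniform(lo, hi) for (lo, hi) in bounds]
            hm.append((a, f(a)))
        hm.sort(key=lambda pair: pair[1])   # f(a^1) <= f(a^2) <= ... <= f(a^HMS)
        return hm

    HM = initialize_hm()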

Step 3. Improvise New Harmony

This step is the essence of the HS algorithm and the cornerstone on which the algorithm is built. In this step, HS generates (improvises) a new harmony vector, $a' = (a'_1, a'_2, a'_3, \ldots, a'_N)$, based on three operators: memory consideration, pitch adjustment, and random consideration. In memory consideration, the values of the new harmony vector are randomly inherited from the historical values stored in HM with a probability of HMCR. Therefore, the value of the first decision variable $(a'_1)$ is chosen from $(a_1^1, a_1^2, a_1^3, \ldots, a_1^{HMS})$ stored in HM. The next decision variable $(a'_2)$ is chosen from $(a_2^1, a_2^2, a_2^3, \ldots, a_2^{HMS})$, and the other decision variables, $(a'_3, a'_4, a'_5, \ldots, a'_N)$, are chosen consecutively in the same manner, each with probability HMCR ∈ [0, 1].

The use of HM is similar to the way a musician uses his or her memory to generate a good tune. This cumulative step ensures that good harmonies are considered as elements of the new harmony vectors.

Alternatively, when decision variable values are not chosen from HM according to the HMCR probability test, they are chosen randomly from their possible range, $a'_i \in A_i$. This case is referred to as random consideration (with a probability of (1 − HMCR)); it increases the diversity of the solutions and drives the system to explore diverse solutions so that global optimality can be attained.

The following equation summarizes these two operators, i.e., memory consideration and random consideration:

\[
a'_i \leftarrow
\begin{cases}
a'_i \in \{a_i^1, a_i^2, a_i^3, \ldots, a_i^{HMS}\} & \text{w.p. } HMCR \\[4pt]
a'_i \in A_i & \text{w.p. } (1 - HMCR)
\end{cases}
\tag{2.3}
\]

For example, an HMCR of 0.95 indicates that the HS algorithm will choose the decision variable value from the historically stored values in HM with a 95% probability, or from the entire possible range with a 5% probability. Therefore, if a generated random number $w \in [0, 1]$ is equal to 0.78, the value of the decision variable is picked from HM, since 0.78 is less than the HMCR (0.95).
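
Eq. 2.3 can be sketched as below, again assuming the illustrative names introduced earlier; the helper also returns a flag indicating whether the value came from memory, since only inherited values are candidates for pitch adjustment in the next step.

    # Memory consideration vs. random consideration for one decision variable i.
    def improvise_component(i, hm):
        if random.random() <= HMCR:
            donor = random.randrange(HMS)       # memory consideration: inherit from HM
            return hm[donor][0][i], True
        lo, hi = bounds[i]
        return random.uniform(lo, hi), False    # random consideration over A_i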

Furthermore, an additional search for good solutions is achieved by tuning each decision variable in the new harmony vector, $a' = (a'_1, a'_2, a'_3, \ldots, a'_N)$, that was inherited from HM, using the PAR operator. Each such decision variable is examined for tuning with probability PAR ∈ [0, 1], as in Eq. 2.4.

\[
a'_i \leftarrow
\begin{cases}
\text{Adjusting Pitch} & \text{w.p. } PAR \\[4pt]
\text{Doing Nothing} & \text{w.p. } (1 - PAR)
\end{cases}
\tag{2.4}
\]

If a generated random number $rnd \in [0, 1]$ falls within the probability of PAR, then the new decision variable $(a'_i)$ is adjusted according to the following equation:

\[
a'_i = a'_i \pm rand() \times bw \tag{2.5}
\]

Here, $bw$ is an arbitrary distance bandwidth (renamed fret width in the updated version of HS (Geem, 2010)), used to improve the performance of HS, and $rand()$ is a function that generates a random number in $[0, 1]$. In effect, $bw$ determines the amount of movement or change applied to the components of the new vector. The value of $bw$ depends on the optimization problem itself, i.e., whether it is continuous or discrete.

In general, the way that the parameter PAR modifies the components of the new harmony vector is analogous to musicians' behavior when they slightly change their tone frequencies in order to obtain better harmonies. Consequently, it explores more solutions in the search space and improves the search ability of the algorithm.
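
A corresponding sketch of Eqs. 2.4-2.5 (illustrative only, reusing the earlier names): a value inherited from HM is nudged by a random fraction of bw with probability PAR; clamping the result back into A_i is an added safeguard that the text itself does not mention.

    # Pitch adjustment of an inherited component a'_i.
    def pitch_adjust(value, i):
        if random.random() <= PAR:
            value += random.choice((-1.0, 1.0)) * random.random() * bw
            lo, hi = bounds[i]
            value = min(max(value, lo), hi)   # assumed safeguard: keep a'_i inside A_i
        return value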

All of these operators are illustrated in the pseudo code in Fig. 2.2.

Step 4. Update the Harmony Memory

In order to update HM with the newly generated vector $a' = (a'_1, a'_2, a'_3, \ldots, a'_N)$, the objective function value $f(a')$ is calculated for the new harmony vector. If this value is better than the objective function value of the worst harmony vector stored in HM, then the worst harmony vector is replaced by the new vector; otherwise, the new vector is ignored.

\[
a' \in \mathrm{HM} \;\wedge\; a^{worst} \notin \mathrm{HM} \tag{2.6}
\]

However, to preserve the diversity of harmonies in HM, other replacement criteria (e.g., least similarity) can be considered. Also, a maximum number of identical harmonies in HM can be imposed in order to prevent premature convergence of HM.
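
Step 4 then reduces to a greedy replacement of the worst stored harmony, as in the sketch below (minimization assumed, matching the comparison f(a') ≤ f(a^worst) in Fig. 2.2; names are the illustrative ones from earlier snippets).

    # Replace the worst harmony in HM if the new vector is better; ignore it otherwise.
    def update_hm(hm, a_new):
        f_new = f(a_new)
        worst = max(range(HMS), key=lambda j: hm[j][1])
        if f_new <= hm[worst][1]:
            hm[worst] = (a_new, f_new)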

Step 5. Check the Stopping Criterion

The iteration process in steps 3 and 4 is terminated when the maximum number of improvisations (NI) is reached. Finally, the best HM vector is selected and is considered to be the best solution to the problem under investigation.

HS Algorithm
begin
    Define the fitness function f(a), a = (a_1, a_2, ..., a_N)^T
    Define HMCR, PAR, HMS, NI, bw
    HM ← GenerateInitialPopulation()
    min = lower bound of the decision variable range
    max = upper bound of the decision variable range
    for iter = 1 to NI do
        for i = 1 to N do
            if rand(0, 1) ≤ HMCR then                      // memory consideration
                randLoc = 1 + floor(rand(0, 1) × HMS)
                a'_i = a_i^randLoc
                if rand(0, 1) ≤ PAR then                   // pitch adjustment
                    a'_i = a'_i ± rand(0, 1) × bw
                end if
            else                                           // random consideration
                a'_i = min + rand(0, 1) × (max − min)
            end if
        end for
        if f(a') ≤ f(a^worst) then                         // update HM
            a' ∈ HM, a^worst ∉ HM
        end if
    end for
    best = find the current best solution in HM
end

Figure 2.2: Pseudo code of the HS algorithm.
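
Putting the five steps together, a compact end-to-end sketch of the loop in Fig. 2.2 might look as follows; it reuses the illustrative helpers defined above and is not the authors' implementation.

    # Full HS loop: improvise NI new harmonies, updating HM after each one.
    def harmony_search():
        hm = initialize_hm()
        for _ in range(NI):
            a_new = []
            for i in range(N):
                value, from_memory = improvise_component(i, hm)
                if from_memory:
                    value = pitch_adjust(value, i)   # PAR applies only to inherited values
                a_new.append(value)
            update_hm(hm, a_new)
        return min(hm, key=lambda pair: pair[1])     # best (a, f(a)) found

    best_a, best_f = harmony_search()
    print(best_a, best_f)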

2.1.1 HS Characteristics

The HS algorithm has several characteristics that make it one of the most important metaheuristic algorithms (Geem et al., 2001). These distinguish it from other metaheuristics: (1) the generation of a new vector after considering all existing vectors, rather than considering only two vectors (parents) as in GA; (2) the independent consideration of each decision variable in the HM vector; (3) the consideration of continuous decision variable values without any loss of precision; (4) no requirement for decimal-binary conversions or a fixed number (2^n) of decision variable values as in GA; and (5) no need for starting values of the decision variables or for complex derivatives as in gradient-based methods.

The other important strengths of HS are its improvisation operators, memory consideration, pitch adjustment, and random consideration, which play a major role in achieving the desired balance between the two major extremes of any optimization algorithm: intensification and diversification (Yang, 2009a). Essentially, pitch adjustment and random consideration are the key components for achieving the desired diversification in HS. In random consideration, the new vector's components are generated at random, with the same level of efficiency as in other algorithms that employ randomization; this property allows HS to explore new regions of the search space that may not have been visited before. Pitch adjustment adds a further way for HS to enhance its diversification ability by tuning the new vector's components within a given bandwidth: a small random amount is added to or subtracted from an existing component stored in HM. This operator is a fine-tuning process of local solutions that ensures that good local solutions are retained, while adding room for exploring new solutions. In addition, the pitch adjustment operator can also be considered a mechanism to support the intensification of HS by controlling the probability PAR. Intensification in the HS algorithm is represented by the third operator, memory consideration. A high harmony acceptance rate (HMCR) means that good solutions from the history/memory are more likely to be selected or inherited, which is equivalent to a certain degree of elitism.

Obviously, if the acceptance rate is too low, solutions will converge more slowly.

Finally, the structure of the HS algorithm is relatively simple. This advantage makes it very flexible to combine HS with other metaheuristic algorithms, as can be seen in Section 3.1.