1. Introduction
2. Abstract
3. Genetic Algorithms
3.1 History of genetic algorithms
3.2 Initialization
3.2.1 Number of solutions
3.2.2 The correct solution
3.3 Types of Representation
3.3.1 Binary Representations
3.3.2 Integer Representation
3.3.3 Real-valued Representations
3.3.4 Messy Representations
3.4 Selection
3.4.1 Roulette Wheel Selection (RWS)
3.4.2 Stochastic Universal Sampling (SUS)
3.4.3 Linear Rank Selection (LRS)
3.4.4 Exponential Rank Selection (ERS)
3.4.5 Tournament Selection
3.4.6 Truncation Selection
3.5 Genetic Operators
3.5.1 Crossover
3.5.1.1 Single Point Crossover
3.5.1.2 Double Point Crossover
3.5.1.3 Uniform Crossover
3.5.1.4 Arithmetic Crossover (floating point only)
3.5.1.5 Heuristic Crossover (floating point only)
3.5.2 Mutation
3.5.2.1 Uniform Mutation
3.5.2.2 Non-uniform Mutation (floating point only)
3.5.2.3 Boundary Mutation (floating point only)
3.5.2.4 Creep Mutation (floating point only)
3.6 Final Results
4. Smart Fit iPhone application
4.1 The genetic algorithm
4.1.1 Individual
4.1.1.1 Crossover
4.1.1.2 Mutation
4.1.2 Population
4.1.2.1 Tournament selection
4.2 My Workouts Feature
4.3 My Food Plan Feature
4.4 What's This? Feature
4.5 Statistics and Results
5. Bibliography

1. Introduction

Starting a healthy lifestyle might seem simple, but it is not. For beginners it is hard to find a good source of workout material or food plans; they have to search for hours or even days and read a lot before they learn even enough to know what is good and what is bad to do. Few applications let you generate workouts based on your preferences: most either offer ready-made plans that you can follow if you buy them, or let you create a workout from scratch, where you have to add each exercise yourself and know how to combine them. This is where AI can help. By using a genetic algorithm, the application is able to generate a workout for a specific group of muscles, or a food plan based on your desired goals. If you do not know the nutrition values of a specific food item, you can scan that item with the application and it will tell you every detail you need to know in order to take advantage of it. The application is designed with an easy-to-use interface, so that everyone can get used to it and make the most of it. The generated workouts are not the best ones possible, but for a total beginner in the gym they are a good starting point. The generated food plans are quite good, as they tell you what you can eat in order to achieve your goal, and in what quantity.

2. Abstract

The main goal of this thesis is to give a general description of what a genetic algorithm is, and to present a good way to create an iPhone application using a genetic algorithm. The thesis is structured in two big chapters, the first describing each aspect of a genetic algorithm, the second describing the implementation of the application. In the first chapter, the genetic algorithm and all its methods are presented: how it works, and how each step (crossover, mutation, selection) works in order to obtain a solution from a set of possible candidates. The problem is given a representation as an individual, then a random initial population is generated, each individual representing a possible solution, on which the genetic algorithm iterates to find a best candidate. On each iteration, crossover is applied between two candidates in order to obtain the desired characteristics of both, then a mutation method is used to add diversity to the population and reduce the premature convergence effect. In the end a selection algorithm is applied to select a few candidates, on which the algorithm will iterate again until a desired ending condition is reached. In the second chapter, a genetic algorithm is put to use in an iPhone application in order to generate good workouts for the user of the phone. The algorithm is based on single point crossover, uniform mutation and tournament selection. The algorithm stops after a number of generations is reached; this number is obtained after running some tests with the algorithm. The most important part is the evaluation of the fitness, which is carefully designed for the workout / food plan generation according to some well defined rules.

3. Genetic Algorithms

3.1 History of genetic algorithms

A group of computer scientists have studied evolutionary systems with the purpose of finding a way to use evolution as a tool for solving various engineering problems. The general idea for solving all of these problems was to iterate, using operators inspired by natural genetic variation and natural selection, in order to evolve a set of solutions. Rechenberg [9] (1965, 1973) mentions one of his ideas under the name "evolution strategies", which had the purpose of optimizing parameters for devices such as airfoils. This idea was then continued by Schwefel [13] (1975, 1977). Evolution strategies have been developing independently from the field of genetic algorithms, remaining until today an active research area. Genetic Algorithms are the most widely known type of Evolutionary Algorithms, now receiving remarkable attention among software developers. Probably the earliest predecessor of these algorithms emerged in the work of Fraser, a biologist who wanted to simulate evolution with special emphasis on the interaction of epistasis with selection. [1] In the 1960s, John Holland invented the Genetic Algorithm, which he and his students and colleagues at the University of Michigan developed between 1960 and 1970. His goal was not to design algorithms that could solve a specific set of problems, but rather to introduce into computer systems an evolution mechanism as found in nature. In his book "Adaptation in Natural and Artificial Systems" he describes the GA as a generalization of the evolution found in nature [13]. Holland himself pointed out how to interpret his reproductive plans in terms of genetics, economics, game-playing, pattern recognition, statistical inference, control, and parameter optimization.

3.2 Initialization

3.2.1 Number of solutions

One common idea in computer science is searching for a candidate solution in a set of possible candidates. This idea is so common in the field that the set of possible candidates gets the name of "search space". Most often one is looking for the best solution in a specific set of solutions. The space of all feasible solutions (the set of solutions where the desired solution is contained) is also called the "state space". Each and every point in the search space represents a possible solution of the problem. Therefore each possible solution can be "marked" by its fitness value, depending on the definition of the given problem. In Genetic Algorithms the candidate is searched for in a set where every candidate represents a point. The difficulties in this case are the local minima and the starting point of the search. [11]

Figure 1: Example of a search space, where the x axis represents the candidates, and the y axis the respective fitness value for the candidates

Genetic Algorithms are seen as algorithms with two main characteristics: the first is that they are stochastic algorithms, while the second is the essential role of randomness. Both selection and reproduction

need random procedures. A second very important point is the fact that genetic algorithms always consider a group (a collection) of solutions, named a population. By keeping in memory more than one solution at a time, these algorithms are presented with some benefits. The algorithm can recombine different previous solutions to get better ones with each iteration, thus using the benefits of assortment. A population-based algorithm is also open to parallelization. [2]

3.2.2 The correct solution

Genetic algorithms are iterated until the fitness value of the "best-so-far" chromosome stabilizes and the later iterations start providing less and less improvement. This means the algorithm has converged to a solution. The whole process of iterations is called a run. At the end of each run there is usually at least one chromosome that is a highly fit solution to the original problem. Depending on how the algorithm is written, this could be the most fit of all the "best-so-far" chromosomes, or the most fit of the final generation. The "performance" of a genetic algorithm depends highly on the method used to encode candidate solutions into chromosomes and "the particular criterion for success," or what the fitness function is actually measuring. [1]

3.3 Types of Representation

3.3.1 Binary Representations

Binary representations are the most widely used representations for selectorecombinative GAs. Selectorecombinative GAs use crossover as the primary search operator, and process schemata. In these types of GAs, mutation only serves as background noise to the algorithm. The search space Φ_g is defined as Φ_g = {0,1}^l, where l is the length of a binary vector x_g = (x_g,1, …, x_g,l) ∈ {0,1}^l.
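To make the genotype-phenotype idea concrete, the decoding of a binary genotype into a continuous phenotype can be sketched in Swift. The function and parameter names below are illustrative, not from the thesis; with l bits there are 2^l representable values, so the spacing between adjacent values bounds the attainable accuracy.

```swift
// Decode a binary genotype (array of 0/1 values, MSB first) into a
// continuous value in [lower, upper]. The representable values are
// spaced (upper - lower) / (2^l - 1) apart for a length-l vector.
func decode(_ genotype: [Int], lower: Double, upper: Double) -> Double {
    // Interpret the bit vector as an unsigned integer.
    let intValue = genotype.reduce(0) { $0 * 2 + $1 }
    let maxValue = (1 << genotype.count) - 1
    return lower + (upper - lower) * Double(intValue) / Double(maxValue)
}
```

For example, `decode([1, 0, 0, 0], lower: 0, upper: 1)` maps the integer 8 onto 8/15 of the interval.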

When using binary representations, the genotype-phenotype mapping f_g depends on the specific type of problem that is to be solved. For many combinatorial optimization problems, binary representation is seen as a very natural way of encoding the problem. Specific genotype-phenotype mappings are necessary when encoding integer problems using the binary representation. Different types of binary representations for integers assign the integers x_p ∈ Φ_p (phenotypes) in a different way to the binary vectors x_g ∈ Φ_g (genotypes). When encoding continuous variables by using binary vectors, the number of bits that are used to represent a phenotypic continuous variable gives the accuracy of the representation of the problem. By increasing the number of bits used for representing one continuous variable, the accuracy of the representation can be increased. The maximal accuracy that we can obtain when encoding a continuous phenotypic variable x_p ∈ [0,1] by using a binary vector of length l is 1/2^(l+1). [12]

3.3.2 Integer Representation

Instead of using binary strings with cardinality X = 2, we can use higher X-ary alphabets for the genotypes, where X ∈ {N+ ∖ {0,1}}. Then, instead of a binary alphabet, an X-ary alphabet is used for the string of length l. We are able to encode X^l different possibilities, as opposed to binary representation where we can encode only 2^l different individuals. The size of the search space increases from |Φ_g| = 2^l to |Φ_g| = X^l. [12] In many integer problems, it is often preferred to use binary instead of integer representations, thus allowing maximum schema processing with binary alphabets when using some of the standard recombination operators.

3.3.3 Real-valued Representations

When using real-valued representations, the search space Φ_g is defined as Φ_g = R^l, where l is the length of the real-valued chromosome. The genetic algorithm that uses the real-valued representation in the most favorable way is the mutation-based one. Real-valued representations can be used not only for the encoding of real-valued problems, but also for permutation and combinatorial problems, which would otherwise rely on the more natural encoding that the binary representation offers. Tree, schedule and tour problems can easily be represented by using real-valued vectors and special genotype-phenotype mappings. [6]

3.3.4 Messy Representations

In all the previously explained representations, each allele has a fixed position in the chromosome and only the value that corresponds to it is specified. Holland (1975) [3] proposed one of the first position-independent representations. He proposed an operator that would change the relative order of the alleles in the string, called the inversion operator. A tuple is formed from the position of the allele and its corresponding value, and this tuple is coded in a string. This type of representation can be used for binary, integer, and real-valued representations and allows encodings that are independent of the position of the alleles in the chromosome. Later, Goldberg (1989) used this position-independent representation for the messy genetic algorithm. [1]

3.4 Selection

In the Genetic Algorithms world there are six well known selection methods: roulette wheel selection (RWS), stochastic universal sampling (SUS), linear rank selection (LRS), exponential rank selection (ERS), tournament selection (TOS), and truncation selection (TRS) [3].

3.4.1 Roulette Wheel Selection (RWS)

RWS assigns to each individual i of the current population a probability p(i) of being selected that is proportional to its fitness value f(i):

p(i) = f(i) / Σ_{j=1..n} f(j)

where n is the population size, given by the number of individuals. A possible RWS implementation in pseudo-code could look like this:

RWS_pseudo-code {
  Calculate the sum S = Σ_{j=1..n} f(j)
  For each individual 1 ≤ i ≤ n do {
    Generate a random number a ∈ [0, S];
    iSum = 0; j = 0;
    Do {
      iSum ← iSum + f(j);
      j ← j + 1;
    } while (iSum < a and j < n)
    Select the individual j;
  }
}

One possible risk of this selection method is premature convergence of the algorithm to a local optimum, due to an individual being dominant over the others, winning every competition and always being selected as a parent. [9]
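The RWS pseudo-code above can also be sketched in Swift. The names below are illustrative, and fitness values are assumed non-negative:

```swift
// Roulette wheel selection: individual i is chosen with probability
// f(i) / S, where S is the total fitness of the population.
func rouletteWheelSelect(fitnesses: [Double]) -> Int {
    let total = fitnesses.reduce(0, +)
    // Spin the wheel: pick a random point in [0, S].
    var remaining = Double.random(in: 0...total)
    for (index, fitness) in fitnesses.enumerated() {
        remaining -= fitness
        if remaining <= 0 { return index }
    }
    return fitnesses.count - 1 // numerical safety at the upper edge
}
```

A highly fit individual covers a larger slice of the wheel, which is exactly the dominance effect described above.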

3.4.2 Stochastic Universal Sampling (SUS)

Stochastic Universal Sampling is based on RWS, with a small enhancement designed to reduce premature convergence. A possible SUS implementation in pseudo-code could look like this:

SUS_pseudo-code {
  Calculate the mean f̄ = (1/n) Σ_{i=1..n} f(i)
  Generate a random number α ∈ [0, 1];
  Sum = f(1); delta = α × f̄; j = 0;
  Do {
    If (delta < Sum) {
      select the jth individual;
      delta = delta + f̄;
    } else {
      j = j + 1;
      Sum = Sum + f(j);
    }
  } while (j < n)
}

3.4.3 Linear Rank Selection (LRS)

LRS is another attempt at reducing the risk of premature convergence of the GA to a local optimum. Instead of using each individual's fitness, the selection method uses their rank. The rank n will be assigned to the best individual, while the worst

individual will be assigned a rank with the value of 1. Thus, using the rank, the probability of an individual being selected is given by the following formula:

p(i) = rank(i) / (n × (n − 1))

After each individual is given a rank, the LRS selection method can take place. A possible LRS implementation in pseudo-code could look like this:

LRS_pseudo-code {
  Calculate v = 1 / (n − 2.001);
  For each individual 1 ≤ i ≤ n do {
    Generate a random number α ∈ [0, v];
    For each 1 ≤ j ≤ n do {
      If (p(j) ≤ α) {
        Select the jth individual;
        Break;
      }
    }
  }
}

3.4.4 Exponential Rank Selection (ERS)

Based on the same principle as LRS, ERS comes with a new method of computing the probability of selecting each individual, given by the following formula:

p(i) = 1.0 × exp(−rank(i) / c)

with c = (n × 2 × (n − 1)) / (6 × (n − 1) + n)

After each individual is assigned a probability of being selected, the ERS method can be used. A possible ERS implementation in pseudo-code could look like this:

ERS_pseudo-code {
  For each individual 1 ≤ i ≤ n do {
    Generate a random number α ∈ [1/(9c), 2c];
    For each 1 ≤ j ≤ n do {
      If (p(j) ≤ α) {
        Select the jth individual;
        Break;
      }
    }
  }
}

3.4.5 Tournament Selection

The tournament selection method is a variant of rank-based selection methods, based on a set of k randomly selected individuals. The individuals are ranked based on their fitness, and the fittest individual is selected for reproduction. This entire process is repeated n times, where n is the number of individuals in the population. The probability of each individual being selected is given by the following formula:

p(i) = C(n−1, k−1)   if i ∈ [1, n−k−1]
p(i) = 0             if i ∈ [n−k, n]
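Before looking at the pseudo-code, the basic tournament mechanism can be sketched in Swift. The names are illustrative; drawing the k contestants with replacement is an assumption of this sketch, and higher fitness is assumed to be better:

```swift
// Tournament selection: draw k individuals at random and return the
// index of the fittest among them. Requires k >= 1.
func tournamentSelect(fitnesses: [Double], k: Int) -> Int {
    var best = Int.random(in: 0..<fitnesses.count)
    for _ in 1..<k {
        let challenger = Int.random(in: 0..<fitnesses.count)
        if fitnesses[challenger] > fitnesses[best] {
            best = challenger
        }
    }
    return best
}
```

Larger values of k increase the selection pressure, since weak individuals almost never win a large tournament.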

After each individual is assigned a probability of being selected, the TOS method can be used. A possible TOS implementation in pseudo-code could look like this:

TOS_pseudo-code {
  Create a table t where the n individuals are placed in a randomly chosen order
  For 1 ≤ i ≤ n do {
    For 1 ≤ j ≤ n do {
      i1 = t(j);
      For 1 ≤ m ≤ n do {
        i2 = t(j + m);
        If (f(i1) > f(i2)) {
          select i1
        } else {
          select i2
        }
      }
      j = j + k;
    }
  }
}

3.4.6 Truncation Selection

The truncation selection method orders the individuals in the population by their fitness. Then, only the p fittest individuals from the population are selected and reproduced 1/p times. This selection method is most common in problems where a large population is required. [9]

A possible TRS implementation in pseudo-code could look like this:

TRS_pseudo-code {
  Order the n individuals of P(t) according to their fitness;
  Set the portion p of individuals to select;
  sp = abs(n × p);
  Select the first sp individuals;
}

3.5 Genetic Operators

After the parents are selected using one of the methods described in the sections above, they are randomly paired. Offspring are the result of applying genetic operators to the paired parents. The two main categories of genetic operators are crossover and mutation.

3.5.1 Crossover

Crossover combines the desired characteristics of two parents in order to create a variation in the quality of the newly created individual. Over the years, several variants of crossover have emerged in the GA literature, each of them unique in some way. It is hard to state that one method is better than another, because studies have been made on small sets of problems, and each method outperforms the others on a specific set of problems. [8]

3.5.1.1 Single Point Crossover

This method is the classical and simplest of all: a segment of the parents is exchanged in order to form two offspring. The position at which the segment starts is picked randomly between the first and the last allele. For example, in the following table the crossover point is picked to be position 2. The process that creates the offspring can be seen as follows: the segment that starts at position two of the first parent is exchanged with the segment that starts at position two in the second parent, resulting in two new offspring with characteristics from both parents. In this table x_i and y_i stand for a bit of a bit-string chromosome:

Parent                      Offspring
[y1, y2, y3, y4, y5, y6]    [y1, y2, x3, x4, x5, x6]
[x1, x2, x3, x4, x5, x6]    [x1, x2, y3, y4, y5, y6]

Figure 2: single point crossover in action

Despite the fact that this is the simplest crossover, we can encounter several drawbacks. One of them is called the positional bias: using this method we cannot create certain schemas. The other is called the endpoint effect: each endpoint of each parent is exchanged, which in some cases might not be an optimal effect. [6]

3.5.1.2 Double Point Crossover

In the Double Point Crossover method, the segment exchanged between the two parents in order to generate two offspring lies between two points that are randomly selected. For example, in the following table the randomly picked points are 2 and 5, and the segments between them are exchanged in order to obtain two different offspring:

Parent                      Offspring
[y1, y2, y3, y4, y5, y6]    [y1, y2, x3, x4, y5, y6]
[x1, x2, x3, x4, x5, x6]    [x1, x2, y3, y4, x5, x6]

Figure 3: double point crossover in action

In comparison with the single point crossover, the double point crossover is capable of combining more schemas, and in the process the endpoint effect drawback is removed. It is still not capable of creating certain schemas, just like the single point crossover. [5]

3.5.1.3 Uniform Crossover

Uniform Crossover comes as an extension to the single and double point crossover, allowing each allele in the parents to be exchanged with a probability p. For example, in the following table, considering the probability p = 0.6 and the series of generated random numbers g_i ∈ [0,1] for each allele i: 0.4, 0.7, 0.3, 0.8, 0.9, 0.5, the exchange will happen for the alleles that have g_i ≥ p in the following way:

Parent                      Offspring
[y1, y2, y3, y4, y5, y6]    [y1, x2, y3, x4, x5, y6]
[x1, x2, x3, x4, x5, x6]    [x1, y2, x3, y4, y5, x6]

Figure 4: uniform crossover in action

The uniform crossover allows the recombination of any schemas, since it allows any point in the parent to be exchanged. However, this can lead to some disadvantages: the randomness of the exchange may prevent valuable building blocks from being created (blocks that are short and highly fit and through recombination might generate strings of even higher fitness); because of this, the random exchange that is present in the uniform crossover can be highly disruptive to any schemas. [7]

3.5.1.4 Arithmetic Crossover (floating point only)
The crossover methods presented above work well with bit-string representations, but in the case of floating-point encoding a major drawback appears: the crossed parameter values from the parents are propagated into the offspring unchanged for most of the possible schemas, thus no new values are introduced. [7] This issue has been solved with the arithmetical crossover, in which the ith parameter Z_i in the offspring Z is obtained using a linear combination of the ith parameters of the two parents, X_i and Y_i, using the following formula:

Z_i = a × X_i + (1 − a) × Y_i

where a is a random value belonging to the interval [0, 1]. Using this linear formula it is guaranteed that the newly created offspring contains values that are always within the domain of the problem. For example, in the following table, considering that the crossover happens at positions 2 and 5, the new offspring will be generated in the following way:

Parent                      Offspring
[y1, y2, y3, y4, y5, y6]    [y1, a·y2+(1−a)·x2, y3, y4, a·y5+(1−a)·x5, y6]
[x1, x2, x3, x4, x5, x6]    [x1, a·x2+(1−a)·y2, x3, x4, a·x5+(1−a)·y5, x6]

Figure 5: arithmetic crossover in action
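The example above can be sketched in Swift. The function and parameter names are illustrative; a single blending factor a is assumed to be shared by all crossed positions, as in the table:

```swift
// Arithmetic crossover on selected positions of two real-valued parents:
// the crossed allele becomes a linear combination a*own + (1-a)*other,
// so with a in [0, 1] offspring stay within the parents' value range.
func arithmeticCross(_ x: [Double], _ y: [Double],
                     at positions: [Int], a: Double) -> ([Double], [Double]) {
    var first = y, second = x   // offspring start as copies of the parents
    for i in positions {
        first[i]  = a * y[i] + (1 - a) * x[i]
        second[i] = a * x[i] + (1 - a) * y[i]
    }
    return (first, second)
}
```

For instance, crossing [0, 0, 0] with [1, 1, 1] at position 1 with a = 0.25 yields offspring [1, 0.25, 1] and [0, 0.75, 0].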

3.5.1.5 Heuristic Crossover (floating point only)
Unlike the other methods described above, here the direction of search is guided using the values of the fitness function, according to the following formula:

Z = a × (Y − X) + Y   if f(Y) ≤ f(X)
Z = a × (X − Y) + X   otherwise

where a is a random value that belongs to the interval [0, 1]; X and Y are the parameter vectors of the two parents; f(X) is the fitness of the first parent and f(Y) is the fitness of the second parent. Contrary to the arithmetic crossover's offspring, which are always within the range of their parents, heuristic crossover will always generate offspring that are outside of the range of their parents. This can be a drawback, as in some cases the newly generated offspring values might be so far away from their parents that they stop respecting the allowed range of that variable. In this case, the offspring is ignored and a new attempt is made, where another a is tried and a new offspring is created. If after some attempts no offspring that satisfies the variable's bounds can be obtained, the operator stops and no resulting offspring is created. [7]

3.5.2 Mutation

The second category of genetic operators is mutation. Mutation allows the introduction of diversity into a population and thus decreases the chance of a premature convergence. As in the case of crossover, there are numerous variants of this method that have been developed over the years by GA researchers, each of them coming with its own weaknesses and strengths, so a general comparison between them is not meaningful.

3.5.2.1 Uniform Mutation
Uniform Mutation is the most classic and simple mutation there is, in which each allele is presented with a chance to be mutated. For each allele, a random number r is generated and compared with the mutation rate p_m, which is a user-specified parameter. If r ≤ p_m, then the allele to which r corresponds will be mutated. In the case of a binary representation, bit-string or Gray-coded integer, the mutation operator flips the bit from 0 to 1 or vice versa. In the case of a floating-point representation, the allele that is selected for mutation is replaced with a randomly generated value that is within the domain of that variable. For example, considering the chromosome X = [3.45, 2.33, 3.5], if the selected allele is x_3 = 3.5 and its domain is x_3 ∈ [2, 5], the resulting chromosome after the mutation is applied might be X = [3.45, 2.33, 4.3].

3.5.2.2 Non-uniform Mutation (floating point only)

Michalewicz [9] introduced a dynamic mutation operator called non-uniform mutation, with the idea of optimizing the fine tuning of the chromosome and, in contrast with uniform mutation, of reducing the randomness of the mutated allele. This mutation works in the following way: if element x_i is selected for mutation, then the resulting offspring is obtained by applying the following formula:

Z_i = x_i + Δ(t, UB_i − x_i)   if a > 0.5
Z_i = x_i − Δ(t, x_i − LB_i)   if a ≤ 0.5

where a is a randomly generated number belonging to [0, 1], and LB_i and UB_i are the lower and upper bounds of variable x_i. The function Δ(t, x) is defined by:

Δ(t, x) = x × (1 − r^((1 − t/T)^b))

where r is also a randomly generated number belonging to [0, 1], T is the maximum generation number, and b is a value defined by the user, depending on the problem, that gives the degree of dependency on the iteration t. This simulated annealing formula returns a value that belongs to [0, x], with the property that the probability of Δ(t, x) being close to 0 increases as t increases. Because of this property the newly created offspring respects the bounds of the variable. Moreover, it gives the operator the ability to explore the space in a broader manner, having an initially large mutation step that becomes smaller later, thus making the later generated offspring closer to their predecessors, as opposed to randomly generated ones.

3.5.2.3 Boundary Mutation (floating point only)

Boundary Mutation is a variant of the uniform mutation, in which each newly generated allele is a random value contained in the domain of the respective allele. As opposed to the uniform mutation, the newly generated allele is either the upper bound or the lower bound of the domain, with equal probability:

Z_i = UB_i   if a > 0.5
Z_i = LB_i   if a ≤ 0.5

where the parameters have the same meaning as in non-uniform mutation. This mutation operator is a perfect fit for optimization problems where the optimal solution lies near the boundary of the constrained search space. Combining this operator with arithmetic crossover will improve the "contraction effect" of the generated population. Although this operator is good for these types of problems, it presents great disadvantages for the others.
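The Δ function and the two mutation variants above can be sketched in Swift. The parameter names follow the formulas; r and a are drawn fresh on each call, which is an assumption of this sketch:

```swift
import Foundation

// The annealing function Δ(t, x) = x * (1 - r^((1 - t/T)^b)), which
// shrinks the mutation step as generation t approaches the maximum T.
func delta(t: Int, T: Int, x: Double, b: Double) -> Double {
    let r = Double.random(in: 0...1)
    return x * (1 - pow(r, pow(1 - Double(t) / Double(T), b)))
}

// Non-uniform mutation of allele xi within [lb, ub] at generation t.
func nonUniformMutate(_ xi: Double, lb: Double, ub: Double,
                      t: Int, T: Int, b: Double) -> Double {
    if Double.random(in: 0...1) > 0.5 {
        return xi + delta(t: t, T: T, x: ub - xi, b: b)
    } else {
        return xi - delta(t: t, T: T, x: xi - lb, b: b)
    }
}

// Boundary mutation: jump to either bound with equal probability.
func boundaryMutate(lb: Double, ub: Double) -> Double {
    return Double.random(in: 0...1) > 0.5 ? ub : lb
}
```

Because Δ(t, x) lies in [0, x], the non-uniform offspring always respects the variable's bounds, exactly as the text argues.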

3.5.2.4 Creep Mutation (floating point only)
Creep mutation was introduced by Davis [1]. The allele values which are to be mutated are incremented or decremented by the creep fraction, a fraction of the valid range of that variable's value. The selected variable x_i is mutated according to the following formula:

z_i = min(iMax, max(iMin, x_i + r × s × (iMax − iMin)))

where iMax and iMin are the upper bound and the lower bound of the variable's domain respectively; r is a random number in the range [−1, 1]; s is the creep factor, a user-defined number belonging to the interval [0, 1]. It is equally likely that the allele value will be incremented or decremented, and the value is adjusted to the boundary value if the newly generated allele falls out of the valid range of the allele.

3.6 Final Results

A simple Genetic Algorithm exhibits the following features: finite population, bit representation, one-point crossover, bit-flip mutation and roulette wheel selection. The simple GA is the most widely employed form of GA. In particular, most of the scarce theoretical formalizations of GAs that are available in the literature focus on simple GAs. For this simple GA there are three traditional termination conditions that have been used throughout the years:

– An upper limit on the number of generations is reached.
– An upper limit on the number of evaluations of the fitness function is reached, or
– The chance of achieving significant changes in the next generations is excessively low.

In comparison to the third option, the first one requires some knowledge about the problem in order to estimate a reasonable maximum search length. For the third option, two variants are presented: genotypical termination criteria and phenotypical termination criteria. The former end when the population reaches certain convergence levels with respect to the chromosomes in the population; in short, the number of genes that have converged to a certain value of the allele is checked. The convergence or divergence of a gene to a certain allele is set by the GA designer, thus giving a threshold to be reached depending on the problem. Unlike the genotypical approach, phenotypical termination criteria measure the progress achieved by the algorithm in the last n generations, where n is a value preset by the GA designer. When this measurement, which may be expressed in terms of the average fitness value of the population, falls below a certain limit ε, it is said that the algorithm has converged and the evolution is immediately interrupted. Whatever the problem, it is nowadays considered inappropriate to employ sets of fixed values for the parameters of an evolutionary algorithm. Therefore, choosing a termination condition based only on the number of the current iteration is unadvisable.
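Combining the first and third termination conditions can be sketched in Swift. The `evolve` and `bestFitness` closures are hypothetical hooks standing in for one GA iteration and its evaluation; the names and the progress-window scheme are illustrative:

```swift
// Run until maxGenerations is reached, or until the best fitness has
// improved by less than epsilon over the last `window` generations
// (a phenotypical criterion). Returns the generation at which we stopped.
func run(maxGenerations: Int, window: Int, epsilon: Double,
         bestFitness: () -> Double, evolve: () -> Void) -> Int {
    var history: [Double] = [bestFitness()]
    for generation in 1...maxGenerations {
        evolve()
        history.append(bestFitness())
        if history.count > window {
            let progress = history.last! - history[history.count - 1 - window]
            if progress < epsilon { return generation }
        }
    }
    return maxGenerations
}
```

With this structure the generation cap only acts as a safety net, which matches the advice above against relying solely on the iteration count.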

4. Smart Fit iPhone application

Figure 6. UML use case diagram for the application

The Smart Fit iPhone application has three main features. The workout generation feature allows the user to generate new workouts, view workouts and remove existing ones. The second important feature is a meal plan generator that allows the user to generate a meal plan to follow. The final feature is a

camera scanner which will allow the user to scan an object and in return they will get its nutrition value and key benefits of it. 4.1 The genetic algorithm The genetic algorithm is the heart of the application, allowing both workout and food plan generation with very good results. The problem is encoded using Integer representation where every allele in a chromosome represent an exercise. The population only has a list of individuals and a method to find the fittest one. 4.1.1 Individual Each individual contains the data for a possible workout / food plan. It has a method to randomize its values that is used to create a initial individual for an initial population.
Figure 7. Initial population generation algorithm

This randomization will ensure that every workout generation will start from a random point.

func createInitialPopulation() -> Population {
    var population = Population()
    for _ in 1...POPULATION_SIZE {
        var individual = Individual()
        individual.randomize()
        individual.calculateFitness()
        population.individuals.append(individual)
    }
    return population
}
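The randomize() method itself is not listed; a minimal, self-contained sketch, assuming the individual stores its genes in a data array and that AVAILABLE_GENES holds the candidate exercise or food identifiers (the concrete values here are illustrative, not the application's data), could look like this:

```swift
// Illustrative gene pool and chromosome length; the real values come
// from the exercise / food repositories of the application.
let AVAILABLE_GENES = [101, 102, 103, 104, 105]
let DNA_LENGTH = 8

struct Individual {
    var data = [Int](repeating: 0, count: DNA_LENGTH)

    // Fill every gene slot with a random element of the gene pool
    // (randomElement() is the cross-platform equivalent of the
    // arc4random_uniform pattern used in the other listings).
    mutating func randomize() {
        for index in 0..<DNA_LENGTH {
            data[index] = AVAILABLE_GENES.randomElement()!
        }
    }
}
```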

4.1.1.1 Crossover
In the structure of the individual we will also find a single point crossover method. In our implementation the end-point drawback will rarely be encountered, because the workout contains mixed exercises for different muscle groups that are ordered at the end. The implementation of the crossover method in the Swift programming language is as follows:
Figure 8. Crossover method for the genetic algorithm

Each individual can be crossed with another one, first finding a crossIndex, a random number from 0 to DNA_LENGTH, where DNA_LENGTH is the workout length in the case of workout generation and the maximum number of food items in the case of the food plan. After the crossIndex is selected, the two segments are combined and a new Individual is born.

func cross(_ otherIndividual: Individual) -> Individual {
    var individual = Individual()
    let crossIndex = Int(arc4random_uniform(UInt32(DNA_LENGTH)))
    // Genes before the crossover point come from this parent...
    for i in 0..<crossIndex {
        individual.data[i] = self.data[i]
    }
    // ...and the remaining genes come from the other parent.
    for i in crossIndex..<DNA_LENGTH {
        individual.data[i] = otherIndividual.data[i]
    }
    return individual
}

4.1.1.2 Mutation

Each individual also includes a mutation method that is implemented using the uniform mutation algorithm.

Figure 9. Mutation method for the algorithm

For each allele a random number belonging to the interval [0, 1] is generated, representing the probability of that allele being mutated. If this probability is less than or equal to the mutation chance, the allele, which represents an exercise or a food object, is replaced with another one. Here AVAILABLE_GENES are the available exercises or the available food objects, and MUTATION_CHANCE is a value that best fits the respective type of generation. This value is set after running some tests and comparing the results, which we will get to in the respective parts for the workout and food generation features.

mutating func mutate() {
    for index in 0..<DNA_LENGTH {
        // Draw a uniform random number in [0, 1] for each allele.
        if Double(Float(arc4random()) / Float(UINT32_MAX)) <= MUTATION_CHANCE {
            self.data[index] = AVAILABLE_GENES[Int(arc4random_uniform(
                UInt32(AVAILABLE_GENES.count)))]
        }
    }
}

4.1.2 Population

Each population in the algorithm is composed of a list of individuals and a method to obtain the fittest individual in that population.

Figure 10. The structure of an individual

struct Population {
    var individuals = [Individual]()
    var fittestIndividual: Individual?

    mutating func calculateFittestIndividual() {
        // Lower fitness is better, so the fittest individual is the
        // one with the lowest fitness value.
        fittestIndividual = individuals.sorted(by: { (first, second) -> Bool in
            first.fitness < second.fitness
        }).first!
    }
}

The fittest individual is the one whose fitness value is the lowest. In both the workout and the food plan generation, the solution is the individual with the lowest fitness value.

4.1.2.1 Tournament selection

The selection method for this algorithm is the tournament method. A parent is selected as the fittest individual from a randomly picked set of candidates from the population.

Figure 11. Tournament selection method for the algorithm

After two individuals are selected as parents from the population, crossover is applied to form an offspring, mutation is then applied, and the offspring is added to a new population. To make the algorithm more efficient we also save the fittest individual from the last generated population. This algorithm is repeated MAX_GENERATIONS_COUNT times, where MAX_GENERATIONS_COUNT is a number determined by running tests on the algorithm and comparing the results; these tests are detailed in the workout generation and food plan generation features described in the following pages.

func selectParentTournament(_ population: Population) -> Individual {
    var tournamentPopulation = Population()
    // Pick TOURNAMENT_SIZE random candidates from the population...
    for _ in 1...TOURNAMENT_SIZE {
        tournamentPopulation.individuals.append(
            population.individuals[Int(
                arc4random_uniform(UInt32(
                    population.individuals.count)))])
    }
    // ...and return the fittest among them.
    tournamentPopulation.calculateFittestIndividual()
    return tournamentPopulation.fittestIndividual!
}

Figure 12. The genetic algorithm with all the methods added

for generation in 1...MAX_GENERATIONS_COUNT {
    var newPopulation = Population()
    // Elitism: carry the fittest individual over unchanged.
    if SAVE_FITTEST {
        newPopulation.individuals.append(population.fittestIndividual!)
    }
    for _ in newPopulation.individuals.count..<POPULATION_SIZE {
        let individual1 = selectParentTournament(population)
        let individual2 = selectParentTournament(population)
        var childrenIndividual = individual1.cross(individual2)
        childrenIndividual.mutate()
        childrenIndividual.calculateFitness()
        newPopulation.individuals.append(childrenIndividual)
    }
    population = newPopulation
}

4.2 My Workouts Feature

This is the first feature of our application and also the landing page when the app starts; the other features are on separate pages, with the ability to navigate between them using a tab controller. On this page the user will be presented with his list of generated workouts, with the ability to delete a selected workout from the list and to generate a new workout. When generating a new workout the user will be able to select the type of the workout, the day on which he would like to use this workout, and two groups of muscles he would like to train. After this he will select how many exercises he would like to have in this workout, and he must also give the workout a name. The number of exercises in this workout will represent the DNA_LENGTH in the genetic algorithm, and the fitness will be computed using the following algorithm. First of all, the exercises that are in that specific workout are added to a helper repository that can determine which exercises are related to one another. The fitness is assigned an initial value that is computed using the grades of the exercises:

Figure 13. Part of the fitness evaluation

var fitness = 0.0
// Exercises are graded up to 10; a better grade adds less to the
// fitness value (lower fitness is better).
for exercise in exercises {
    fitness += (10 - exercise.getGrade())
}

Then, for each exercise, we check whether there are related exercises in the workout using the helper repository, and increment the fitness value as follows:

Figure 14. Evaluation of the fitness based on related exercises

// Penalise every pair of related exercises in the workout.
for i in 0..<exercises.count - 1 {
    for j in (i + 1)..<exercises.count {
        if repo.areRelated(exercises[i], exercises[j]) {
            fitness += 5
        }
    }
}

In the final part of the fitness computation we check that the exercises are not mixed but ordered by muscle group, that the number of exercises is well balanced between the groups, and that the workout respects the number of exercises, the DNA_LENGTH, set by the user.

4.3 My Food Plan Feature

This is the second feature of the application. To get to this page the user has to tap the "My Food Plan" tab in the bottom menu. On this page the user is presented with the food plans he has generated, with the ability to delete a food plan or generate a new one. When generating a new food plan the user will be able to select the type of the food plan and will then be asked to give the desired weight to achieve. The fitness evaluation will take into account the type of plan he wants and will compute, based on

that type, using the desired body weight, target values for how much protein, carbohydrates, and fat the plan should include. The actual values in the generated individual are then compared to these targets and the fitness is obtained:
Figure 15. Evaluation of the fitness based on protein, carbohydrate and fat values

var fitness = 0.0
var proteinToAchieve = 0.0
var carbohydratesToAchieve = 0.0
var fatToAchieve = 0.0
// Macro targets are derived from the desired body weight and the plan type.
if type == "gain weight" {
    proteinToAchieve = desiredBodyWeight * 2.4
    carbohydratesToAchieve = desiredBodyWeight * 4
    fatToAchieve = desiredBodyWeight * 1
} else {
    proteinToAchieve = desiredBodyWeight * 0.9
    carbohydratesToAchieve = desiredBodyWeight * 2.9
    fatToAchieve = desiredBodyWeight * 0.7
}
var proteinInFoodPlan = 0.0
var carbohydratesInFoodPlan = 0.0
var fatInFoodPlan = 0.0
for food in foodList {
    proteinInFoodPlan += food.protein
    carbohydratesInFoodPlan += food.carbohydrates
    fatInFoodPlan += food.fat
}
// The fitness is the weighted distance from the macro targets.
fitness += (abs(proteinInFoodPlan - proteinToAchieve) * 3)
fitness += (abs(carbohydratesInFoodPlan - carbohydratesToAchieve) * 5)
fitness += (abs(fatInFoodPlan - fatToAchieve) * 8)
return fitness

4.4 What's This? Feature

This is the third and final feature of the application, where the user is allowed to scan an object using the iPhone's camera. When scanning the object the user is shown what that object might be, and after pressing the "What's This?" button, information about the currently scanned food item appears in the form of a popup.

This feature is implemented using Apple's Core ML framework, which allows easy integration of trained machine learning models. For this feature we use a model trained to recognize objects; we then need to apply a filter which searches for that object in a list of food items, for example a list of fruits or vegetables.
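The filtering step itself is not shown; a minimal sketch, assuming a hypothetical list of known food items (the list contents and the helper function are illustrative, not part of the thesis code), might be:

```swift
import Foundation

// Known food items the scanner should react to (illustrative).
let knownFoodItems: Set<String> = ["banana", "apple", "broccoli", "carrot"]

// Core ML classifiers often return labels such as "Granny Smith, apple";
// each comma-separated component is matched against the food list.
func foodItem(forRecognizedLabel label: String) -> String? {
    let components = label
        .lowercased()
        .split(separator: ",")
        .map { $0.trimmingCharacters(in: .whitespaces) }
    return components.first { knownFoodItems.contains($0) }
}
```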
Figure 16. Core ML requests handled on each frame captured by the camera

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        else { return }

    guard let model = try? VNCoreMLModel(for: VGG16().model)
        else { return }

    let request = VNCoreMLRequest(model: model) { (finishedRequest, err) in

        guard let results = finishedRequest.results as?
            [VNClassificationObservation] else { return }

        guard let firstObservation = results.first
            else { return }

        print(firstObservation.identifier,
              firstObservation.confidence)

        DispatchQueue.main.async {
            captureLabel.text = String(firstObservation.identifier)
        }
    }

    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}

The method captureOutput, shown in the figure above, is called for each frame that the camera captures. A pixel buffer is created from the sampleBuffer provided by the iPhone's camera, and a request is then performed on the provided model. When the request finishes, the result is obtained as an object of type VNClassificationObservation, where the identifier member represents the name of the scanned object and the confidence shows how good the prediction is, as a value belonging to the interval (0, 1]. After the camera

starts scanning, the user can get a popup with detailed information about the scanned object if it is a food item, such as the quantity of protein, carbohydrates, or different types of vitamins, provided in a short description of the object.

4.5 Statistics and Results

To determine the best population size, a test is run with the maximum generations count set to 30 and population sizes ranging from 10 to 100.
Figure 17. The X axis represents the population size, the Y axis the fitness value of the found solution

In order to determine the best number of generations to set as the limit, a test is run with the population size set to 100 and the maximum number of generations ranging from 10 to 100. The results are shown in the following figure.

Figure 18. The X axis represents the maximum number of generations, the Y axis the fitness value of the found solution

From the graphs presented above (Figure 17 and Figure 18) we can see that the best solutions are obtained for a population size greater than 70 and a maximum generations count of 50-60. This result is depicted in the following figure.
Figure 19. The X axis represents the size of the population, the Y axis the maximum number of generations, and the bars the average fitness value

The results in Figure 19 are obtained by running the algorithm 50 times for several combinations of population size and generations count, and averaging the fitness value of the found solution over these runs.
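That averaging procedure can be sketched as follows; here runGA is a stand-in for one full run of the genetic algorithm returning the fitness of the best solution found, and its signature is an assumption for illustration, not the thesis code:

```swift
// Run the GA `runs` times for one (populationSize, maxGenerations) pair
// and average the fitness of the best found solutions, as done for Figure 19.
func averageFitness(populationSize: Int,
                    maxGenerations: Int,
                    runs: Int = 50,
                    runGA: (Int, Int) -> Double) -> Double {
    var total = 0.0
    for _ in 1...runs {
        total += runGA(populationSize, maxGenerations)
    }
    return total / Double(runs)
}
```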
