Source code of PyGAD, a Python 3 library for building genetic algorithms and training machine learning models (Keras & PyTorch).
BSD-3-Clause License
Published by ahmedfgad over 3 years ago
- The None value now works with the crossover_type and mutation_type parameters: https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/40
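When crossover_type or mutation_type is None, the corresponding step is simply skipped. A minimal generic sketch of that idea (not PyGAD's internals; the operator and function names here are illustrative):

```python
import random

def evolve_step(population, crossover, mutation):
    """Apply optional GA operators; a None operator skips that step."""
    offspring = list(population)
    if crossover is not None:
        offspring = crossover(offspring)
    if mutation is not None:
        offspring = mutation(offspring)
    return offspring

def flip_mutation(pop):
    # Flip one random bit in each binary solution.
    out = []
    for sol in pop:
        sol = list(sol)
        i = random.randrange(len(sol))
        sol[i] = 1 - sol[i]
        out.append(sol)
    return out

pop = [[0, 1, 0], [1, 1, 0]]
# With both operators set to None, the population passes through unchanged.
unchanged = evolve_step(pop, crossover=None, mutation=None)
# With crossover=None, only the mutation step runs.
mutated = evolve_step(pop, crossover=None, mutation=flip_mutation)
```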
- The gene_type parameter now accepts a list/tuple/numpy.ndarray of numeric data types for the genes. This helps control the data type of each individual gene. Previously, gene_type could only be assigned a single data type that was applied to all genes.
- A bool attribute named gene_type_single is added to the pygad.GA class. It is True when a single data type is assigned to the gene_type parameter. When gene_type is assigned a list/tuple/numpy.ndarray, gene_type_single is set to False.
- The mutation_by_replacement flag now has no effect if gene_space exists, except for genes whose value space is None. For example, for gene_space=[None, [5, 6]], the mutation_by_replacement flag affects only the first gene, which has None as its value space.
- If a gene has None in the gene_space parameter (e.g. gene_space=[None, [5, 6]]), its value is randomly generated for each solution rather than being generated once for all solutions. Previously, a gene with a None value in gene_space was the same across all solutions.
Published by ahmedfgad over 3 years ago
Release Date: 12 March 2021
- A bool parameter called allow_duplicate_genes is supported. If True (the default), a solution/chromosome may have duplicate gene values. If False, each gene will have a unique value within its solution. Check the Prevent Duplicates in Gene Values section for more details.
- The last_generation_fitness attribute is updated at the end of each generation, not at the beginning. This keeps the fitness values of the most up-to-date population assigned to last_generation_fitness.
Published by ahmedfgad over 3 years ago
Release Date: 20 February 2021
- Several attributes expose the last generation's results: last_generation_fitness holds the fitness values of the solutions in the last generation, last_generation_parents holds the parents selected from the last generation, last_generation_offspring_crossover holds the offspring generated after applying crossover in the last generation, and last_generation_offspring_mutation holds the offspring generated after applying mutation in the last generation. You can access these attributes inside the on_generation() callback, for example.
- A bug fix when the initial_population parameter is used. The bug occurred due to a mismatch between the data type of the array assigned to initial_population and the gene type in the gene_type attribute. Assume the array assigned to the initial_population parameter is ((1, 1), (3, 3), (5, 5), (7, 7)), which has type int. When gene_type is set to float, the genes will not be float but are cast to int, because the defined array has int type. The bug is fixed by forcing the array assigned to initial_population to have the data type in the gene_type attribute. Check the issue at GitHub: https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/27
Thanks to Marios Giouvanakis, a PhD candidate in Electrical & Computer Engineering, Aristotle University of Thessaloniki (Αριστοτέλειο Πανεπιστήμιο Θεσσαλονίκης), Greece, for emailing me about these issues.
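The dtype mismatch described above can be reproduced with plain NumPy: an integer-typed array silently truncates any float written into it, which is why casting the population array to the gene_type dtype fixes the bug. A minimal sketch, not PyGAD's actual code:

```python
import numpy as np

# An initial population defined with ints gets an integer dtype.
initial_population = np.array(((1, 1), (3, 3), (5, 5), (7, 7)))
assert initial_population.dtype.kind == "i"

# Writing a float gene into the int array silently truncates it,
# which is the bug: the intended float gene becomes an int.
initial_population[0, 0] = 2.9
truncated = initial_population[0, 0]          # 2, not 2.9

# The fix: cast the population to the requested gene type first.
population = initial_population.astype(float)
population[0, 1] = 2.9
kept = population[0, 1]                       # 2.9 is preserved
```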
Published by ahmedfgad over 3 years ago
Release Date: 16 February 2021
- In the gene_space argument, the user can use a dictionary to specify the lower and upper limits of a gene. This dictionary must have exactly 2 items, with keys low and high, to specify the low and high limits of the gene, respectively. This way, PyGAD takes care of not exceeding the value limits of the gene. For a problem with 2 genes, using gene_space=[{'low': 1, 'high': 5}, {'low': 0.2, 'high': 0.81}] means the accepted values of the first gene range from 1 (inclusive) to 5 (exclusive), while the second gene takes values between 0.2 (inclusive) and 0.81 (exclusive). For more information, check the Limit the Gene Value Range section of the documentation.
- The plot_result() method returns the figure so that the user can save it.
- A bug fix: when using a discrete gene_space like [0, 1], it was possible that a gene's value did not change after mutation, because if the current value was 0, the randomly selected value could also be 0. Now it is verified that the new value differs from the current one, so if the current value is 0, the value after mutation will be 1.
Published by ahmedfgad almost 4 years ago
A bug fix when save_best_solutions=True. Refer to this issue for more information: https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/25
Published by ahmedfgad almost 4 years ago
Changes in PyGAD 2.10.1
- In the gene_space parameter, any None value (regardless of its index or axis) is replaced by a randomly generated number based on the 3 parameters init_range_low, init_range_high, and gene_type. So, the None values in [..., None, ...] or [..., [..., None, ...], ...] are replaced with random values. This gives more freedom in building the space of values for the genes.
- All values in the gene_space parameter are cast to the type specified in the gene_type parameter.
- The numpy.uint data type is supported for the parameters that accept integer values.
- In the pygad.kerasga module, the model_weights_as_vector() function uses the trainable attribute of the model's layers to return only the trainable weights in the network. Only layers with their trainable attribute set to True (trainable=True, the default) have their weights evolved. All non-trainable layers (trainable=False) will not be evolved. Thanks to Prof. Tamer A. Farrag for pointing that out on GitHub.
Published by ahmedfgad almost 4 years ago
- Support of a new module, pygad.torchga, to train PyTorch models using PyGAD. Check its documentation.
- After the run() method completes or exits, the fitness value of the best solution in the current population is appended to the best_solutions_fitness list attribute. Note that the fitness value of the best solution in the initial population is already saved at the beginning of the list, so the fitness of the best solution is saved both before the genetic algorithm starts and after it ends.
- If parent_selection_type is set to sss (steady-state selection), a warning message is printed if the keep_parents parameter is set to 0.
- mutation_percent_genes now defaults to the string "default" rather than the integer 10. This change helps to know whether the user explicitly passed a value to the mutation_percent_genes parameter or left it at its default. The "default" value is later translated into the integer 10.
- The mutation_percent_genes parameter no longer accepts the value 0. It must be >0 and <=100.
- The warnings module is used to show warning messages rather than just the print() function.
- A new bool parameter called suppress_warnings is added to the constructor of the pygad.GA class. It allows the user to control whether the warning messages are printed. It defaults to False, which means the messages are printed.
- A new method, adaptive_mutation_population_fitness(), is created to calculate the average fitness value used in adaptive mutation to filter the solutions.
- The best_solution() method accepts a new optional parameter called pop_fitness. It accepts a list of the fitness values of the solutions in the population. If None, the cal_pop_fitness() method is called to calculate the fitness values of the population.
Published by ahmedfgad almost 4 years ago
Changes in PyGAD 2.9.0 (06 December 2020):
- The fitness value of the best solution in each generation is saved into the best_solutions_fitness attribute.
- A new parameter named save_best_solutions is added. It defaults to False. When True, the best solution after each generation is saved into an attribute named best_solutions. If False, no solutions are saved and the best_solutions attribute will be empty.
- The crossover_type parameter accepts a new value, "scattered", for scattered crossover.
- NumPy arrays are supported by the gene_space parameter.
- The following parameters (gene_type, crossover_probability, mutation_probability, delay_after_gen) can be assigned a numeric value of any of these data types: int, float, numpy.int, numpy.int8, numpy.int16, numpy.int32, numpy.int64, numpy.float, numpy.float16, numpy.float32, or numpy.float64.
Published by ahmedfgad about 4 years ago
- A bug fix when the crossover_probability parameter is used. Thanks to Eng. Hamada Kassem, Research and Teaching Assistant, Construction Engineering and Management, Faculty of Engineering, Alexandria University, Egypt.
Published by ahmedfgad about 4 years ago
Support of a new module named pygad.kerasga to train Keras models using the genetic algorithm.
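The core idea behind pygad.kerasga is to flatten all model weights into one 1-D vector (the GA chromosome) and restore the per-layer arrays after evolution. A generic NumPy sketch of that round trip, independent of Keras (the function names here are illustrative, not PyGAD's API):

```python
import numpy as np

def weights_as_vector(weights):
    # Flatten a list of per-layer weight arrays into one 1-D chromosome.
    return np.concatenate([w.ravel() for w in weights])

def vector_as_weights(vector, shapes):
    # Restore the original per-layer arrays from the flat vector.
    weights, start = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        weights.append(vector[start:start + size].reshape(shape))
        start += size
    return weights

# Two fake "layers": a 2x3 kernel and a 3-element bias.
layers = [np.arange(6, dtype=float).reshape(2, 3), np.ones(3)]
vec = weights_as_vector(layers)                       # 9 genes in total
restored = vector_as_weights(vec, [w.shape for w in layers])
```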
Published by ahmedfgad about 4 years ago
Bug fix to support building and training regression neural networks with multiple outputs.
Published by ahmedfgad about 4 years ago
A bug fix when the problem_type argument is set to regression.
Published by ahmedfgad about 4 years ago
Changes in PyGAD 2.7.0 (11 September 2020):
- The learning_rate parameter in the pygad.nn.train() function defaults to 0.01.
- A new parameter named problem_type is added to both the pygad.nn.train() and pygad.nn.predict() functions. Its value can be either classification or regression to define the problem type. It defaults to classification.
- The activation function of a layer can be set to the string "None" to indicate that there is no activation function at that layer. As a result, the supported values for the activation function are "sigmoid", "relu", "softmax", and "None".

To build a regression network using the pygad.nn module, just do the following:
- Set the problem_type parameter in the pygad.nn.train() and pygad.nn.predict() functions to the string "regression".
- Set the activation function of the output layer to the string "None". This sets no limits on the range of the outputs, which can be anywhere from -infinity to +infinity. If you are sure that all outputs will be nonnegative values, use the ReLU function instead.

Check the documentation of the pygad.nn module for an example that builds a neural network for regression. The regression example is also available at this GitHub project: https://github.com/ahmedfgad/NumPyANN
To build and train a regression network using the pygad.gann module, do the following:
- Set the problem_type parameter in the pygad.nn.train() and pygad.nn.predict() functions to the string "regression".
- Set the output_activation parameter in the constructor of the pygad.gann.GANN class to "None".

Check the documentation of the pygad.gann module for an example that builds and trains a neural network for regression. The regression example is also available at this GitHub project: https://github.com/ahmedfgad/NeuralGenetic

To build a classification network, either ignore the problem_type parameter or set it to "classification" (the default value). In this case, the activation function of the last layer can be set to any type (e.g. softmax).
Published by ahmedfgad about 4 years ago
Release Date: 6 August 2020
- A bug fix in assigning a value to the initial_population parameter.
- A new parameter named gene_type is added to control the gene type. It can be either int or float. It has an effect only when the gene_space parameter is None.
- New callback parameters are supported: on_start, on_fitness, on_parents, on_crossover, on_mutation, on_generation, and on_stop.
Published by ahmedfgad about 4 years ago
- 2 new optional parameters are added to the pygad.GA class constructor: crossover_probability and mutation_probability.
- For each parent, a random value is generated; if it is less than or equal to the crossover_probability parameter, the parent is selected for the crossover operation.
- Similarly, for each gene a random value is generated; if it is less than or equal to mutation_probability, the gene is selected for mutation.
- A new optional parameter named linewidth is added to the plot_result() method to specify the width of the curve in the plot. It defaults to 3.0.
- A new optional parameter named gene_space is added to the pygad.GA class constructor. It is used to specify the possible values for each gene in case the user wants to restrict the gene values. It is useful if the gene space is restricted to a certain range or to discrete values.

Assuming that all genes share the same global space, which includes the values 0.3, 5.2, -4, and 8, those values can be assigned to the gene_space parameter as a list, tuple, or range. Here is a list assigned to this parameter. By doing that, the gene values are restricted to those assigned to the gene_space parameter.
gene_space = [0.3, 5.2, -4, 8]
If some genes have different spaces, then gene_space should accept a nested list or tuple. In this case, its elements could be:

- None: a gene with its space set to None is initialized randomly from the range specified by the 2 parameters init_range_low and init_range_high. For mutation, its value is mutated based on a random value from the range specified by the 2 parameters random_mutation_min_val and random_mutation_max_val. If all elements in the gene_space parameter are None, the parameter will not have any effect.

Assume that a chromosome has 2 genes and each gene has a different value space. Then gene_space can be assigned a nested list/tuple where each element determines the space of one gene. According to the next code, the space of the first gene is [0.4, -5], which has 2 values, and the space of the second gene is [0.5, -3.2, 8.2, -9], which has 4 values.
gene_space = [[0.4, -5], [0.5, -3.2, 8.2, -9]]
For a 2-gene chromosome, if the first gene's space is restricted to the discrete values from 0 to 4 and the second gene's space to the values from 10 to 19, this can be specified according to the next code.
gene_space = [range(5), range(10, 20)]
If the user did not assign the initial population to the initial_population
parameter, the initial population is created randomly based on the gene_space
parameter. Moreover, the mutation is applied based on this parameter.
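The initialization rules above can be sketched with stdlib code: each gene draws from its own space, with None falling back to the init range and a dict (supported in later releases) giving low/high bounds. A generic sketch of the semantics, not PyGAD's implementation:

```python
import random

def init_gene(space, init_low=-4.0, init_high=4.0):
    """Draw one gene value according to its gene_space entry."""
    if space is None:
        # No restriction: sample from the init range.
        return random.uniform(init_low, init_high)
    if isinstance(space, dict):
        # {'low': ..., 'high': ...}: sample within the given bounds.
        return random.uniform(space["low"], space["high"])
    # list/tuple/range of discrete values: pick one of them.
    return random.choice(list(space))

gene_space = [range(5), [0.4, -5], {"low": 0.2, "high": 0.81}, None]
solution = [init_gene(space) for space in gene_space]
```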
Published by ahmedfgad over 4 years ago
Changes in PyGAD 2.4.0:
- A new parameter named delay_after_gen is added. It accepts a non-negative number specifying the time in seconds to wait after a generation completes and before going to the next generation. It defaults to 0.0, which means no delay after a generation.
- The function passed to the callback_generation parameter of the pygad.GA class constructor can terminate the execution of the genetic algorithm by returning the string stop. This causes the run() method to stop.

One important use case for this feature is stopping the genetic algorithm when a condition is met before passing through all the generations. For example, the user may assign 100 to the num_generations parameter of the pygad.GA class constructor. If a condition is met at, say, generation 50 and the user wants to stop the execution without waiting for the remaining 50 generations, just make the function passed to the callback_generation parameter return the string stop.

Here is an example of a function to be passed to the callback_generation parameter which stops the execution once the fitness value 70 is reached. The value 70 might be the best possible fitness value; after it is reached, there is no need to pass through more generations because no further improvement is possible.
def func_generation(ga_instance):
    if ga_instance.best_solution()[1] >= 70:
        return "stop"
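The mechanism can be mimicked with a toy run loop: after every generation the callback is invoked, and a "stop" return value breaks out early. A generic sketch (not PyGAD's actual run() implementation; the toy fitness numbers are illustrative):

```python
def run(num_generations, callback_generation, step):
    """Toy GA loop: stop early if the callback returns "stop"."""
    completed = 0
    for _ in range(num_generations):
        step()                       # evolve one generation
        completed += 1
        if callback_generation() == "stop":
            break
    return completed

fitness = [0]                        # toy state: fitness grows by 10 per generation

def step():
    fitness[0] += 10

def callback():
    # Mirrors func_generation above: stop once fitness reaches 70.
    return "stop" if fitness[0] >= 70 else None

generations_run = run(100, callback, step)   # stops after 7 generations, not 100
```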
Published by ahmedfgad over 4 years ago
Changes in PyGAD 1.0.19 (4 May 2020):
- Raising a ValueError exception on passing incorrect values to the parameters.
- 2 new parameters are added (init_range_low and init_range_high) allowing the user to customize the range from which the gene values in the initial population are selected.
- The __code__ attribute of the passed fitness function is checked to ensure it has the right number of parameters.
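The __code__ check can be done with the function object's co_argcount. A minimal sketch of such a validation (the expected count of 2 matches the fitness_func(solution, solution_idx) signature of PyGAD versions of that era; the helper name is illustrative, not PyGAD's API):

```python
def validate_fitness_func(fitness_func, expected_args=2):
    """Raise ValueError unless the function takes the expected number of parameters."""
    argcount = fitness_func.__code__.co_argcount
    if argcount != expected_args:
        raise ValueError(
            f"The fitness function must accept {expected_args} parameters, "
            f"but it accepts {argcount}."
        )

def good_fitness(solution, solution_idx):
    return sum(solution)

def bad_fitness(solution):
    return sum(solution)

validate_fitness_func(good_fitness)   # passes silently

try:
    validate_fitness_func(bad_fitness)
    raised = False
except ValueError:
    raised = True                     # wrong parameter count is rejected
```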