Conversation
Force-pushed from a6a4a00 to 08ad29c
    else:
        self.number_of_worse_iterations = 0
        self.best_elitist_fitness = elitist_fitness
        self.operator_ = self.operator
It's needed because we set the mutation operator at runtime (I think). If we leave it out, self.operator_ has no matching_type and therefore rules aren't mutating at all.
If this happens, I suspect the initial setting of the operator is somewhat broken. Other than the reset each cycle, this should not be needed at all. From a design perspective, we set operator_ to some RuleMutation in init, change it after some criterion evaluates to true, and reset it at the beginning of a new cycle. However, the current implementation sets it much more often than that.
At the very least we should clone it, shouldn't we?
Yes, there were multiple issues:
- _validate_matching_type needs to be called before _validate_rule_generation
- We need to specifically set matching_type when we create the model
With these points changed, it works without assigning the operator
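A minimal sketch of the fix being described, assuming hypothetical method bodies (only the validator names `_validate_matching_type` and `_validate_rule_generation` come from the comment above; the model class and its logic are illustrative):

```python
class Model:
    """Hypothetical model sketch illustrating the validation-order fix."""

    def __init__(self, matching_type=None, rule_generation=None):
        # matching_type must be set explicitly when the model is created
        self.matching_type = matching_type
        self.rule_generation = rule_generation

    def _validate_matching_type(self):
        # Hypothetical body: without an explicit matching_type, derived
        # operators end up with no matching function and never mutate.
        if self.matching_type is None:
            raise ValueError("matching_type must be set when creating the model")

    def _validate_rule_generation(self):
        # Hypothetical body: rule-generation operators inherit matching_type,
        # so this must run after _validate_matching_type.
        self.rule_generation = self.rule_generation or "default"

    def fit(self):
        # The fix: validate matching_type *before* rule generation
        self._validate_matching_type()
        self._validate_rule_generation()
        return self
```

With this ordering, the wrapper no longer needs to reassign the operator each iteration just to propagate the matching type.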
class AdaptiveMutation(RuleMutation):
    """Start off by using a given operator until the fitness decreased
    more than self.number_of_worse_iterations times. Then change the mutation operator
This should only work with & or , replacement strategies, shouldn't it? So we should probably note that here. Or maybe even better, don't require the fitness to have decreased but rather not to have increased? That way this would work for + ES as well (albeit likely with a different tolerance required)
    """
    def __init__(self, matching_type: MatchingFunction = None,
                 sigma: Union[float, np.ndarray] = 0.1,
Does this sigma have an effect, or is it just a dummy? We could just pass the operator's sigma to super, couldn't we?
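The mechanics under discussion can be sketched as a small standalone wrapper. This is not the actual implementation: `patience`, `fallback`, and `update` are hypothetical names, and real operators would be RuleMutation instances rather than placeholders. It also clones instead of aliasing, per the review comment above:

```python
import copy

class AdaptiveMutationSketch:
    """Hypothetical sketch: use the given operator until the elitist
    fitness decreased more than `patience` times in a row, then switch
    to a fallback (e.g. Normal) mutation operator."""

    def __init__(self, operator, fallback, patience=3):
        self.operator = operator        # e.g. HalfnormIncrease
        self.fallback = fallback        # e.g. Normal
        self.patience = patience
        self.number_of_worse_iterations = 0
        self.best_elitist_fitness = float("-inf")
        # clone instead of aliasing, so mutating operator_ never
        # touches the configured template operator
        self.operator_ = copy.deepcopy(operator)

    def update(self, elitist_fitness):
        if elitist_fitness < self.best_elitist_fitness:
            self.number_of_worse_iterations += 1
            if self.number_of_worse_iterations > self.patience:
                # peak passed: fall back to the secondary operator
                self.operator_ = copy.deepcopy(self.fallback)
        else:
            # fitness improved (or matched): reset the counter and
            # restore the original operator for the new cycle
            self.number_of_worse_iterations = 0
            self.best_elitist_fitness = elitist_fitness
            self.operator_ = copy.deepcopy(self.operator)
        return self.operator_
```

Note that, as raised above, with a strict `<` comparison this only detects decline under comma-style replacement; a plus-style ES would need a tolerance on "not increased" instead.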
        self.operator_ = self.operator

    def unordered_bound(self, rule: Rule, random_state: RandomState):
        raise TypeError("AdaptiveMutation has no implementation for unordered_bound")
Why not use NotImplementedError if the message conveys exactly that? But I think we should give more explanation here: it is not unimplemented because we didn't get to it (yet), but because it makes no sense to implement, as this is only a wrapper.
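Following the suggestion above, the raise could look like this (a sketch with the class reduced to the one method; the exact wording is illustrative):

```python
class AdaptiveMutation:
    # Reduced to the single method for illustration
    def unordered_bound(self, rule, random_state):
        # NotImplementedError conveys intent better than TypeError here:
        # the wrapper only delegates to ordered-bound operators, so this
        # method is unsupported by design, not missing by accident.
        raise NotImplementedError(
            "AdaptiveMutation is only a wrapper around ordered-bound "
            "mutation operators; unordered_bound is unsupported by design"
        )
```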
Add try catch to go into default mode if not OrderedBound |
Force-pushed from 6f95342 to bbfa81e
Using HalfnormIncrease or UniformIncrease for mutation while using ES was causing issues with discovering new rules. Once a peak was reached (in terms of the fitness of a rule), the following rules' fitness quickly declined with each iteration.

This PR introduces an adaptive component to the mutation of ES with either HalfnormIncrease or UniformIncrease. Once a peak is reached and the rules' fitness begins to decrease, the mutation type switches from the original HalfnormIncrease or UniformIncrease to Normal.