Evolution
The main task of this part of the tutorial will be to let the algorithm for creating images evolve via user input. Evolution is not just a metaphor, but the actual name of what we will do. Evolution hinges on four principles:
- Individuals each have several offspring
- Some traits of offspring are inherited
- Some traits of offspring are new mutations
- Which offspring survive to reproduce depends on these traits
These apply equally to evolution by natural selection, as seen in nature, and to evolution by artificial selection, which is what we will be using today. So, let us go through these four principles, one by one, adding them to the program.
The product for today
Here's the link to see it full screen:
https://www.openprocessing.org/sketch/581716
Individuals each have several offspring
So far, we have only been looking at one image, which is simply not enough. Here, I will expand the program to make four images at once instead. This requires the canvas to be shrunk to half size, so that 2x2 images can be drawn onto the screen next to each other. Additionally, I will add some empty bars to separate the images, so they do not feel too crowded.
For this, we can change our frameWidth and frameHeight variables, which are now adjusted by the new variables frameBorderX and frameBorderY.
frameNumber = 0
frameBorderX = 20 ; frameBorderY = 20
frameWidth = windowWidth/2-frameBorderX ; frameHeight = windowHeight/2-frameBorderY
Next, we will need four images. First, frameNumber will keep track of which of the four images we are drawing. Next, each of these iterations will have different offsets of frameX and frameY.
frameX = frameBorderX/2+canvasWidth/2*(frameNumber % 2)
frameY = frameBorderY/2+canvasHeight/2*floor(frameNumber/2)
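As a sanity check, the same formulas can be evaluated outside the sketch. Here is a plain-JavaScript sketch of the offsets, using indices 0-3 as the drawing loop later does (the function name is mine, not from the sketch):

```javascript
// Offset of each frame in the 2x2 grid: column from frameNumber % 2,
// row from floor(frameNumber / 2), plus half a border on each side.
function frameOffset(frameNumber, canvasWidth, canvasHeight, frameBorderX, frameBorderY) {
  return {
    frameX: frameBorderX / 2 + (canvasWidth / 2) * (frameNumber % 2),
    frameY: frameBorderY / 2 + (canvasHeight / 2) * Math.floor(frameNumber / 2),
  };
}
```

On an 800x600 canvas with 20-pixel borders, frames 0-3 land at (10,10), (410,10), (10,310) and (410,310).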
Currently, the program draws the same image four times, then stalls.
It does not seem quite able to draw within the borders, but that can be fixed by going back to our brush-drawing script and limiting all x-coordinates of vertices between frameX and frameX+frameWidth, and y-coordinates between frameY and frameY+frameHeight.
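The limiting itself is just a clamp on each coordinate. A plain-JavaScript sketch (the helper name is mine; in p5.js the built-in constrain() does the same per coordinate):

```javascript
// Keep a brush vertex inside the current frame's rectangle.
function clampToFrame(x, y, frameX, frameY, frameWidth, frameHeight) {
  return {
    x: Math.min(Math.max(x, frameX), frameX + frameWidth),
    y: Math.min(Math.max(y, frameY), frameY + frameHeight),
  };
}
```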
Some traits of offspring are inherited
It might seem counterintuitive to tackle this point next, when variation is so obviously what we need! But before we can vary anything, we need to split apart the genetics of each of the four pictures. Importantly, we will need to access the genetics of each picture even after it has been drawn, so we cannot just alter the genes as we go from one frame to another.
First, what kind of information do our pictures have? What are the genes, so to say? Throughout the tutorial, we have added more and more variables that all serve as genes:
- pigmentA, pigmentB, pigmentC and pigmentD
- generalbrushdirection
- CONVinputA[0-19], CONVinputB[0-19], CONVinputC[0-19], CONVfunction[0-19]
frameNumber:
- [0] is the master value from which the four offspring will inherit
- [1] is the first offspring, that is, in the top left
- [2] is the second offspring, that is, in the top right
- [3] is the third offspring, that is, in the bottom left
- [4] is the fourth offspring, that is, in the bottom right
Cue a joke about arrays starting at one - but I think placing the master at position zero makes sense. So now, when the program initialises, it will go:
pigmentA[0] = make_color_hsv(random(255),255-random(random(255)),255-random(random(255)))
pigmentB[0] = make_color_hsv(random(255),255-random(random(255)),255-random(random(255)))
pigmentC[0] = make_color_hsv(random(255),255-random(random(255)),255-random(random(255)))
pigmentD[0] = make_color_hsv(random(255),255-random(random(255)),255-random(random(255)))
generalbrushdirection[0] = random(360)
//CONVinputA, CONVinputB, CONVinputC and CONVfunction are unchanged
And then, it duplicates these variables into the next four pictures:
for(i=1;i<5;i++)
{
pigmentA[i] = pigmentA[0]
pigmentB[i] = pigmentB[0]
pigmentC[i] = pigmentC[0]
pigmentD[i] = pigmentD[0]
generalbrushdirection[i] = generalbrushdirection[0]
generalbrushsize[i] = generalbrushsize[0]
generalbrushfluff[i] = generalbrushfluff[0]
generalbrushlength[i] = generalbrushlength[0]
neuron4X[i] = neuron4X[0]
neuron4Y[i] = neuron4Y[0]
perlin6size[i] = perlin6size[0]
perlin7size[i] = perlin7size[0]
perlin8size[i] = perlin8size[0]
perlin9size[i] = perlin9size[0]
}
for(i=20;i<100;i++) //up to 100 because there's 5*20
{
CONVinputA[i] = CONVinputA[i % 20]
CONVinputB[i] = CONVinputB[i % 20]
CONVinputC[i] = CONVinputC[i % 20]
CONVfunction[i] = CONVfunction[i % 20]
}
Here, I would normally paste in an image showing the result, but none of this will be visible. All it does is to allow us to next mutate each picture individually.
Some traits of offspring are new mutations
Finally, the interesting bit. Now, it might be quite tempting to throw in a lot of mutations, but that is not necessary. With the chaotic processes going on in the neural layers, just a few changes will have a big effect on the outcome. Like the parable of the butterfly and the hurricane, just two mutations might be sufficient here:
CONVinputA[IT*20+floor(random(20))] = floor(random(4))
CONVinputB[IT*20+floor(random(20))] = 5+floor(random(4))
And let's see if this is enough:
If you look for long enough, you will see that all four pictures in fact are different, but it is not an obvious change. To remedy this, we can also introduce mutations into the pigments. Not necessarily completely, but perhaps make a red orange, or a light green dark green.
colorMode(HSB,1)
var newcol = color(random(1),1-sq(random(1)),1-sq(random(1)))
colorMode(RGB,256)
switch (floor(random(4)))
{
case 0: pigmentA[frameNumber] = lerpColor(pigmentA[frameNumber],newcol,random(1)) ; break;
case 1: pigmentB[frameNumber] = lerpColor(pigmentB[frameNumber],newcol,random(1)) ; break;
case 2: pigmentC[frameNumber] = lerpColor(pigmentC[frameNumber],newcol,random(1)) ; break;
case 3: pigmentD[frameNumber] = lerpColor(pigmentD[frameNumber],newcol,random(1)) ; break;
}
This makes for more obvious, though more superficial, changes - but the two kinds of mutation combined should make quite a difference.
The four variations are clearly related, yet different enough that some of the images end up more aesthetically pleasing than others.
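For completeness, the convolution-gene mutation can be isolated into one helper. This is a plain-JavaScript sketch of the idea, not the sketch's actual code: the function name and the injectable rng are mine, and IT from the snippet above is taken to be the picture index.

```javascript
// Mutate one randomly chosen A-input and one B-input of picture `frameNumber`.
// Each picture owns the 20 gene slots [frameNumber*20 .. frameNumber*20+19].
function mutateConvGenes(CONVinputA, CONVinputB, frameNumber, rng = Math.random) {
  const geneA = frameNumber * 20 + Math.floor(rng() * 20);
  const geneB = frameNumber * 20 + Math.floor(rng() * 20);
  CONVinputA[geneA] = Math.floor(rng() * 4);     // rewire input A to one of neurons 0-3
  CONVinputB[geneB] = 5 + Math.floor(rng() * 4); // rewire input B to one of neurons 5-8
  return [geneA, geneB];
}
```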
Which offspring survive to reproduce depends on these traits
Finally, the fourth foundation of evolution - selection. In our case, "surviving to reproduce" means to be made into the new master, from which four new offspring will be made. The traits, as seen above, determine how the picture looks. So to fulfill the above, we will let the user choose one of the four images, which will then ascend to become the master for a new generation.
How to do this is up to you - personally, I will use the mouse cursor, but depending on your program and the options available, a key press or button will do. The important part is that some input is read, and one image is made the new master, in a script that is the exact inverse of the inheritance described above. Then the generation starts over, making four new pictures from the new master.
My personal script looks like this, in the drawing function:
for(i=0;i<4;i++)
{
var nx = frameBorderX/2+canvasWidth/2*(i % 2)
var ny = frameBorderY/2+canvasHeight/2*floor(i/2)
noFill()
if (sel == -1 || sel == i) stroke(128+50*cos(time/10),128+50*cos(time/10),128+50*cos(time/10),30)
else stroke(255,255,255,30)
rect(nx-4,ny-4,frameWidth+8,frameHeight+8)
fill(0)
noStroke()
}
And in the mouse clicked function:
function mouseClicked() {
if (frameNumber == 0 && rendernum == 999999)
{setup_new_generation()}
else if ((abs(canvasHeight/2-mouseY) > frameBorderY/2) && (abs(canvasWidth/2-mouseX) > frameBorderX/2) && (frameNumber == 5))
{
var sel = 0
if (mouseX > canvasWidth/2) sel++
if (mouseY > canvasHeight/2) sel += 2
evolve_new_generation(sel)
}
}
Your own implementation will need to differ depending on the language and environment you use.
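For instance, with keyboard input the mapping can be as simple as this plain-JavaScript sketch (the function name is mine):

```javascript
// Map number keys '1'-'4' to the four pictures; anything else selects nothing.
function selectionFromKey(key) {
  return "1234".indexOf(key); // 0-3, or -1 for a non-selection key
}
```

In p5.js you would call this from keyPressed() with the built-in key variable and, on any result other than -1, hand it to evolve_new_generation().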
Anyway, let us see what happens if I keep selecting the top right picture for a couple of generations:
Now just imagine if I had picked the most aesthetically pleasing picture each time instead of a fixed position. Perhaps I could have evolved some really appealing traits in these pictures.
Evolution and control
Evolution by artificial selection also leads to a strange conundrum: The art generator is now interactive. Above we have made a simple user-program interface, sure, but it goes a bit deeper than that.
Usually, when you press a key in Word or a button in Photoshop, you can predict the result. Not so for this program, which is inherently chaotic. On one hand, this might make the program more interesting - unforeseeable results are more interesting than foreseeable ones. On the other, it introduces a balance of control that is difficult to get right. With no control, there is no incentive to interact. With total control, the program is neutered. With too little control, the user might become exasperated when things do not go as desired. With too much control, the user will never get anywhere out of perfectionism.
We can design the interactive experience by giving it rules. We already have, in a way: There are four options, and the user's decision indirectly affects the next four choices (i.e. the same option will not be presented again). Furthermore, these are the rules I will use:
After the user has made four decisions, the final choice will be rendered and saved in full resolution, after which the program starts over from a clean slate. This means the user gets four decisions to push the piece in the right direction, but never enough to feel too attached or foster feelings of perfectionism.
I will add two more keys to the program. The first is to restart from scratch at any desired time by pressing 'R'. The second is to complete a piece instantly, instead of having to wait for the fourth generation, by holding Shift while choosing the piece.
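Those rules boil down to a tiny piece of state. A plain-JavaScript sketch of the flow (names and structure here are mine, not the sketch's real code):

```javascript
// One user decision: 'r' restarts from scratch, Shift finishes immediately,
// and after four picks the piece is rendered in full and we start over.
function nextState(state, input) {
  if (input.key === "r") return { generation: 0, renderFinal: false };
  const generation = state.generation + 1;
  const renderFinal = input.shiftHeld || generation >= 4;
  return { generation: renderFinal ? 0 : generation, renderFinal };
}
```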
Evolving more values
Before we are done with this part of the tutorial, we should consider adding more "genes" to our evolutionary algorithm. The first, obvious one, is generalbrushdirection. We can give each offspring a one in four chance to set it to a new, random direction.
if (random(4) < 1) generalbrushdirection[frameNumber] = random(2*PI)
But why stop here? In the last article, we designated neuron[15] to control brush size, neuron[16] to control fluffiness, and neuron[17] to control brush stroke length. I suggest that while these neural layers can keep some influence, some of it can be given to genes instead.
if (random(4) < 1) generalbrushsize[frameNumber] = -.25+random(.5)
if (random(4) < 1) generalbrushfluff[frameNumber] = -.25+random(.5)
if (random(4) < 1) generalbrushlength[frameNumber] = -.25+random(.5)
Now, the above is easier said than done. Every gene needs to appear in five places in the code: initialisation, inheritance, mutation, ascension to the new master, and the place where it actually affects the image generation.
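One way to keep those five places from drifting apart is to describe every gene once, in a table, and make inheritance and ascension generic loops over it. A plain-JavaScript sketch of that idea - the names and ranges are illustrative, not the sketch's actual code:

```javascript
// Each gene knows how to initialise and mutate itself; everything else is generic.
const GENES = {
  generalbrushdirection: { init: r => r() * 360, mutate: r => r() * 360 },
  generalbrushsize:      { init: () => 0,        mutate: r => -0.25 + r() * 0.5 },
  perlin6size:           { init: r => 3 + r() * 4, mutate: r => 3 + r() * 4 },
};

function initMaster(pool, rng = Math.random) {
  for (const name in GENES) pool[name] = [GENES[name].init(rng)]; // slot 0 = master
  return pool;
}

function inherit(pool) { // copy the master into offspring slots 1-4
  for (const name in GENES)
    for (let i = 1; i <= 4; i++) pool[name][i] = pool[name][0];
  return pool;
}

function maybeMutate(pool, frameNumber, rng = Math.random) {
  for (const name in GENES)
    if (rng() * 4 < 1) pool[name][frameNumber] = GENES[name].mutate(rng); // 1-in-4 chance
  return pool;
}

function ascend(pool, winner) { // the chosen offspring becomes the new master
  for (const name in GENES) pool[name][0] = pool[name][winner];
  return pool;
}
```

Adding a new gene then means adding one table entry plus the spot where it influences the drawing, instead of five scattered edits.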
So let us add a few more and do all of it at once. For instance, neuron[4] draws two diagonal lines, but that always seemed a bit arbitrary. Why not let the direction and size of the lines be up for mutations?
if (random(4) < 1) neuron4X[frameNumber] = -4+random(4)
if (random(4) < 1) neuron4Y[frameNumber] = -4+random(4)
What about also introducing values controlling the size of the different Perlin noises used in neuron[6] through neuron[9]:
if (random(4) < 1) perlin6size[frameNumber] = 3+random(4)
if (random(4) < 1) perlin7size[frameNumber] = 1+random(2)
if (random(4) < 1) perlin8size[frameNumber] = 2+random(4)
if (random(4) < 1) perlin9size[frameNumber] = 10+random(10)
In this way, we can ensure that even the first ten neurons, which otherwise are the same from composition to composition, will be subject to evolutionary selection.
I will now add all these mutations, as well as their initialisation, inheritance, ascension and handle. It might be a bit of a hassle, but it all should work out.
The Most Important Change
I noticed that most of the output ended up being quite boring. Often only one colour channel was used, and even when I increased the number of mutations, the paintings stayed quite similar. This can happen! The usual cause is a general averaging trend that persists despite the random neural connections. Maybe the values keep decreasing, so that while the input spans 0-1, the output rarely reaches above 0.5. Or maybe the variance itself decreases. Who knows. All I had to do was add this tiny piece of code to the neural loop:
neuron[i+10] = 0.5+0.5*cos(neuron[i+10]*PI)
neuron[i+10] is of course the output, so this function simply remaps the value with a raised cosine. It does not amplify the result: an input of 0.5 still gives 0.5, and the endpoints simply swap (0 maps to 1 and vice versa, an inversion that hardly matters given the random wiring). Around the midpoint, however, the function pushes values apart, making it much less likely that the final layers end up dull and grey.
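A quick plain-JavaScript check of the midpoint behaviour (the function name is mine):

```javascript
// The raised-cosine transform applied to each output neuron.
function sharpen(v) {
  return 0.5 + 0.5 * Math.cos(v * Math.PI);
}
// 0.5 stays at 0.5, while values near it are pushed further from it:
// the slope at the midpoint has magnitude PI/2, roughly 1.57.
```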
These kinds of changes might be necessary, and incredibly efficient when applied at the right time.
Oh, I also made a thousand alterations to maxradius, difradius and so on. Rendernum for instance controls maxradius, so the first brush strokes are bigger than the last, final touches. I am finding out that making a tutorial is fine, but it cannot really be comprehensive if you are working on something so dependent on the artistic process.
And that's it for today
To see the program in action, as well as the fully assembled code, go to https://www.openprocessing.org/sketch/581716.
The next, and possibly final part is online now, too, here!