Convolutional Art - Tutorial 1

I have showcased some of the artwork of my art generator, and tried to describe its development, but I feel I have only scratched the surface. This series goes a lot more in-depth on how to create this sort of program yourself, and for this purpose, I will code a new art generator completely from scratch, documenting every piece of code. It will take some time, but at the end of this session, we should have something like this:



The program will be made using OpenProcessing - the result of each part of the tutorial has its own sketch. Code and results: https://www.openprocessing.org/sketch/581541



The Shell

The core of the program is the following piece of code:

function draw() {
  // loop over every pixel of the frame and plot a single point there
  for(ix=0;ix<frameWidth;ix++)
  for(iy=0;iy<frameHeight;iy++)
  {
    stroke(random(16777216)) // a random number standing in for one of the 2^24 possible colours
    point(ix,iy)
  }
}

For now, we will simply loop through every pixel of our frame and draw a dot with a random colour there. What we will be dealing with for the rest of this part of the series is making the random colour less random.
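The snippet assumes that a canvas exists and that frameWidth and frameHeight have been defined somewhere - they are not built-in names. A minimal sketch of how that could look, assuming the sketch runs in p5.js mode on OpenProcessing (the 400x400 size and the noLoop() call are my own choices):

var frameWidth = 400   // not a built-in - just an ordinary variable used by the render loop
var frameHeight = 400

function setup() {
  createCanvas(frameWidth, frameHeight) // canvas matching the loop bounds
  noLoop()                              // one static render is enough for now
}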


You should get a picture like this. It is not really what we are looking for, but at least the core has been set up for the actual rendering procedures.



The Input Layer

Right now, the colour of every pixel is completely random, which just won't do. Our goal is for every pixel to have a colour that depends on its location on the canvas. We are going to move from very simple connections between position and colour to increasingly complicated ones. For this purpose, we need an array. The first values of this array will be filled with inputs.

This array will be called neuron[], in analogy with the neurons of other neural-network methods.


neuron = []

for(ix=0;ix<frameWidth;ix++)
for(iy=0;iy<frameHeight;iy++)
{
  xcoord = ix/frameWidth   // 0 at the left edge, 1 at the right edge
  ycoord = iy/frameHeight  // 0 at the top edge, 1 at the bottom edge
  neuron[0] = xcoord
  neuron[1] = ycoord
  stroke(color(255*neuron[0])) // grey-scale: 0 becomes black, 1 becomes white
  point(ix,iy)
}


Within our rendering loop, ix and iy are translated into an x coordinate between 0 (left) and 1 (right) and a y coordinate between 0 (top) and 1 (bottom). The first two positions of the array are filled with the x and y coordinates, respectively. Next, instead of a random colour, a grey-scale colour is made from a value in the neuron array: if the value is 0, the colour will be black; if the value is 1, the colour will be white.

For now, each render will use just one of the numbers in the array - here, index 0 holds the x coordinate of the pixel, creating a horizontal gradient.



The render of neuron[0]

The render of neuron[1]




More inputs

Two positional gradients are a good start, but far from sufficient. Let us try with some circles:


neuron[2] = sqrt((sq(xcoord)+sq(ycoord))/2)


Neuron[2] uses both the x and y position, squaring them, adding them together, and taking the square root of the sum, which gives the distance to the top left corner (0,0) and so creates circular contours around it. For now, we want our values to stay within the range of 0 to 1, so the sum is halved before taking the root, so that (0,0) gives 0 and (1,1) gives 1.



neuron[3] = sqrt((sq(xcoord-.5)+sq(ycoord-.5))*2)

Neuron[3] uses the same approach, but it subtracts one half from each of the coordinates, which centres the circle in the middle of the frame. Since the circle is no longer off in a corner, we no longer need to divide the sum by 2 - in fact, it is multiplied by 2 instead, so the corners still land at 1.



neuron[4] = abs(sin(3*xcoord+5*ycoord))

This next neuron also just takes in the x and y positions, adding them together as a weighted sum, and then takes the sine of that sum. I threw the arbitrary integer weights 3 and 5 in there for now; in the future, they can be randomized. Finally, to keep the value between 0 and 1, the absolute value of the sine is taken.



neuron[5] = min(1,max(0,abs(ycoord-.5)*10-2))

This last neuron takes the vertical distance from the pixel to the centre of the frame, abs(ycoord-0.5), transforms it linearly (*10-2), then clamps the result between 0 and 1. For example, a pixel at ycoord = 0.5 gives max(0, -2) = 0, while a pixel at the very top or bottom gives min(1, 3) = 1, so the middle band of the frame stays black and the edges fade to white. This will make it easier for the program to make landscape compositions - if things go as planned.

You might notice that the last couple of neurons have seemed quite arbitrary - and they are! There is a lot of freedom here. I think the only necessary inputs are x, y and the distance from the centre, but some extra geometric inputs are sure to make compositions easier.



Perlin Noise

Whereas the first 6 neurons have been calculated from straightforward geometric functions, we can use Perlin noise to fill out our remaining inputs. Perlin noise is just as deterministic as the previous functions - it takes in two coordinates and returns a numerical value - but it has less predictable results and often forms shapes on its own.

I will be using the built-in noise function of OpenProcessing. If you do not have access to Perlin noise, I have a hacky implementation at the end of this article.



neuron[6] = noise(xcoord*5 ,ycoord*5)


Currently, our coordinates stay between 0 and 1, so I will multiply them by 5 before using them in this noise function.

Depending on your implementation of Perlin noise, you may need to scale the coordinates, choose the Perlin noise octaves and scale the output differently. It is not necessary for you to get a result similar to mine, the only important part is to have some sort of mid-frequency Perlin noise (and to have the output be between 0 and 1).

This gives us some wispy structures at a scale where we can see both islands and continents, so to speak.
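As a point of reference, in p5.js the octave count and falloff of the noise can be adjusted with noiseDetail(), and the output can be stretched if it hugs the middle of its range. This is only a sketch of those adjustments - the specific numbers are assumptions, not values the rest of the tutorial depends on:

noiseDetail(4, 0.5)                  // 4 octaves, each with half the amplitude of the previous one
n = noise(xcoord*5, ycoord*5)        // p5.js noise already returns values between 0 and 1
n = constrain((n - 0.2)/0.6, 0, 1)   // optional: stretch the typical output range towards 0 and 1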



neuron[7] = noise(xcoord*2+10 ,ycoord*2)


For the next layer of Perlin noise, I am going for larger-scale noise, so I only multiply the coordinates by 2. This is to give the paintings some large-scale structure for the composition, whereas the previous layer was more about adding variety to the colours.

You may also notice that I add 10 to the x coordinate. This is simply to make sure we do not sample the same part of the Perlin noise for the different layers.



neuron[8] = 1/(1+25*abs(0.5-noise(xcoord*4 ,ycoord*4+10)))


For this neuron we will create what I call a canal map. It returns a high output where the Perlin noise is close to its midpoint of 0.5, i.e. along the boundaries between the "positive" and "negative" areas of the noise. This is accomplished by dividing 1 by the (scaled) distance of the noise from 0.5: where the noise is exactly 0.5 the neuron is 1/(1+0) = 1, while where the noise is 0.1 or 0.9 it is 1/(1+25*0.4) = 1/11, roughly 0.09. The numbers are pretty arbitrary and can be tuned to taste.



neuron[9] = noise(xcoord*15+10 ,ycoord*15+10)

And finally, I wanted some more noisy noise, so here the input coordinates are multiplied by 15 instead.

So, here we are, with ten input values. None of them is particularly striking on its own. So now we will start doing convolutions - or, in plainer language, combining them.



Combining Values

The convolutions are the soul of the program and what will set the different paintings apart. Therefore, I feel we should also lay the foundation for the DNA of the current painting.

CONVinputA = []
CONVinputB = []
CONVinputC = []
CONVfunction = []

for(i=0;i<20;i++)
{
  CONVinputA[i] = floor(random(5))    // a value between 0 and 4
  CONVinputB[i] = floor(5+random(5))  // a value between 5 and 9
  CONVinputC[i] = floor(random(10))   // a value between 0 and 9
  CONVfunction[i] = floor(random(3))  // which of the three functions to use
}

Here I create 4 arrays and initialise them with random values. The three inputs A, B and C will be numbers between 0 and 9; since A is drawn from 0-4 and B from 5-9, those two are guaranteed to be different. These numbers will specify which of the neurons - the inputs we worked on above - will be combined.

Of course, values cannot just be combined - some function will have to be applied to them. These functions can be simple - adding, subtracting, multiplying - or more complex. For now, we will implement just three functions, and CONVfunction will hold the index of one of them.


Now, inside the rendering loop, after setting the first 10 neurons (0 to 9), I will add a new loop:

for(i=0;i<20;i++)
{
  inputA = neuron[i+CONVinputA[i]]
  inputB = neuron[i+CONVinputB[i]]
  inputC = neuron[i+CONVinputC[i]]

  switch(CONVfunction[i])
  {
    case 0: //adding values A and B and taking the average
      neuron[i+10] = (inputA+inputB)/2 ; break;

    case 1: //multiplying values A and B and taking the square root
      neuron[i+10] = sqrt(inputA*inputB) ; break;

    case 2: //using value C to determine if A or B should be outputted
      neuron[i+10] = inputC > .5 ? inputA : inputB ; break;
  }
}

First, for each of the twenty convolutions, we initialise the inputs A, B and C. The inputs are neurons set beforehand, indexed by the current convolution number plus the corresponding CONVinputA/B/C offset. For the first convolution (i = 0), the inputs can only come from the ten original neurons, while later convolutions increasingly draw from the outputs of earlier convolutions - for i = 15, for instance, the inputs come from neuron[15] through neuron[24]. Since convolution i writes to neuron[i+10] but only ever reads indices up to i+9, no convolution reads a value that has not been computed yet.

Then, we put CONVfunction inside a switch statement. Currently there are three functions: if CONVfunction is 0, the output is the average of inputs A and B; if CONVfunction is 1, the output is the square root of the product of inputs A and B; and if CONVfunction is 2, input C is compared with 0.5 to determine whether the output should be input A or input B.

Finally, the output is put into a new neuron. Since we start with i=0 and we already have filled the first ten positions in the neuron-array, we jump ten positions forwards. Thus, our twenty convolutions will fill the neurons from 10 to 29.
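To produce the renders below, I simply draw one chosen neuron in grey-scale, the same way neuron[0] and neuron[1] were drawn earlier. A small debug snippet along these lines will do - the showNeuron variable is just a made-up name for illustration:

showNeuron = 14                        // which neuron to inspect, 0 to 29

// at the end of the ix/iy loop, after the convolutions have been computed:
stroke(color(255*neuron[showNeuron]))  // 0 = black, 1 = white
point(ix,iy)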

Let us take a look at the results:
neuron[10]

neuron[11] 
neuron[12]
you might not be able to tell on the white background, but this has been letterboxed by neuron[5]

neuron[13]

neuron[14]

neuron[15]

neuron[16]

neuron[17]

neuron[18]

neuron[19]
Who could have seen this coming? Finally, the horizon-lines and diagonals pay off.

neuron[20]

neuron[21]

neuron[22]

neuron[23]

neuron[24]

neuron[25]
now this is starting to look like something

neuron[26]

neuron[27]

neuron[28]
and then all our hard work has been dropped :(

neuron[29]

These were our 20 convolutions. I can promise that every time you input new random values in the CONVinputA/B/C and CONVfunction, vastly different results will emerge. I hope all the hard work will be worth it, but I feel like I can see this adding up to something.

There are two problems left - everything is still grey-scale, and though the layers are interesting when seen next to each other, on their own they are quite simple.



Picking Colours

Luckily, adding colours will allow us to circumvent both problems: by using colours, we can look at several different layers at once.

Here, I will outline two approaches. The first is simple and what I term digital. Colours on a screen are represented by three values, Red, Green and Blue. This means we can use the last three convolutions to create colours:

stroke(color(255*neuron[27],255*neuron[28],255*neuron[29]))






Trippy!

Alternatively, we can use the HSV method of creating colours instead (a quick sketch of that variant follows below), but this does not actually change very much. Instead, let us see what happens when we replace our digital colouring with a more traditional approach.
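For completeness, a minimal sketch of that HSV variant, assuming the same last three convolutions are reused and noting that p5.js calls this colour mode HSB:

colorMode(HSB, 1)                                   // hue, saturation, brightness, all in 0..1
stroke(color(neuron[27], neuron[28], neuron[29]))
point(ix,iy)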



Picking pigments

Instead of creating colours directly from the convolutional values, we can use a pre-set mix of colours. Some generative artists swear by using completely pre-set colours, but here I will just generate four random colours before we begin the convolutions, calling them pigment A-D.

colorMode(HSB,1)
pigmentA = color(random(1),1-sq(random(1)),1-sq(random(1)))
pigmentB = color(random(1),1-sq(random(1)),1-sq(random(1)))
pigmentC = color(random(1),1-sq(random(1)),1-sq(random(1)))
pigmentD = color(random(1),1-sq(random(1)),1-sq(random(1)))

I am using 1-sq(random(1)) for the saturation and brightness to skew them towards 1, making completely grey or completely black colours less likely, since both occlude the hue.

Now, I will use the convolutions to merge these four pigments together:

colorMode(RGB,1)
colAB = lerpColor(pigmentA,pigmentB,neuron[27])
colCD = lerpColor(pigmentC,pigmentD,neuron[28])
colABCD = lerpColor(colAB,colCD,neuron[29])
stroke(colABCD)

First, we create two colours. colAB is a mix between pigments A and B, with the "amounts" of each determined by neuron[27]. colCD does the same with pigments C and D and neuron[28]. Finally, these two colours are mixed together into colABCD, weighted by neuron[29]. colABCD thus holds a mix of all four pigments, controlled by three of our convolutional layers.

colorMode(RGB,1) has to be called for mixing colours to work properly.





Now, I'm not going to pretend these are pieces of art. But they are the first step towards something more. What we see is undeniable variability, and some of the pieces even come close to what can be called "composition". 

Also, some of our current problems are clear, if not here, then in the grey-scale renders of the individual neurons. Quite a lot of the layers end up with big, black areas, and the influence of the Perlin noise is a bit too obvious. So we have something to work on, and next time we will do just that, as well as perhaps moving from pixel-rendering to something more akin to brush-strokes.



Scaling up the output

All these pictures are way too small to see! To fix this issue, we can do the following:

1) Change the ix/iy loop to the following:

for(ix=0;ix<frameWidth;ix+=8)
for(iy=0;iy<frameHeight;iy+=8)
{
  fill(colABCD)
  ellipse(ix,iy,11,11)
}

2) Make the frameWidth and frameHeight much bigger.
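As a concrete example of step 2 - the exact numbers are my own pick, and nothing else in the sketch depends on them:

frameWidth = 800    // a larger output - pick whatever your screen allows
frameHeight = 800
// ...and in setup(), the canvas has to grow with them:
createCanvas(frameWidth, frameHeight)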

And that's that. Again, to see this program, go to https://www.openprocessing.org/sketch/581541, and the next part of the tutorial can be found here.



A Simple Implementation of Perlin Noise in Game Maker Studio 2

This implementation is probably not the most efficient out there, but for my purposes, it works well enough.

In the create event of the object using noise, an array of prime numbers and random values is set up:

PNL = array_create(100)
PNL[0]=2;  PNL[1]=3;  PNL[2]=5;  PNL[3]=7;  PNL[4]=11;  PNL[5]=13;  PNL[6]=17;
PNL[7]=19; PNL[8]=23; PNL[9]=29; PNL[10]=31; PNL[11]=37; PNL[12]=41; PNL[13]=43;
PNL[14]=47; PNL[15]=53; PNL[16]=59; PNL[17]=61; PNL[18]=67; PNL[19]=71; PNL[20]=73
RZ = 22733 
RNL = array_create(RZ+10) ; for(i=0;i<RZ+10;i++) RNL[i] = -1+random(2)

PNL holds a list of prime numbers. RZ holds an arbitrarily chosen large value. RNL holds a list of random numbers between -1 and 1.

Now, a script named noise() is made with the following code:

var i, s = 0, dx, dy, np, nx = abs(argument[0]), ny = abs(argument[1]), fx, fy, fx2;

for(i=2;i<argument_count;i++)
{
    // each extra argument picks a prime from PNL, giving one octave of noise at that scale
    np = PNL[argument[i]] ; fx = (nx div np) << 10 ; fy = ny div np ; fx2 = fx+(1 << 10)
    dx = (nx/np) mod 1 ; dy = (ny/np) mod 1
    // bilinear interpolation between four pseudo-random corner values, weighted by the octave scale
    s += (RNL[(((fx2)+(fy+1))) % RZ]*(dx*dy)+
          RNL[(((fx )+(fy+1))) % RZ]*((1-dx)*dy)+
          RNL[(((fx2)+(fy  ))) % RZ]*(dx*(1-dy))+
          RNL[(((fx )+(fy  ))) % RZ]*((1-dx)*(1-dy)))*np
}
return(s)

Here it loops through however many octaves you enter as arguments into the function, and calculates a bilinear interpolation between values from the random number list, adding them together and outputting a single value.