Convolutional Art - Tutorial 3

This is a continuation of part 1 and part 2.

Today we will finish our rendering of brush-strokes to create some much more unique pieces.



But of course, things were already looking quite nice at the end of the last part, so from here on out, this is just the icing on the cake.

The finished code and generator from this part can be found here: https://www.openprocessing.org/sketch/581704


Randomising positions

Where we left off, we had converted pixels or circles into fluffy clouds. The problem is that all points are still rendered in the same top-left to bottom-right order, which makes for a pretty static image.

It might seem easy to randomise positions - just pick random X and Y coordinates, so why don't we?
And here I was going to write a whole long paragraph on the problems with this approach, but no, I don't think that's necessary. It actually works fine.

for(rendernum=0;rendernum<frameWidth*frameHeight/64*0.75;rendernum++)
{
  ix = random(frameWidth)
  iy = random(frameHeight)
  ...
}


Here, the number of points rendered has been decreased to 75%, and in general, that is quite sufficient. You might notice, though, that there are black spots. Had the number been reduced to 50%, there would be more black spots. Had it been increased back to 100%, there might be slightly fewer, but even at 150%, we would not be rid of places where, completely randomly, nothing has been rendered.
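To see why just rendering more points doesn't solve this, we can estimate the chance that a given 8x8 cell never receives a point. With N uniformly random points over C cells, a cell stays empty with probability roughly e^(-N/C). A quick stand-alone check in plain JavaScript (not part of the sketch itself; the canvas size is just an example):

```javascript
// Expected fraction of 8x8 cells that receive no point at all,
// when density * C points are scattered uniformly over C cells.
// For large C this approaches e^(-density).
function emptyFraction(density, cells) {
  return Math.pow(1 - 1 / cells, density * cells);
}

var cells = (800 / 8) * (600 / 8); // e.g. an 800x600 canvas in 8x8 cells
console.log(emptyFraction(0.75, cells).toFixed(2)); // roughly 0.47
console.log(emptyFraction(1.5, cells).toFixed(2));  // roughly 0.22
```

So even at double density, over a fifth of the cells would, on average, stay black - brute force alone will not get rid of the spots.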


Instead of forcing the program to render much more than is necessary, we will render things in two layers. The first layer will use much bigger fluffy clouds to fill in the background of the picture, and afterwards, we will go back and render the more precise circles to occlude the background. This will not only eliminate the black spots, but allow us to do some more fancy things with brush strokes and detail density.

For now, we will extend the above:

for(rendernum=0;rendernum<frameWidth*frameHeight/64*0.75;rendernum++)
{
  if (rendernum < 640)
  {
    ix = ((rendernum % 32)+.5)*frameWidth/32 
    iy = (floor(rendernum / 32)+.5)*frameHeight/20
    nsize = 24
  }
  else
  {
    ix = random(frameWidth)
    iy = random(frameHeight)
    nsize = 8
  }

  ...
  maxradius = nsize
  ...
}


Now we use this new variable, rendernum, to separate the two layers. It keeps track of how many points have been rendered. For the first 640, we render the background in an orderly fashion with big fluffy clouds. This will look like:



Which is not great, but it does fill out the background. After the first 640 points, it switches to a smaller size of fluffy clouds, distributed completely randomly, which makes the final product look like:


So, now we have spent quite a long time solving the pretty small problem of black spots. But I promise that this will hold several benefits for us as we continue.
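One subtlety in the grid maths: in p5.js, rendernum / 32 is a fractional number, so the row index needs flooring. With that in place, the first 640 values of rendernum land exactly on a 32-column by 20-row grid of cell centres, which is easy to verify stand-alone:

```javascript
// Stand-alone check of the background grid: the first 640 values of
// rendernum should map to 640 distinct centres of a 32 x 20 grid.
var frameWidth = 640, frameHeight = 480; // example canvas size
var seen = {};
for (var rendernum = 0; rendernum < 640; rendernum++) {
  var ix = ((rendernum % 32) + 0.5) * frameWidth / 32;
  var iy = (Math.floor(rendernum / 32) + 0.5) * frameHeight / 20;
  seen[ix + "," + iy] = true;
}
console.log(Object.keys(seen).length); // 640 distinct positions
```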



Varying brush-strokes

Ignoring the background, all fluffy clouds still have the same size. This can be changed in two ways. First, it might simply be set completely randomly at each new position. I would argue, however, that we have a slightly more orderly sort of randomness at our disposal, which will do much better.

if (rendernum < 640)
  maxradius = 75
else
  maxradius = 10*(0.8+0.5*neuron[15])


Here, maxradius, which controls the size of the fluffy clouds, is now scaled by a value from one of the neural layers. I picked one which is on the slightly simpler end of the spectrum, but of course, whether this layer is flat or highly complex, only randomness can tell.


Here we see that some areas of the picture are rendered much more in-depth than others. Interestingly, it seems whatever is on the neuron[15] layer also has an effect on the end colours, since the orange or dark parts generally seem to have smaller dots.

It should be mentioned that maxradius is only randomised on the front layer, not for the 640 points in the background.
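For reference, the formula above only varies the foreground maxradius between 8 and 13 - a gentle variation, which is rather the point. A trivial stand-alone check (foregroundRadius is just a hypothetical name for the expression in the sketch):

```javascript
// Range of foreground cloud sizes as neuron[15] goes from 0 to 1.
function foregroundRadius(n15) {
  return 10 * (0.8 + 0.5 * n15);
}

console.log(foregroundRadius(0)); // 8
console.log(foregroundRadius(1)); // 13
```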

Until now, every time the "fluffy cloud" of the brush was drawn, the result was more or less the same, only scaled up and down. But there might be merit to working with different levels of fluffiness. At one extreme is a rendering where all points of the cloud have the same distance from the centre - that is, a perfect circle. The perfect circle, with all triangles overlapping, would also be a very solid shape. At the other extreme, the bigger the random area the fluff of the cloud might reach, the less solid and more chaotic the shape becomes. This will be done by altering two lines of code - first, after maxradius, the difference in radius, or difradius, is introduced:

difradius = .1+.4*neuron[16] 

This is then used to determine the length from the centre for the triangles in the fluffy cloud:

length = maxradius*(.5-difradius+random(6*difradius)) 

This makes the difference between a vague, blurry image, and one with much more noticeable points. Or rather, now neuron[16] decides which part of the picture should be hi- and lo-definition. Or pointillist or expressive. Who knows! Anything can happen!
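To make the effect concrete: since random(x) in p5.js returns a value in [0, x), the triangle lengths fall between (.5 - difradius) and (.5 + 5*difradius) times maxradius. A stand-alone check of the two extremes of difradius (minLength and maxLength are hypothetical helper names, not part of the sketch):

```javascript
// Bounds of the triangle length, as a fraction of maxradius, for a
// given difradius: length = maxradius*(.5-difradius+random(6*difradius))
function minLength(difradius) { return 0.5 - difradius; }
function maxLength(difradius) { return 0.5 - difradius + 6 * difradius; } // = 0.5 + 5*difradius

console.log(minLength(0.1).toFixed(2), maxLength(0.1).toFixed(2)); // 0.40 1.00 - tight, solid cloud
console.log(minLength(0.5).toFixed(2), maxLength(0.5).toFixed(2)); // 0.00 3.00 - chaotic fluff
```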



Drawing brush-strokes

But it remains obvious that the paintings being rendered are a collection of points. Instead, we might want to emulate brush-strokes, and for that, we need to draw not only one point when we find our colour, but a row of points that together form a smooth stroke across the canvas.

For this, we simply find the colour like before, then repeatedly draw brush(), move a bit, draw brush(), move a bit, and so on.

for(var nb=0;nb<1+(2/(1.1-neuron[17]))*sq(random(1));nb++)
{
  brush(ix,iy,maxradius,difradius)
  dir = generalbrushdirection+noise(xcoord*15,ycoord*15-25)*2
  ix += maxradius*sin(dir)
  iy += maxradius*cos(dir)
}

There are two non-trivial parts to the code. First is the number of points to render, that is, how long the brush-stroke should be. The length grows with neuron[17]: when neuron[17] is 1, anywhere from one to around 21 points might be rendered; when neuron[17] is 0, the stroke will be at most two or three points long.

These values were decided on after a lot of iteration, until things started to look right. For instance, skewing the random distribution by taking sq(random(1)) became necessary to make the result seem like brush-strokes instead of long hairs.
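Since sq(random(1)) has an expected value of 1/3, the average loop bound can be worked out directly. A small stand-alone sketch of how the mean stroke length scales with neuron[17] (meanStrokeBound is a hypothetical name):

```javascript
// Average value of the loop bound 1 + (2/(1.1-n)) * sq(random(1)).
// E[sq(random(1))] = 1/3, so the mean bound is 1 + (2/(1.1-n))/3.
function meanStrokeBound(n17) {
  return 1 + (2 / (1.1 - n17)) / 3;
}

console.log(meanStrokeBound(0).toFixed(2)); // short, dabbing strokes
console.log(meanStrokeBound(1).toFixed(2)); // much longer strokes
```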

The other non-trivial part is how to go from one point to the next. Here, ix and iy are moved one maxradius between each point, in direction dir. Note that we do not go back through all the neural calculations or anything - the same colour, maxradius and difradius are used throughout the whole brush-stroke.

But how, then, do we find the direction to move in? Here, I have elected to combine a general brush stroke direction, randomly set at the beginning of rendering for the whole image, together with a noise-value taken from a low-frequency Perlin noise field. This makes it so that the brush-strokes are not just straight lines, but will move in wavy patterns across the canvas:


Here we can see that the variability in the neuron[17] layer makes some parts mottled, while others use much longer, more well-defined brush strokes - for instance just right of the centre.
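The walk itself can be sketched stand-alone. Here nextPoint is a hypothetical helper, and a simple smooth function stands in for p5's Perlin noise() - the point is just that each step moves exactly one maxradius in a direction that drifts smoothly:

```javascript
// Stand-in for p5's noise(): a smooth function returning values in [0, 1].
// This is NOT real Perlin noise, just something smooth for the demo.
function noiseField(x, y) {
  return 0.5 + 0.5 * Math.sin(x * 0.5 + y * 0.3);
}

// One step of the brush-stroke walk: move maxradius in direction dir,
// where dir = a general direction + a smoothly varying noise term.
function nextPoint(ix, iy, maxradius, generalDir) {
  var dir = generalDir + noiseField(ix / 100, iy / 100) * 2;
  return [ix + maxradius * Math.sin(dir), iy + maxradius * Math.cos(dir)];
}

// Walk a short stroke and confirm each step has length maxradius:
var p = [100, 100], maxradius = 10;
for (var nb = 0; nb < 5; nb++) {
  var q = nextPoint(p[0], p[1], maxradius, 0.7);
  var step = Math.hypot(q[0] - p[0], q[1] - p[1]);
  console.log(step.toFixed(2)); // 10.00 every time
  p = q;
}
```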

Another thing! Though it generally works to use just one colour for the whole row of points, this often contributes to a very flat-looking colour. To avoid this, it might be better to slightly offset the colour of each point in the row that is being rendered. Here we can use colred, colgreen and colblue with random additions and subtractions to create slight alterations:

fill(color(colred-8+random(16),colgreen-8+random(16),colblue-8+random(16),65))

One can make an argument that these colour alterations should be used especially where neuron[17] makes long brush-strokes - or, for that matter, the opposite! I'm taking the middle road of doing neither, but you should play around with it, thinking about whether you prefer pointillist or aquarelle-type images.
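If you do want to play with it, the jitter is easy to factor out into a small helper. jitterChannel is a hypothetical name, Math.random stands in for p5's random, and the clamp keeps the result a valid channel value (the one-liner in the sketch does not bother with that):

```javascript
// Offset a colour channel by a random amount in [-spread, +spread),
// clamped to the valid 0-255 range.
function jitterChannel(c, spread) {
  var v = c - spread + Math.random() * 2 * spread;
  return Math.min(255, Math.max(0, v));
}

// e.g. fill(color(jitterChannel(colred, 8), jitterChannel(colgreen, 8),
//                 jitterChannel(colblue, 8), 65))
console.log(jitterChannel(200, 8)); // somewhere in [192, 208)
```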

I think this is enough for today. Next time, we will look at letting the pictures evolve and allowing the user to interact with the creation.