Friday, 29 August 2014

A simple trick for fast high quality bokeh on lights in Cello Fortress

Bokeh is an effect that pops up in a lot of new games. It is one of those effects that makes a game instantly feel a lot more "next-gen". It is also an effect that usually eats up a lot of performance. For Cello Fortress I came up with a simplified version of bokeh that looks really high quality, is very fast to render and even very easy to implement. My implementation also has some severe limitations, but I think it can work really well for many games.

Bokeh is a part of focal blur, which is also known as depth of field blur or DoF. DoF is the effect that only objects at a certain distance are sharp, while everything closer and further away is blurred. This is an effect that every lens has, and the larger the lens is, the stronger the blur is. Focal blur has been done in many games for quite a while now, but with the normal DoF rendering techniques the bokeh effect is usually lost. Bokeh means that extremely bright spots grow with the blur and appear as clearly recognisable circles (or hexagons or other shapes, depending on the shape of the lens).


Image from HDWallSource.com.

The problem with rendering bokeh in games is that a naive implementation requires high dynamic range (HDR) rendering and an extremely large number of blur samples. HDR has become quite common in games by now, but taking enough samples to get good sharp bokeh is impractical. For bokeh as large as in the image above, you would probably need many hundreds of samples to get it to look smooth. I actually tried that exact thing in Proun two years ago, as a fun little experiment.

Proun already had really high quality DoF (as I described in this blogpost). I added bokeh by simply making some pixels extremely bright. This is a quick and dirty trick that was needed because Proun only has limited fake HDR, but the effect looks pretty convincing, as you can see in the left image below. However, if the blur is increased the bokeh becomes extremely noisy, as you can see on the right. It looks noisy despite already using an insane 128 samples per pixel! You can find more images of this approach in this blogpost.



This makes this naive approach unusable if strong blur is wanted. We need something smarter. The most common approach in current games seems to be to render actual polygons with a bokeh circle texture on them. To do so we need to find all the bright pixels in the image and then generate a quad for each one.

According to this article The Witcher 2 was ahead of its time by having bokeh all the way back in 2011. The Witcher 2 did this by generating a quad for every pixel and then discarding the ones that are not blurred enough. That's a whole lot of sprites to render! Needless to say, this only worked in real-time on the very fastest PC videocards.

Most games that have bokeh these days seem to use a smarter approach, using newer shader models: first they look for all the pixels that are so bright that they would get a clear bokeh circle, and then they generate quads for those pixels. An example of this approach can be found here. Unlike The Witcher 2's technique this creates only the bokeh itself, not the general focal blur, so with this technique the depth of field blur needs to be done separately.

Even with this clever technique the performance hit is still heavy. It requires analysing all the pixels on the screen to find the ones that are much brighter than the pixels next to them. It also produces temporal artefacts: if the source of the bokeh is really small it will flicker because it is smaller than a pixel. Normally this is not that much of a problem because it is only a pixel, but if a big bokeh sprite flickers on and off this becomes much more noticeable. These temporal artefacts might already show with slight camera movements. I don't know what technique Star Citizen uses, but the kind of flickering this would result in can clearly be seen in the bright spots in the background in this video.

Now that we roughly know how bokeh is usually rendered, let's look at that simple and fast trick that I use in Cello Fortress. My idea comes from the observation that the most pronounced bokeh usually comes from actual lights, not just random bright pixels. If you are looking directly at a light, but the light is in the blurred background, then it will generate a very clear bokeh circle. A game engine generally already knows where all the lights are, so we can skip searching for bright pixels entirely and directly create one screen-space quad for every light. To do so I first render the scene, then apply normal depth of field blur, and finally render the bokeh sprites on top of that.
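
In simplified C++ this could look something like the sketch below. The Light struct, the Camera helpers, bokehTexture and drawScreenSpaceQuad are placeholders for whatever an engine provides, not actual Cello Fortress code:

#include <vector>

struct Light
{
 Vector3D worldPosition;
 Colour colour;
};

//Render one textured quad per light, on top of the already blurred scene.
void renderBokehSprites(const std::vector<Light>& lights, const Camera& camera)
{
 for (const Light& light: lights)
 {
  //Skip lights that are not in view.
  if (!camera.isInView(light.worldPosition))
   continue;

  //Project the light from world space to screen space.
  Vector2D screenPosition = camera.projectToScreen(light.worldPosition);

  //Size and opacity depend on how blurred the light should be (see below).
  float size = getBokehSize(camera, light.worldPosition);
  float opacity = getBokehOpacity(size);

  drawScreenSpaceQuad(screenPosition, size, bokehTexture, light.colour, opacity);
 }
}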

Not only does this method skip the expensive step of finding the bright pixels that need bokeh, it also fixes all the issues with temporal artefacts. We always know exactly where the lights are so no matter how small they are, the bokeh will never flicker when the camera moves. It just moves along with the light correctly.



To get a good-looking effect I scale the bokeh with how blurred the light should be. This can simply be calculated from the focal settings of the camera and the distance from the light to the camera. I also fade out the bokeh sprite a bit as it gets larger, since the larger the blur, the less bright the bokeh circle should be (unless the light is infinitely bright). Here is a video that shows the bokeh in action in Cello Fortress. The bokeh is mostly visible at the bottom of the screen.
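
A minimal sketch of how the size and fade could be computed. The constants are made up and the blur formula is a simplified stand-in for a real circle-of-confusion calculation, not the exact maths used in Cello Fortress:

#include <algorithm>
#include <cmath>

//How blurred is a point at this distance? 0 when in focus, growing outside the focal range.
float computeBlurAmount(float distanceToCamera, float focusDistance, float focusRange)
{
 return std::max(0.0f, std::fabs(distanceToCamera - focusDistance) - focusRange);
}

void getBokehSpriteParams(float blurAmount, float& spriteSizeInPixels, float& opacity)
{
 const float baseSize = 8.0f;     //sprite size when the light is barely blurred
 const float sizePerBlur = 40.0f; //how quickly the circle grows with blur

 spriteSizeInPixels = baseSize + blurAmount * sizePerBlur;

 //A larger circle spreads the same amount of light over more pixels, so fade it out.
 opacity = std::min(1.0f, (baseSize * baseSize) / (spriteSizeInPixels * spriteSizeInPixels));
}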



An important part of bokeh is the actual shape of the bokeh effect. This shape is created by the shape of the lens. I often like hexagonal bokeh best, but I recently discovered that in photography this is generally considered ugly. When reading reviews of a new camera lens I wanted to buy I learned that the more expensive lenses have circular bokeh while cheaper lenses have hexagons. Still, in the end the only thing that matters is what looks good aesthetically in your game. Since most bokeh rendering techniques use those textured sprites the bokeh shape can be modified really simply by using a different texture.



Note that this all does not look as good as it can yet because I have so far spent too little time on the actual visual design of Cello Fortress. I mostly focussed on the gameplay and some cool shaders. Once I start working on proper graphics I should also tweak the brightness, size and colour of the bokeh to make it look better. I should probably also try adding a bit of chromatic aberration to the bokeh texture then.

My bokeh technique does not solve occlusion at the moment. If a light disappears behind an object then it should not still get a bokeh sprite. I haven't solved this yet because it does not occur in the current version of Cello Fortress. However, several solutions can be implemented easily. For example, I already have a depth texture for the depth of field blur, so I can do a look-up in that at the screen position of the light to see whether there is an object in front of it.
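
Conceptually the check could look like this; in practice it would likely live in the bokeh sprite's shader rather than on the CPU, and the types below are placeholders:

//Returns false if something in the scene is in front of the light.
bool isLightVisible(const Light& light, const Camera& camera, const DepthTexture& sceneDepth)
{
 Vector2D screenPosition = camera.projectToScreen(light.worldPosition);
 float lightDepth = camera.getDepth(light.worldPosition);

 //If the stored scene depth at the light's screen position is closer to the
 //camera than the light itself, the light is occluded and gets no bokeh sprite.
 return sceneDepth.sample(screenPosition) >= lightDepth;
}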

The big downside of my method for rendering bokeh is that you need to know beforehand where the bokeh will appear. This means that more subtle bokeh sources like reflections and strong speculars are not practically doable. I can imagine some tricks to get bokeh working in, for example, planar reflections (if you know where the reflecting planes are), but beyond that it quickly becomes infeasible. The standard technique of searching the image for bokeh pixels handles this much better, so if you really need bokeh on speculars, then you will probably need to resort to such techniques.


Image from this article by MJP

I looked up a bunch of articles on bokeh implementations and couldn't find any mention of something similar to what I am doing. This surprises me because the idea is so simple and obvious. If anyone knows of any articles or games that already do this, then please let me know so that I can add a link.

That's it! My bokeh technique is simpler and much faster than most commonly used methods of rendering bokeh and it even fixes problems with temporal artefacts. It is also a limited technique that does not handle all bokeh effects, but in many cases it will probably be good enough. It definitely is for Cello Fortress!

Sunday, 24 August 2014

Tips and tricks for a successful convention booth

Last week we were at Gamescom, showing Swords & Soldiers II and Awesomenauts to the public. Gamescom is the biggest game convention in the world, drawing in a whopping 400,000 people. We have had booths at several conventions in the past years so I figured it might be interesting to share some tips and tricks based on our experiences.

Our booth at Gamescom was part of the Indie Megabooth. The Indie Megabooth is an awesome initiative where a lot of indies hire a big area together and fill it with their games. Since the Indie Megabooth has been so successful in recent years it is now a real organisation with dedicated people to manage it all.

There are several advantages to being part of the Indie Megabooth. For starters it is quite affordable and they take care of a lot of the organisational complexities. More importantly it gives gamers who like indie games a clear spot to go to, hopefully helping indie games thrive at a big convention where most booths are gigantonormous monsters from the big publishers. Another good reason to join the Indie Megabooth is that it is great to meet other indies and hang out with them. For example, we were next to Capybara Games, creators of Below, Super Time Force and Sword & Sworcery. Since I really look up to them it was inspirational for me to meet them and discuss game development.

As with all good things there are also some downsides to the Indie Megabooth, in this case mostly because it is so good. The Indie Megabooth is very popular among developers so they get way more requests for booth spots than they can handle. The result is that not everyone can get a booth. It also makes them not very flexible if you want a bigger booth.



To decorate our booth we used big cloth banners. This works really well: they weigh little and unlike posters they hardly tear or damage. You can just roll them up and bring them along. Banners are also a really cheap solution: the type we use costs only around €20 per banner (we got them at Drukwerkdeal.nl). The only downside to the banners we use is that they are slightly transparent, so if the surface behind the banner is coloured or uneven then you might see this through the banners.



The most important thing to us is that as many people as possible get to actually play our games. For this reason we put four screens in our booth: two for Swords & Soldiers 2 and two for Awesomenauts. Since both games support local multiplayer we could have up to ten people playing in our booth. This is quite a lot more than any of the other indie booths, where some could have only one player at a time since they brought just one computer for a single player game. Since players could also play single player on our screens we had between four and ten players at any given time. We rarely had unused computers for longer than a couple of minutes.



While we tried to cram as many players as possible into our small booth, our friends at Two Tribes had a different approach that I also really liked. They had only two screens for their new game Rive but had a slick design for the booth. They put in nice chairs and beanbags, giving players a very relaxed playing experience. They clearly focussed more on giving each player the best possible experience than on reaching as many players as possible. (Rive is also a great game, by the way, playing incredibly smoothly.)



If audio is important to your game then be sure to bring along some big headphones. There is a lot of noise at game conventions, especially from big booths. We were near an AMD stage where people received free stuff if the crowd shouted "AMD" loudly. At other moments they just played really loud dance music. The Indie Megabooth crew even gave all the indie exhibitors earplugs because of this. One of the other exhibitors had even brought along a microphone to talk to his players without having to shout. When I talked to him it felt quite awkward that he answered me through a microphone while I was right in front of him, so I wouldn't personally use this method. Still, I totally understand why he solved the sore-throat-from-talking-loudly-all-day problem this way.



The goal of going to a convention is of course marketing and business. We had a lot of good business meetings and we had many players at our booth, but unfortunately we didn't reach as much press this time as we had hoped. We asked around and most indie devs said the same thing: it was difficult to get a ton of meetings with press going this year. This might be because Gamescom doesn't show as much new stuff as E3, so it is a bit less interesting to the press.

A couple of developers did have a lot of press meetings. One had hired a PR agency to set up the press meetings. While our experiences with PR agencies have so far been very bad, this one had apparently done really well and set up a lot of interviews for them. Another dev said they had started to contact press four weeks in advance. We only started contacting press two weeks before the convention, so we might have also been too late for maximum reach. In general though most devs said they didn't meet as much press as they had hoped, so it seems like Gamescom might also just not be the best spot for this. Of all the conventions we have visited so far we had the most success at Eurogamer Expo a couple of years ago. I have no idea whether that was accidental or whether Eurogamer is actually better for reaching out to press.

From press we go to another exciting topic... utensils! It is a really good idea to bring along a bag of random utensils. My colleague Robin had brought along things like scissors, pins, several types of tape, screwdrivers and pens. Several of our neighbours borrowed them during the convention, so apparently not everyone brings such things along. You never know what might be wrong with your booth so it is good to pack these even if you don't expect to actually need them.



Convention visitors love button badges. They often stick them to their bag right away, hopefully triggering a tiny bit of extra word of mouth and definitely making sure the player remembers the game when he gets home. Buttons are really cheap to order in large batches so we just doled them out freely.



To my surprise our flyers were quite popular. I even saw quite a lot of visitors grab a flyer without playing the actual game.



If you are making a console game then be sure to ask the platform holder for permission to bring along testkits, or borrow special exhibition kits from them for your booth. Travelling with devkits is generally not allowed and you definitely don't want the risk of having to explain to Nintendo, Microsoft or Sony that a devkit was stolen from your luggage... We even got lucky and received two of these awesome exhibition stands from Nintendo. They look really professional and take up less space than a table with a television, so this was really cool.



It is really important when exhibiting at a big convention to never let your booth remain unattended. Always have someone there to look after your stuff. Theft happens a lot at these conventions. The worst time for this is during the night. We knew this beforehand so we brought small computers and carried them to the hotel every evening to be sure they were never left unattended at the booth. Several other indie devs apparently were not aware of this, resulting in two console testkits being stolen from the booths during the night. Never leave stealable stuff at your booth when you are not there, and don't trust convention security to keep your booth safe.

Another tip is to bring a couple of extra people and not have everyone at the booth all the time. Standing at the booth for five days is incredibly exhausting, especially with days like Gamescom's Saturday that last from 9:00 to 20:00. We were with three Ronimo devs and didn't realise until the last day that we could have let one person sleep late every day. This would have helped a lot, since getting up at seven every morning and working such long days is simply too tiresome. Maarten and Robin even had blisters on their feet after a couple of days...

The absolute highlight of being at Gamescom for me were the fans visiting our booth. We met one player who had played Awesomenauts for 2000 hours. Realising that we have made a game that quite a lot of players love so much is incredibly cool! Another player, r0estir0bbe, even brought us a bag of Swiss candy as a gift for the team. Thanks r0estir0bbe! ^_^

Sunday, 10 August 2014

The gross imperfections of tuning in music

As a cellist, one of my biggest challenges has always been to play notes at the exactly correct pitch. While the keys of a piano and the frets of a guitar make sure that those instruments basically always play notes at the right pitch (as long as the instrument itself is tuned correctly, of course), instruments like a cello and a violin allow the musician to play notes at any pitch, not just at the pitches of real notes. This gives endless possibilities, but it also means that if you put your finger just one millimeter too high or low, it already sounds out of tune and horrible.



Playing in tune has always been a big topic for me; I am always striving for that oh-so-difficult 'perfect pitch'. Not that I am horribly good at it, but that is exactly why I practice so much to get closer to it. This is why it came as such a shock to me to learn that there is no such thing as 'perfect pitch': several tuning systems exist and they all come with different opinions on what the exact frequency of specific notes should be. They are also all flawed in their own way. Our modern tuning system is called equal temperament, and it is not called that because it allows for perfect pitch, but because it spreads the pain and problems equally everywhere, instead of making some parts perfect and others a complete horror.

How can this be? Why is there no perfect system for tuning? To understand this, let's have a look at the frequencies of notes in our modern musical system, and how they relate to each other:

Note   Frequency   Difference with previous note   Percentage higher than previous note
A      440hz       -                               -
A#     466.16hz    26.16hz                         5.95%
B      493.88hz    27.72hz                         5.95%
C      523.25hz    29.37hz                         5.95%
C#     554.37hz    31.11hz                         5.95%
D      587.33hz    32.96hz                         5.95%
D#     622.25hz    34.92hz                         5.95%
E      659.26hz    37hz                            5.95%
F      698.46hz    39.2hz                          5.95%
F#     739.99hz    41.53hz                         5.95%
G      783.99hz    44hz                            5.95%
G#     830.61hz    46.62hz                         5.95%
A'     880hz       49.39hz                         5.95%

What you can see here is that each next note has a higher frequency than the previous one, and that A' is exactly twice as high as A. This is why playing an octave (A and A' at the same time) sounds really good: the upper frequency is exactly double the lower one. In fact, it sounds almost like a single note.

What you can also see here is that the distance between consecutive notes grows the higher we get. This way, every time we jump 12 notes for an octave, we get exactly double the frequency. Each note is approximately 5.95% higher than the previous one, so the notes are spaced equally when measured relatively.
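
Each step is a factor of the twelfth root of 2 (about 1.0595, hence the 5.95%), so twelve steps multiply out to exactly 2. This tiny standalone program reproduces the frequencies from the table above:

#include <cmath>
#include <cstdio>

int main()
{
 //Each semitone is 2^(1/12) times the previous frequency, so 12 steps give exactly an octave.
 const double semitone = std::pow(2.0, 1.0 / 12.0); //~1.0595, i.e. ~5.95% higher

 double frequency = 440.0; //start at A
 for (int step = 0; step <= 12; ++step)
 {
  std::printf("step %2d: %.2f hz\n", step, frequency);
  frequency *= semitone;
 }
 return 0;
}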

This all looks fine and dandy, and it is something I have known for years. However, it gets hairy when we look at the distance to that base note, the A. Note the last column:

Note   Frequency   Percentage above A
A      440hz       -
A#     466.16hz    5.95%
B      493.88hz    12.25%
C      523.25hz    18.92%
C#     554.37hz    25.99%
D      587.33hz    33.48%
D#     622.25hz    41.42%
E      659.26hz    49.83%
F      698.46hz    58.74%
F#     739.99hz    68.18%
G      783.99hz    78.18%
G#     830.61hz    88.77%
A'     880hz       100.00%

This may look fine, but keep in mind that I previously mentioned that an octave sounds so pleasing because the frequencies are exactly doubled. This goes for other intervals as well. The second-best-sounding interval, the perfect fifth, is at exactly 1.5x the base frequency, so that would be A (440hz) plus E (660hz). However, now look at the table again and note that E is not at 660hz, but at 659.26hz. Slightly out of tune! The same goes for the perfect fourth (A + D): the D is not the pleasing 33.33% higher, but the slightly off 33.48%.

This may seem like a tiny difference, but it is actually quite audible, and these aren't even the worst: the major third (A + C#) is at 25.99% instead of at 25%, which is a much bigger difference.

To really understand the problem, you need to hear it. This video explains it quite well by putting perfect intervals (50% higher, 33.33%, 25%) next to the intervals of a normal modern instrument (49.83%, 33.48%, 25.99%). Listen carefully to note the difference:



So why don't we fix this up by changing the frequencies to allow for perfect intervals? We could indeed do this, but it would only fix the intervals built on top of A. All other intervals would become different depending on the base note, because we would no longer keep that 5.95% step from note to note. No matter how you try, there is no system that results in perfect intervals everywhere: the pure intervals are ratios of small whole numbers (like 3/2 and 5/4), and no sequence of equal steps within the octave can hit all of those ratios exactly. I even tested this with other numbers of notes per octave (instead of the standard 12), but there is no system with perfect intervals.
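
For those who want to repeat that experiment, here is a rough sketch of such a test. The choice of intervals and the range of divisions are arbitrary; it simply measures how far the closest equal step lands from the pure fifth (3/2), fourth (4/3) and major third (5/4):

#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
 const double justRatios[] = { 3.0 / 2.0, 4.0 / 3.0, 5.0 / 4.0 };

 for (int stepsPerOctave = 5; stepsPerOctave <= 31; ++stepsPerOctave)
 {
  double worstErrorPercent = 0.0;
  for (double ratio: justRatios)
  {
   //Which equal-tempered step comes closest to this pure ratio?
   double nearestStep = std::round(stepsPerOctave * std::log2(ratio));
   double achieved = std::pow(2.0, nearestStep / stepsPerOctave);
   double errorPercent = std::fabs(achieved / ratio - 1.0) * 100.0;
   worstErrorPercent = std::max(worstErrorPercent, errorPercent);
  }
  std::printf("%2d notes per octave: worst interval error %.3f%%\n",
              stepsPerOctave, worstErrorPercent);
 }
 return 0;
}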

Letting go of the requirement that all distances are equal, we can choose new tunings that make sure that certain intervals produce perfect chords, while others may sound much worse. This is indeed how tuning worked in the 18th century: some chords sounded perfectly in tune, while others sounded much worse than they do today, making them practically unusable. I guess this explains why baroque music with many sharps or flats is extremely rare: it just doesn't sound acceptable with the tuning used back then.

The system used in the 18th century is called meantone temperament, while our current system is called equal temperament, because all chords sound only a little bit off and none are completely broken. I guess the reason equal temperament replaced meantone temperament is that it allows for much more variation in chord progressions: equal temperament allows all chords in all positions, while meantone temperament only allows specific ones. Those specific chords do sound better in meantone temperament, though.

Knowing that the western system with 12 notes per octave is not perfect also explains why other cultures have different numbers of notes. Arabian music for example has 24 notes per octave. This means that in between every one of our notes, they have one extra note. This creates a very different system of musical theory and a very different sound to Arabian music.



Most software for writing music makes it quite difficult to compose with more than the western standard 12 notes, but I have been told that some of the older Prince Of Persia games are notable because the composer abused the MIDI system to play real 24 tone Arabian music. I couldn't find any info on this online though, so I don't know whether that is true.

It is interesting to hear 24 tone music since it sounds quite alien and weird to the western ear, especially when used in a way like this on a special kind of piano:



So why is this all relevant today, now that equal temperament is the gold standard that nearly everyone in the western world uses? First, it is important to realise that it is not perfect and thus pitch is also not perfect. Comparing a note I play on my cello to one other note might make it sound in tune, while comparing that very same note to another note might result in an interval that is slightly off. This is not because I am playing it wrong, but because the intervals are simply not perfect in our modern equal temperament system.

The second reason this is important to me is that until recently I played in the Kunstorkest, a small amateur orchestra that plays baroque music. Indeed, this is music from the age of meantone temperament, so knowing how it was originally intended is important to us! Not that a hobby musician like myself has the skill to play all the subtleties of this difference, but it helps to at least understand what is going on here and why the conductor sometimes wants a note played slightly differently. To conclude, here's a short recording of a piece we played with the Kunstorkest:

Sunday, 27 July 2014

Using throttling to reduce network errors

Recently we managed to reduce the number of network errors in Awesomenauts by 10% by improving our throttling algorithm. While automatic throttling by a router is disastrous for a game, smart throttling by the game itself can be a good tool to make the game work better on crappy internet connections. Today I would like to explain what throttling is and how we approached this topic.

(Note that the throttling improvements were in Awesomenauts patch 2.5.3. This post is unrelated to the bandwidth optimisations in patch 2.5.4 that are currently being tested in beta.)

The basic idea of throttling is that if you detect that an internet connection cannot handle as much as you are sending, then you start sending less. This can happen on the side of the game, or on the side of the connection itself (by the modem or router, for example). If we keep sending more than the connection can handle, then either we lose a lot of packets or, even worse, the internet connection is lost altogether, causing a network error. Throttling is intended to keep this from happening.



The basic idea may sound simple enough, but actually implementing this is a lot more difficult. There are two problems: how can we reduce bandwidth dynamically, and how can we detect whether we need to throttle?

Let's start with how to reduce bandwidth. Modems usually have an approach to this that is simple enough: they just throw away some packets. This is a very efficient way of reducing bandwidth, but completely disastrous for any game. If a packet with important data is dropped then it needs to be resent. We cannot do this immediately, as the internet connection doesn't notify the game that a packet was dropped. The only thing we can do is wait for the acknowledgement to come in. If after a while we still haven't received an acknowledgement, then we conclude that the packet has probably been lost and needs to be resent.

Resending based on acknowledgements is the best we can do, but it is a pretty imperfect solution: the acknowledgement might still be under way. We cannot know whether it is, so we just need to pick a duration and decide to resend if that much time has passed. If we set this duration too long, then it takes very long before the packet is resent, causing extra delay in the gameplay if the packet really was dropped. If we choose a very short resending duration, then we will probably end up resending a lot of data that had actually already arrived. This wastes a lot of bandwidth and is not an option either.
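
A simplified sketch of what such acknowledgement-based resending could look like. The timeout value, the Packet type and the send callback are placeholders, not the actual Awesomenauts code:

#include <functional>
#include <vector>

struct PendingPacket
{
 Packet packet;   //data we sent but have not seen an acknowledgement for yet
 double timeSent;
};

void resendLostPackets(std::vector<PendingPacket>& unacknowledged,
                       double currentTime,
                       const std::function<void(const Packet&)>& send)
{
 //Too short wastes bandwidth on packets that were not actually lost,
 //too long adds a big delay whenever a packet really is lost.
 const double resendTimeout = 0.3;

 for (PendingPacket& pending: unacknowledged)
 {
  if (currentTime - pending.timeSent > resendTimeout)
  {
   send(pending.packet);           //assume it was lost and send it again
   pending.timeSent = currentTime; //restart the timer for this packet
  }
 }
}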



Since we cannot pick such a short resending time, we need to wait a little while before resending. This means that dropped packets arrive in the long run, but with an enormous delay. If for example the packet contained information on a player's death then he might die one second too late, which is really bad for the gameplay experience.

In other words: we never ever want the modem to throttle. We want to decide ourselves what gets thrown away, so that we can make sure that the really important packets are never dropped. If we need to throttle, then we want to at least throttle data that is less important. The problem with this is that if we could get away with sending less, then we would always do that instead of only when throttling. After all, an important goal in multiplayer programming is to use as little bandwidth as possible. This means that throttling comes with a drawback and we don't want to do it unless necessary.

In Awesomenauts we reduce bandwidth and packet count when throttling by sending fewer position updates. During normal gameplay Awesomenauts sends the position of a character 30 times per second. This way if a player turns around quickly, other players will know about it as soon as possible. If we send position updates less often, then we essentially add a bit of lag: we don't send the latest information as soon as we know it. We would of course prefer not to do that. However, if the alternative is that the modem is going to throttle by randomly throwing away packets that might be important, then our hand is forced and we prefer sending fewer position updates.



Now that we know how to throttle we get to the much more important question: when to throttle. How can we know whether we need to throttle? It is not possible to just ask an internet connection how much bandwidth it can handle. Nor do we get notifications when the connection starts dropping packets because we are sending too much. We therefore can never know for sure whether throttling is needed and have to deduce this somehow.

Our initial approach to this was to throttle if the ping was too high. The idea is that if a connection cannot handle the packets it needs to send, then latency will increase and we can detect this. This works fine for connections that normally have low ping: if the standard ping is 50ms and suddenly it rises to 300ms, then it is extremely likely that we are sending too much and need to throttle to keep the connection from being lost altogether.

This approach is too simplistic however: internet connections are a very complex topic and can have all kinds of properties. Some people might indeed have a connection with low ping but a painfully low maximum bandwidth. However, if an Australian and a European player are playing together and they both have a really good internet connection, then their ping will still be high because the distance is so large. In this case throttling won't help at all. In fact, since our throttling essentially increases lag by sending less often, throttling in this case will actually decrease the quality of the connection!

This brings us to the change we recently made in Awesomenauts patch 2.5.3. Instead of looking at ping, we now look at packet loss. Awesomenauts uses UDP packets and we have our own manual reliability system, since various parts of the game require various degrees of reliability. This means that we send and receive our own acknowledgements and thus know exactly how many packets are lost. This is a much better indicator of connection problems than ping. If a lot of packets are dropped by the connection, then apparently we need to throttle to keep from sending too much over a limited internet connection.

It doesn't end there though. I already mentioned that internet connections are a complex topic, and this new plan is thwarted too. Some internet connections are just inherently lossy. For example, maybe someone is playing on a wireless connection and has a wall in between the computer and the modem. Maybe this causes 10% of all packets to be lost, no matter how many packets are sent. I don't know whether wireless routers actually work like that, but we have definitely seen connections that always drop a percentage of the packets, no matter how few we send. Since throttling increases lag we only want to do it when it significantly improves the internet connection. If, as in this case, throttling does not reduce the number of dropped packets, then we do not want to throttle.

Ronimo programmer Maarten came up with a nice approach to solve this problem. His throttling algorithm is based on letting the game perform little experiments. If a player has high packet loss, then the game enables throttling and starts sending less. Then it measures the packet loss again. If packet loss decreased significantly, then we keep throttling. If packet loss remains roughly the same, then we stop throttling and start sending at maximum sending rate again.
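
A rough sketch of such an experiment-driven throttle is shown below. The thresholds, the durations and the state machine itself are illustrative guesses, not Maarten's actual implementation:

class ThrottleController
{
private:
 enum class State { Normal, Experimenting, Throttling };

 State state = State::Normal;
 float lossBeforeExperiment = 0.0f;
 double experimentStartTime = 0.0;

public:
 //Call this regularly with the measured packet loss (0 to 1) over the last interval.
 void update(float packetLoss, double currentTime)
 {
  switch (state)
  {
  case State::Normal:
   if (packetLoss > 0.1f) //connection looks lossy: start a throttling experiment
   {
    lossBeforeExperiment = packetLoss;
    experimentStartTime = currentTime;
    state = State::Experimenting;
   }
   break;

  case State::Experimenting:
   if (currentTime - experimentStartTime > 10.0) //measure for a while first
   {
    //Keep throttling only if it clearly reduced the packet loss.
    state = packetLoss < 0.5f * lossBeforeExperiment ? State::Throttling : State::Normal;
   }
   break;

  case State::Throttling:
   if (packetLoss < 0.02f) //connection has recovered, go back to full sending rate
    state = State::Normal;
   break;
  }
 }

 //While experimenting or throttling, the game sends fewer position updates.
 bool shouldThrottle() const
 {
  return state != State::Normal;
 }
};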



The result of this approach is that we only throttle if it actually improves the internet connection. If throttling does not help, then we only throttle shortly during those experiments. These experiments take place automatically during gameplay, but are short and subtle enough that players won't actually notice this happening. If the connection is really good, then we never ever throttle: we don't even do those experiments.

The result of adding this throttling algorithm is that network errors due to losing the connection have been reduced by 10%. This is not a spectacular improvement that many players will have noticed directly, but it is definitely significant enough that we are happy with this result.

In conclusion I would like to stress that internet connections are extremely unpredictable. We have seen all kinds of weird situations: connections that are really fast but stop for a few seconds every couple of minutes, connections that send packets in groups instead of immediately, connections that have low ping but also low bandwidth capacity, and many other combinations of properties. The big lesson we have learned from this is to not make assumptions about the properties of internet connections, and to assume any random weirdness can happen on anyone's internet connection. This is why I like the approach with the experiments so much: instead of assuming throttling works, it just tries it.

Sunday, 20 July 2014

Evolving level art from Swords & Soldiers 1 to 2

In Swords & Soldiers 2 we are stepping up the quality of everything compared to the original. Level art is one of the areas where we are improving the game a lot: with our improved tools we can make the levels look a lot more detailed and varied. We also approach level art in a very different way now. Today I would like to discuss how the approaches in the two games differ and why we switched to different techniques.



In the original Swords & Soldiers the levels were mostly procedurally generated. The only parts that were placed by hand were the ground itself and the small props on it, like grass and flowers. The foreground and background layers were randomly filled with objects. Since this is essentially a 1D game I simply wrote an algorithm that placed mountains (or trees, or houses) at varying distances from each other. Each layer had a bunch of different textures and settings for how densely it should be filled with objects.
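
A sketch of that kind of 1D layer filling is shown below; the structs and the density parameters are illustrative, not the original Swords & Soldiers code:

#include <cstdlib>
#include <vector>

struct PropPlacement
{
 float x;          //horizontal position along the level
 int textureIndex; //which texture from the layer's art set to use
};

//Fill one background or foreground layer with randomly spaced props.
std::vector<PropPlacement> fillLayer(float levelLength, int textureCount,
                                     float minSpacing, float maxSpacing)
{
 std::vector<PropPlacement> result;
 float x = 0.0f;
 while (x < levelLength)
 {
  //Pick a random texture and a random gap to the next object,
  //controlled by the layer's density settings.
  int texture = std::rand() % textureCount;
  result.push_back({ x, texture });
  x += minSpacing + (maxSpacing - minSpacing) * (std::rand() / float(RAND_MAX));
 }
 return result;
}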



There are many arguments in favour of procedurally placing objects like that, but at the time only one really mattered: we didn't have any tools to place objects by hand. I was the only full-time programmer on Swords & Soldiers 1 and I had to build all the gameplay plus the entire engine, so there was no time to create such tools. I did have a couple of interns helping with programming, but in total there was just way too little time to implement any real tools for level creation.

Levels therefore had to be created in Notepad, as I previously described in this blogpost. This worked surprisingly fast and simple for our artists and designers, but of course precise control over graphics is not possible this way.



Our artists had a limited palette of options to differentiate levels and create varied places. They made six different art sets they could use for parts of levels: desert, snow, grass, rocks, jungle and cherry blossom. They could set a gradient for the sky colour and a second gradient to modify the colour of background props. They could set the weather to rain or snow, modify the density of background props, foreground props and clouds and place smaller props on the landscape by hand.

While all of these options together may sound like a decent set of tools to create unique levels, in practice this is not the case. Levels in Swords & Soldiers 1 quickly blend together and start to look similar. One reason for this is that levels often contain more than one environment type. They might for example vary between rocks and desert. An environment type always varies randomly between all the textures it has. The result is that after only a few levels the player has seen nearly all the level art in the game. In fact, the very long Berserker Run level contains all six settings and thus contains nearly all the level art the game has.



Another problem is that randomly placed objects may stand in a different spot in every level, but they all feel the same. No matter at what distance the same set of mountains is positioned, it always feels like the same place. This could of course be done better with a more advanced procedural algorithm and a smarter set of textures, but in practice procedural content nearly always suffers from this.

This problem even shows in a game like Diablo III, which has high quality procedural world generation at its core. Blizzard spent enormous amounts of time on creating sophisticated algorithms and tons of art assets for random world generation, and yet when I played through Diablo III it felt like I was constantly visiting the same places. Either a place is uninteresting, with some random trees and bushes, or it is unique with some special big props. The number of unique props in Diablo III is large but not infinite, so they start to feel repetitive quickly.

I have not seen a single game with procedural worlds where the visuals did not become repetitive really quickly. In that sense I am slightly afraid for No Man's Sky: the game looks mindblowingly amazing, but I can hardly imagine they solved this problem well enough to keep planet discovery visually interesting. (I sure hope they did though, because that game looks soooooo good!)

In Swords & Soldiers 2 we therefore concluded that we did not want to use this approach anymore. We also don't need to work around a lack of tools anymore, since we now have the impressive toolset we created for Awesomenauts. All of that is at our disposal for Swords & Soldiers 2 as well.

By placing most objects by hand our artists can now create real compositions. The way trees are arranged can create forests, small clearings and green fields. This way, even without additional assets, it is possible to create places that feel much more real and unique than any procedural algorithm can. Of course one could add procedural rules to create such places, but there are limits to how much can be done this way, especially because every rule that creates a specific kind of place can be used only a couple of times if repetitiveness is to be avoided. With a good artist there really are no limits to how much variation and uniqueness can be created. Most of the levels in Swords & Soldiers 2 are filled with art by Ronimo artist Ralph and he is a true master at creating interesting compositions and little micro-stories through object placement.



Another big difference between Swords & Soldiers 1 and 2 is that in 2 we have much bigger props. In the original game mountains and trees were relatively small, so you always saw a lot of them. In the sequel we have scaled them much larger, so you might see only one or two really large mountains in the background, instead of dozens. This makes it all feel a lot larger, drawing the player more into the world instead of looking at funny miniatures. At the same time this makes the backgrounds look less repetitive: if you don't see ten mountains at the same time, then it is also much less obvious that several of them use the same texture.

This is strengthened by another big improvement in Swords & Soldiers 2: there are many more unique assets for specific levels. For example, one level is set at a Viking golf course and has golf-related objects everywhere. These props are not used in any of the other levels, making this level unique and memorable. This way the player will remember more specific things than in Swords & Soldiers 1, where all the levels looked quite similar. A level doesn't even need all that many unique props: since background objects are so much larger now, one really big prop can sometimes already define a level.



Another improvement that I would like to mention today is the art style. The original game had a very simple cartoony look, while the sequel is much more detailed and painterly. Our artists went for a style reminiscent of French animation and it works beautifully. Most of the level art in Swords & Soldiers 2 is drawn by Adam Daroszewski. He worked as a freelancer for us and did an amazing job at creating a painterly and coherent look.

I have spent most of this blogpost explaining how Swords & Soldiers 2's level art is better than the original's. There is however also one big downside: it is much more work to make. With the lack of proper tools and the simple style we had when we made Swords & Soldiers 1, decorating levels took very little time. In part 2 everything requires much more work. It really shows, but it is an important trade-off to be aware of. Not having proper tools means that there are a lot of things that artists simply cannot do, and this saves huge amounts of time.

Limitations like that are a very effective way of limiting the amount of time it takes to make a game. The better your tools, the bigger the risk of using them too much. We have already spent much more time making Swords & Soldiers 2 than originally planned and the game isn't even done yet. At the same time it is looking so good that spending so much time is totally worth it. I hope that when the game launches you will be as enthusiastic about it as we are! ^_^

In conclusion, I would like to say that I think procedural level generation is great for games that revolve around it, but for a designed experience hand-made levels work much better. Levels in a game like Swords & Soldiers 2 look better when made by hand than when generated.

Sunday, 6 July 2014

Why composition is often better than inheritance

An important question in code structure is whether to make classes work together through composition or inheritance. The "has a" relationship versus the "is a" relationship. For example: a seat has a cushion and a seat is a piece of furniture. While in this example the difference is really obvious, there are in practice many cases where both could make sense. Does a character in a game have a collision box, or is it a collidable object? These are not the same, but each can be used as the main structure for collision handling (or even both together) and it is not always clear which is better. In my experience intuition often favours inheritance, but it gives so many problems that in many cases composition is better.

Let's first have a look at an example of how the same problem can often be solved in both ways. In Awesomenauts we have a separate class that handles the 'physics' of a character. This class handles things like gravity, knockback, sliding and jumping. Since a character is a physics object, it would make sense to say that Character inherits from PhysicsObject. It is also possible to say that a character has a PhysicsObject that handles its collision. Instead of has a we might also word this as uses a.



Let's see what this relation would look like in code. This is a highly simplified version, but it shows the basic concepts nicely:

class PhysicsObject
{
public:
 Vector2D position;

 void updatePhysics();
 void applyKnockback(Vector2D force);
 Vector2D getPosition() const;
};

//Using PhysicsObject through inheritance
class CharacterInheritance: public PhysicsObject
{
public:
 void update()
 {
  updatePhysics();
 }
};

//Using PhysicsObject through composition
class CharacterComposition
{
 PhysicsObject physicsObject;

public:
 void update()
 {
  physicsObject.updatePhysics();
 }

 void applyKnockback(Vector2D force)
 {
  physicsObject.applyKnockback(force);
 }

 Vector2D getPosition() const
 {
  return physicsObject.getPosition();
 }
};

As you can see in this code, CharacterInheritance is shorter. It also feels more natural, since we don't have to write these extra accessor functions for applyKnockback and getPosition. However, after years of creating both of these kinds of structures I have learned that in a situation like this, using composition is actually more flexible, less sensitive to bugs and even more understandable than using inheritance.

Flexibility

Let's start with the flexibility argument. What if we want to make an enemy that consists of two blocks linked by an energy chain. This enemy does damage to anyone who touches the chain. The blocks can move separately, making for some interesting positional gameplay when fighting this enemy. The two blocks this object consists of move entirely separately and each have their own physics. Knocking back one block does not influence the other block. Yet they really are one character: they have one healthbar, one AI, one entry on the minimap and can only exist together.



With a composition structure it would be pretty straightforward to create this enemy, since we could just create a character that has several PhysicsObjects. With inheritance however we cannot make this as a single character, since we cannot inherit from PhysicsObject twice. We could probably work around this somehow and make it work with inheritance, but it quickly becomes much less simple and intuitive.
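
In terms of the earlier example, the two-block enemy could simply own two PhysicsObjects. The details below are made up for illustration, but they show why composition handles this so naturally:

class ChainedBlocksEnemy
{
 PhysicsObject blockA;
 PhysicsObject blockB;
 int health; //one healthbar shared by both blocks

 void update()
 {
  blockA.updatePhysics();
  blockB.updatePhysics();
  //damagePlayersTouchingChain is a made-up helper for this example.
  damagePlayersTouchingChain(blockA.getPosition(), blockB.getPosition());
 }

 void knockbackBlockA(Vector2D force)
 {
  blockA.applyKnockback(force); //knockback on one block does not affect the other
 }
};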

If you read this and think this is a far-fetched situation and not all that relevant, then you have probably never been in a project of significant size where gameplay was king. Game designers constantly come up with game mechanics that are exceptions to what you already programmed. Saying no to these just because your code structure cannot handle them will seriously damage your game quality, since in the end the only argument should be whether it is fun to play (okay, and maybe whether it is achievable within the scope of the project).

Just take a look at the diversity of the hundreds of upgrades in Awesomenauts and you will realise how many of those were likely exceptions to what our code could do. Our designers came up with those upgrades and the code had to make it work somehow. An important goal in game programming is flexibility: making your code in such a way that it is relatively easy to add whatever weird whim the game designers come up with today. In most cases composition is much more flexible than inheritance.

Readability

My next argument against inheritance is readability. "Readability" is always accompanied by "sensitivity to bugs", since if a programmer does not really understand how something works, then he will likely break it when working on it.

At Ronimo one of the more important rules in our coding standard is that we strive to keep the size of all classes below 500 lines of code. We don't always see ways to keep to that, but the goal is clear: keep classes relatively short so that they are easy to understand and a programmer can fit the workings of the entire class in his head.

Let's say our code has grown over time as more and more features were added to our game, and Character and PhysicsObject have both grown to 500 lines of code each. We have also added two more classes that use PhysicsObject: Pickup and Projectile. These are also 500 lines of code each.



In such a situation inheritance will usually make all of this very confusing. We might have strived to keep as much private as possible, but in the end inheritance almost always introduces a bunch of virtual and protected functions. In other words: PhysicsObject has become pretty intertwined with its three child classes Character, Pickup and Projectile. In my experience this nearly always happens: inherited classes work together to create complex behaviour and intertwine more and more over time.

By itself this might not be that much of a problem, but to really understand any of these classes we now need to know them all. If we restructure something in PhysicsObject because Projectile needs a new feature, then Character and Pickup will also be involved. To understand the entire situation, the programmer now needs to think about four different classes and fit all 2000 lines of code in her head simultaneously. In my experience this quickly becomes impossible: this is just too much code to really grasp it all at once without starting to mix things up. The result is that readability decreases and the programmer becomes more likely to introduce bugs because she overlooked something.

Of course a composition based structure does not magically fix all of this, but it does help a lot in keeping this structure simple and understandable. With composition there is no virtual or protected, so the separation between PhysicsObject on the one side and Character, Projectile and Pickup on the other is much clearer. This keeps the classes from intertwining over time and makes it easier to keep them truly separate. While they could in theory also have been kept separate with inheritance, in my experience this is too difficult to maintain, while composition enforces it. We have numerous cases in our code where an inheritance structure of classes that all grew too big is hardly understandable any more.

Diamond problem

Any discussion of inheritance versus composition is incomplete without mentioning the famous diamond problem. What happens when a class A inherits from two classes B and C that both inherit from a single parent D? A now has a D twice and chaos ensues.



There are several solutions to the diamond problem in C++. For example, you might just accept that A has double D. (Haha, double D! Ahum.) Or you might use virtual inheritance to solve it. However, both solutions cause all kinds of problems and potential bugs, so just avoiding the diamond problem altogether is highly preferable.

This problem is usually not around in the initial design of a gameplay code structure, but as features are added it might pop up occasionally. The problem is that when it does, it can often be very difficult to come up with a good solution for how to get rid of it without doing a lot of refactoring.

Nevertheless, the diamond problem has only popped up a couple of times in my 12 years of object oriented programming, so I don't think the diamond problem is a very strong argument against inheritance.

(By the way, even without the diamond problem complex multiple inheritance structures tend to get a little sleazy, as I explained in this previous blogpost about why this is not always this.)

Use inheritance, but use it less

Does all of this mean that I am against inheritance in general? Nope, absolutely not! Inheritance is very useful for a lot of things, for example for polymorphism and in design patterns like listeners and factories. My point is just that inheritance is not as generally useful as it might seem. There are lots of cases where inheritance may seem the cleanest solution, while in practice composition would be better. So use inheritance, but don't overuse it.

Saturday, 28 June 2014

Finding bugs through autotesting

Some bugs and issues can only be found by playing the game for ages in a single play session, or by triggering lots of random situations. As a small studio we don't have the resources to hire a ton of people to do such tests, but luckily there is a good alternative: hacking automated controls into the game and letting it test itself. We have used this method in both Swords & Soldiers and Awesomenauts and found a bunch of issues this way.

Autotests are quite easy to build. The core idea is to let the game press random buttons automatically and leave it running for many hours. This is very simple to hack into the game. However, such a simplistic approach is also pretty ineffective: randomly pressing buttons might mean it takes ages to simply get from the menu to actual gameplay, let alone ever actually finishing any levels. It gets better if you make it a little bit smarter: increase the likelihood of pressing certain buttons, or even just automatically press the right button in certain menus to get through them quickly. With some simple modifications you can make sure the autotesting touches upon many different parts of the game.
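
A small sketch of such a weighted random input generator; the button list and the weights are examples, and a real autotester would of course hook into the game's own input system:

#include <cstdlib>
#include <vector>

enum class Button { Up, Down, Left, Right, Jump, Attack, Confirm, Back };

struct WeightedButton
{
 Button button;
 int weight; //higher weight means pressed more often
};

//Pick a random button, biased so the autotester reaches gameplay instead of
//wandering through menus forever.
Button pickRandomButton(const std::vector<WeightedButton>& buttons)
{
 int totalWeight = 0;
 for (const WeightedButton& entry: buttons)
  totalWeight += entry.weight;

 int roll = std::rand() % totalWeight;
 for (const WeightedButton& entry: buttons)
 {
  if (roll < entry.weight)
   return entry.button;
  roll -= entry.weight;
 }
 return buttons.back().button; //not reached if all weights are positive
}

//Example: make Confirm much more likely than Back so menus are passed quickly.
//Button next = pickRandomButton({ { Button::Confirm, 10 }, { Button::Back, 1 },
//                                 { Button::Jump, 5 }, { Button::Attack, 5 } });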



This kind of autotesting has very specific, limited purposes. There are a lot of issues you will never find this way, like animations not playing, text not displaying, or characters glitching through walls. The autotester does not care and keeps pressing random buttons. Basically anything that needs to be seen and interpreted is difficult to find with autotesting, unless you already know what you are looking for and can add a breakpoint in the right position beforehand.

Nevertheless there are important categories of issues that can be found very well through autotesting: crashes, soft-crashes and memory leaks. A soft-crash is a situation where the game does not actually crash, but the user cannot make anything happen any more. This happens for example if the game is waiting for a certain event but the event is never actually triggered. Memory leaks are when the game forgets to clean up memory after usage, causing the amount of memory the game uses to keep rising until it crashes. Especially subtle memory leaks can take many hours before they crash and are thus often never found during normal development and playtesting.

Another category of issues that can be found very well through autotesting is networking bugs. This one is very important for Awesomenauts, which has a complex matchmaking system and features like host migration that are hard to thoroughly test. Our autotesting automatically quits and joins matches all the time, potentially triggering all kinds of timing issues in the networking. If you leave enough computers randomly joining and quitting for long enough, almost any combination of timings is likely to happen at some point.

Recently we needed this in Awesomenauts. After we launched patch 2.5 a couple of users had reported a rare crash. We couldn't reproduce the crash, but did hear that in at least one case the connection was very laggy. Patch 2.5 added Skree, a character that uses several new gameplay features (most notably chain lightning and spawnable collision blocks). This made it likely that the crash was somewhere in Skree's netcode.



We tried reproducing the crash by playing with Skree for hours and triggering all kinds of situations by hand. To experiment with different bad network situations we used the great little tool Clumsy. However, we couldn't reproduce the crash.

I really wanted to find this issue, so I reinvigorated Awesomenauts' autotesting system. We had not used that in a while, so it was not fully functional anymore and lacked some features. After some work it functioned again. I made the autotester enter and leave matches every couple of minutes. Since I didn't know whether Skree was really the issue I made the game choose him more often, but not always. I also made the autotester select a random loadout for every match and made it immediately cheat to buy all upgrades. The autotester is not likely to accidentally buy upgrades otherwise, so I needed this to have upgrades tested as well.

I ran this test on around ten computers during the Netherlands-Australia soccer match. While we beat the Australians, our computers were beating this bug. Using Clumsy I gave some of those computers really high artificial packet loss, packet reordering and packet duplication.

Watching the computer press random buttons is surprisingly captivating, especially as it might leave the match at any moment. Simple things like a computer being stuck next to a low wall become exciting events: will it manage to press jump before quitting the match?

Here is a video showing a capture of four different computers running our autotest. The audio is from the bottom-left view. Note how the autotester sometimes randomly goes back to the menu, and can even randomly trigger a win (autotesters are not tactical enough to destroy the base otherwise):



And indeed, after a couple of hours already three computers had crashed! Since I had enabled full crash dumps in Windows, I could load up the debugger and see exactly what the code was doing when it was crashing.

The bug turned out to be quite nice: it required a very specific situation in combination with network packets arriving out of order in a specific way. When Skree dies just after he has started a chain lightning attack, the game first sends a chain lightning packet and then a character destroy packet. If these arrive out of order because of a really bad internet connection, the character destroy packet can arrive first. In this case the Skree has already been destroyed when his lightning packet is received. Chain lightning always happens between two characters, so the game needs both Skree and his target to create the chain lightning.

Of course we know that this kind of thing can happen when sending messages over the internet, so our code actually did check whether Skree and his target still existed. However, due to a typo it created the chain lightning if only one of the two characters existed, instead of only if they both existed. This caused the crash. Crashes are often just little typos, and in this case accidentally typing || ("OR") instead of && ("AND") caused this crash.
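
The difference is tiny in code, which is exactly why it slipped through. Schematically (not the actual Awesomenauts code) it comes down to this:

//Broken: creates the lightning if either character still exists, which
//crashes when one of them has already been destroyed.
if (skree != nullptr || target != nullptr)
 createChainLightning(skree, target);

//Fixed: only create the lightning if both characters still exist.
if (skree != nullptr && target != nullptr)
 createChainLightning(skree, target);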

Once we knew where the bug was, it was really easy to fix it (the fix went live in hotfix 2.5.2). Thus the trick was not in fixing the code, but in reproducing the issue. This is a common situation in game programming and autotesting is a great tool to help reproduce and find certain types of issues.