What is Resolution? A deep dive into some techniques in use
It seems that Remedy and the Xbox One have been at the heart of another scandal after the studio's latest game, Quantum Break, hit outlets for review and analysis. With the flames now fanned, let's take a look at the techniques the game may be using, along with many others.
Based on my preview and the team's own papers and work, the game targets a 1080p output while using lower-resolution buffers to create that final image. As I have not yet had hands-on time with the game I cannot analyse it at that level, but the team have released a statement since this came to light this week, which states:
Now this is a sorry situation (and the main reason I hold back information like this until it can be confirmed and clarified), as the game will indeed sell fewer copies for this reason alone, which is not only madness but deeply sad. In addition, the team are correct that a simple pixel count of a single buffer, or a section of a screen grab, is never as important as it appears to some. I talked about this over a year ago: new techniques and AA solutions are making pixel counting both harder and less relevant than many think. Allow me to explain in some depth.
A game's frame is, and always has been, made up of many stages and samples, and these come in at different resolutions. This is true even on the highest PC specification you can buy; it is simply a consequence of the techniques needed for the real-time, interactive performance that games demand. You can reduce any machine to a crawl very quickly if you are not clever about these things, not just in rendering but in programming as a whole. These intelligent compromises are not only needed but always evolving, and this generation, as you have heard me say before, is no different.
Reconstruction, reprojection, MSAA, PBR and multiple render targets are just some of the areas that have seen much innovation here, and Quantum Break is yet another example of exactly that. I have contacted Remedy for some detail on the method in use and how it all works, but unless and until I get a response I can only talk about known methods and what the team have said, so that is what I will cover now.
The biggest and most vocal case this generation involved the technical talent over at Guerrilla Games with its launch title Killzone Shadow Fall. Being one of the first games to split its SP/MP targets, aiming at 1080/30 for single player and 1080/60 for multiplayer, it had to use clever techniques to hit these numbers in the launch window. A lawsuit was brought, and later rejected, even though the entire case was built on incorrect information: the claim that the MP never output a full 1920x1080 image. That is wrong, as it did so each and every frame, but how it achieved this was not as per most games, nor indeed the single player.
It is referred to as reprojection, and others have called it interlacing, which to some extent it is, but only in principle. Unlike the fields that make up a complete frame in an interlaced image, this technique works across the horizontal plane rather than the vertical, i.e. the 1920 pixel columns instead of the 1080 rows. But what does this mean in practice?
Well, it means that rather than the standard render-and-discard approach, they decided to reuse frame information to not only improve image quality (IQ) with a temporal AA, but also reduce the rendering cost of each frame and make the 60fps target easier to reach. To achieve this they keep three frames in a buffer, along with a complete 1080 image to help with the AA: the current frame, the last frame, and the frame before that.
Each frame, only half the pixel columns are rendered; the odd columns are created, or "reprojected", from the previous frames' data, which includes motion vectors (direction of travel, and thus expected location in the new frame) and colour. If all this fails, simple interpolation from the surrounding pixels is used instead. This can at times create issues, as you get some distortion or a stippled effect, most easily spotted on alpha effects such as rain. But by and large it was a success and went largely unnoticed, other than a "different" AA choice being observed versus the single player. As you can see from my example, this is reprojection, with each odd column created from previous data or its nearest neighbour. You can see where the interlacing idea came from, but unlike the field-based creation of an interlaced image, the final progressive display was always a 1920x1080 image, created from a 960x1080 render with the further 960 pixel columns calculated from stored data. This would not have saved them 50% of the render time, as all this calculation was not free, but it did save enough to sustain the higher MP frame rate.
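The column-based scheme described above can be sketched in a few lines. This is a minimal illustration under my own assumptions, not Guerrilla's code: even columns are freshly rendered, odd columns are pulled from the previous reconstructed frame via a horizontal motion vector, and a nearest-neighbour fallback fills anything history cannot.

```python
# Hypothetical sketch of column reprojection (not Guerrilla's implementation).
import numpy as np

def reconstruct(half_frame, prev_full, motion_x, width=8):
    """half_frame: (H, W/2) newly rendered even columns.
    prev_full:  (H, W) last reconstructed frame, or None on the first frame.
    motion_x:   (H, W) per-pixel horizontal motion in pixels."""
    h = half_frame.shape[0]
    full = np.zeros((h, width), dtype=half_frame.dtype)
    full[:, 0::2] = half_frame                # even columns: this frame's render
    for y in range(h):
        for x in range(1, width, 2):          # odd columns: history, else fallback
            if prev_full is not None:
                src = int(round(x - motion_x[y, x]))   # expected position last frame
                if 0 <= src < width:
                    full[y, x] = prev_full[y, src]
                    continue
            full[y, x] = full[y, x - 1]       # simple nearest-neighbour interpolation
    return full
```

With a static scene (zero motion) the odd columns simply take last frame's value at the same position, which is exactly why static imagery looks like a full render while fast motion falls back to interpolation and can show the stippling mentioned above.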
This is one new way of handling resolution and IQ that proved a success, and as you can see it is not as simple as saying "it's a 960x1080 image". But this is NOT the same technique that Remedy has used, it is far from the only one in use, and I am sure more are to come.
But this is reprojection, and didn't Remedy talk about reconstruction? Yes they did, and so the above is only one way to achieve a result. Let's move on to another method, then…
As I have talked about in my Uncharted 4 articles, that team also seems to be using temporal reconstruction methods within its pipeline to improve image quality and shadow maps, as we saw in previous games. The shader passes do not all fit within a single frame's budget and are instead spread over multiple frames, the image building up more information and quality over time. The team had aimed for adaptive tessellation on character silhouettes, but instead seems to be using MSAA/EQAA edge and colour samples in conjunction with its shaders to improve detail and clarity via a reconstruction method. The shadows in certain scenes are a clear sign of this, as I covered before, and in the MP you can see signs of the stored information in certain areas, along with what I covered in my E3 look last year: shader information is built up over time, both from additional samples being added (prolonged shader passes) and from previous frames' data, to "reconstruct" and improve the final image. As the game is not yet out I cannot go into any detail, but the results speak for themselves. Just to confirm, this does NOT mean the game is rendering below a full 1920x1080 geometry buffer, as it most likely is rendering one.
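The idea of spreading a costly pass over several frames can be illustrated with a tiny accumulator. This is my own generic sketch (the function name and weighting are hypothetical, not Naughty Dog's pipeline): each frame contributes one new sample set, and a running mean converges towards the full-quality result the longer a region stays on screen.

```python
# Hypothetical sketch of temporal accumulation across frames.
def accumulate(history, new_sample, frame_count):
    """Progressive refinement: the history buffer converges to the mean
    of all contributed samples as more frames add information."""
    if history is None:
        return new_sample
    # Incremental mean: old + (new - old) / n, so early frames are rough
    # and quality builds up over time, as described above.
    return history + (new_sample - history) / frame_count
```

After four frames contributing samples 1, 2, 3 and 4, the buffer holds their mean, 2.5; a scene cut would reset the history and the build-up would visibly start again, which is one of the tell-tale signs of such methods.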
Another company using such methods, in more extravagant ways, is Ubisoft. In Unity, reprojection was used in parts of the game's rendering for shadow maps, and Rainbow Six Siege uses what it calls a temporal filter pass (visible in the menu options on PC), combined with a 2x MSAA sample, to clean up edges created from a half-resolution render. Yes, that means that on PC and PS4 the game renders its final 1920x1080 image from a 960x540 buffer, which would normally result in a horribly rough and pixelated display. But with its MSAA passes sampling edges, and most likely colour, the image quality and stability are greatly improved. Even if it can look slightly lower quality than a full 1080 display, it certainly does not look bad in practice, and it can improve performance by around 40%, important when 60fps is the target on limited hardware. It does, however, create slightly more unstable results on ambient occlusion, lighting and the like, which can shimmer and break up more, and this is something I noted in my QB preview.
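It is worth noting why a four-times pixel reduction yields "around 40%" rather than a 4x speed-up. This is back-of-the-envelope arithmetic with my own illustrative numbers, not Ubisoft's profiler data: only part of a frame's cost scales with pixel count, so the saving is capped by that fraction.

```python
# Hypothetical cost model: only the pixel-bound fraction of the frame
# scales with resolution; geometry, CPU and fixed costs do not.
def frame_time_saved(full_w, full_h, render_w, render_h, pixel_bound_fraction):
    pixel_ratio = (render_w * render_h) / (full_w * full_h)
    return pixel_bound_fraction * (1.0 - pixel_ratio)

# 960x540 is one quarter the pixels of 1920x1080; if roughly 55% of the
# frame were pixel-bound (an assumed figure), the saving lands near 41%.
saving = frame_time_saved(1920, 1080, 960, 540, pixel_bound_fraction=0.55)
```

This is also why quoted percentage gains vary so much between games: the same resolution drop saves very different amounts depending on how pixel-bound the engine is.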
Sebastian Aaltonen, Senior Lead Programmer at RedLynx studios, gave a talk on this at SIGGRAPH last year which was very, very interesting, and I suggest checking it out via the link below. Unlike the Rainbow Six method, which uses MSAA to improve the geometry from its lower resolution while post effects and temporal AA reduce the impact on the final result, but which remains a direct lower-resolution buffer, this technique reconstructs the final 1080 buffer from a lower starting point using ordered grid sampling. On the consoles, for example, a jittered or programmable sample point can be taken at the sub-pixel level to improve the results and widen the choices further. The UV coordinates, tangents and the like are all interpolated and, by the team's own words, hard to spot. The performance gains shown on Xbox One demonstrate not only why this is most certainly a great thing to do, but also that when percentage improvements are quoted, they come from, and affect, many parts of the pipeline.
But again, this is yet another method, and not what I think we have in Quantum Break; although the aims are the same, to increase performance, IQ and resource efficiency, the methods can and do differ. From this and the team's statement, I think it falls somewhere between the above. Using 4xMSAA/EQAA on a 1280x720 buffer will smooth out the edges of the geometry in play, delivering much cleaner lines and IQ than the pixel count would suggest. Then, by storing previous frames' information, similar to Killzone Shadow Fall, the pixel motion vectors and colour can be used to improve the image and its interpolated sections as it is upscaled to the final 1080p output. Do not confuse these methods with plain upscaling, which is, simply put, stretching a smaller image to fit inside a bigger one. With four frames needed to complete the final 1080 render, slow-moving sections and cut-scenes, along with the MSAA sampling, deliver a much cleaner image than the pixel count suggests, and given the reconstructed nature of the display, quality will move between the lower and upper bounds (the 720 buffer and the 1080 final render) as you play. Way back when I was discussing The Last of Us on PS4 I mentioned a GPU-driven pipeline, and this is very much in use across many engines now, reducing draw-call cost, expanding the density of worlds and freeing up vital CPU resources. Here the aim is to reduce shading cost on the GPU, as it runs across a smaller buffer, while the final image is improved using multiple data samples and reconstruction from previous frames, delivering the best combination of pixel quality over quantity that the team have chosen, and I am sure they know far better than me, or anyone, which path to take.
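The "moves between lower and upper bounds as you play" behaviour can be sketched as a motion-weighted blend. To be clear, this is my speculative construction of the idea described above, not Remedy's code, and the falloff function and weights are invented for illustration: an upscaled 720p sample is mixed with reprojected history, and the history weight collapses as per-pixel motion grows.

```python
# Speculative sketch: blend of upscaled current frame and reprojected
# history, weighted by per-pixel motion (all constants hypothetical).
def temporal_blend(upscaled, history, motion_magnitude, max_history=0.75):
    """Static pixels keep most of their accumulated history (approaching
    the ~1080p upper bound); fast-moving pixels fall back towards the
    freshly upscaled 720p sample (the lower bound)."""
    w = max_history * max(0.0, 1.0 - motion_magnitude)  # assumed linear falloff
    return (1.0 - w) * upscaled + w * history
```

So in a slow cut-scene the accumulated history dominates and the image approaches full 1080p quality, while during fast play each pixel leans on its current 720p sample, which matches the bound-to-bound behaviour described above.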
All these changes and solutions are ever-evolving, and exist to maximise the hardware in use and the resource allocation available to deliver the on-screen result; simply calling a game 720/900/1080/4K in and of itself is not worth as much as the final image. Even without hands-on time, QB clearly shows that the team have maximised the engine and the Xbox One well. All the physics destruction, time-shifting powers, pyrotechnics, volumetric lighting, screen-space effects and real-time global illumination are not only impressive but a much better use of the ALUs than simply pushing more pixels. It is always a compromise, no matter the team or hardware in question, and all games, however great or small, will always have them. Methods like these, continuing to improve and maximise resources, will only improve the games we play and enjoy, and the image quality they deliver.