10/12/11
Confetti's Advanced Graphics Middleware Packages
Arti Gupta talks to Wolfgang Engel, CEO of Confetti, about the company’s advanced graphics middleware packages.
Arti Gupta: Can you tell us a little about Confetti?
Wolfgang Engel: Confetti started a little more than two years ago, when Peter Santoki, COO at Confetti, and I left Rockstar Games, where we worked on Red Dead Redemption, GTA, and games like that. We wanted to build a consulting powerhouse that helps game developers and movie creators use real-time rendering techniques.
A.G.: Confetti is one of the earliest providers of advanced graphics middleware tools. Can you tell us a little bit more about the gap that you were trying to fill?
W.E.: When we started out, we wanted to create high-end graphics. After a while, we realized that the market is actually going in a different direction, so the big discrete GPUs are not that attractive to game developers anymore. We have already gotten into a situation where integrated graphics chips have more than 50 percent of the market share, and we expect this to increase substantially over the next two years.
Programming for these integrated GPUs means that in two years we will probably hit 80 to 90 percent of the target market, which is substantial. We can actually make games that reach a much higher number of people.
A.G.: What are some of the challenges you’re facing while building this engine?
W.E.: One of the biggest challenges that we are currently working on is balancing the workload between the CPU and the GPU. Traditionally, we do most of our work on the GPU, but now it makes sense to move some of the workload from the GPU back onto the CPU. So we started out with CPU MLAA. Morphological anti-aliasing (MLAA) is very popular at the moment in different scenarios, so we took an MLAA approach that was GPU-only and ported it so that it runs fully on the CPU.
A.G.: Can you tell us a little more about what CPU MLAA is?
W.E.: In general, MLAA has three steps. The first is edge detection. In our case, we detect edges based on luminance. We measure the luminance of each pixel and use that to detect edges in a full-screen image in screen space. You can also do it with the help of the depth buffer and the normal buffer, but we figured out that luminance is a very good indicator for edge detection.
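The luminance-based edge detection described above can be sketched in a few lines of NumPy. This is a minimal illustration, not Confetti's implementation; the luma weights are the standard Rec. 601 coefficients, and the threshold value is an illustrative guess, not their actual tuning.

```python
import numpy as np

# Rec. 601 luma weights; the 0.1 threshold is illustrative, not Confetti's tuning.
LUMA_WEIGHTS = np.array([0.299, 0.587, 0.114])
THRESHOLD = 0.1

def detect_edges(rgb):
    """Flag pixels whose luminance differs noticeably from the pixel
    to the left or above -- screen-space edge detection on a full-screen image."""
    luma = rgb @ LUMA_WEIGHTS                 # H x W luminance image
    left_edge = np.zeros(luma.shape, dtype=bool)
    top_edge = np.zeros(luma.shape, dtype=bool)
    # Compare each pixel against its left and top neighbors.
    left_edge[:, 1:] = np.abs(luma[:, 1:] - luma[:, :-1]) > THRESHOLD
    top_edge[1:, :] = np.abs(luma[1:, :] - luma[:-1, :]) > THRESHOLD
    return left_edge, top_edge

# A tiny 2x2 test image: black column on the left, white column on the right.
img = np.zeros((2, 2, 3))
img[:, 1, :] = 1.0
left, top = detect_edges(img)
# The right-hand column is flagged as a vertical edge; no horizontal edges exist.
```

A production version would of course run over the whole framebuffer and, as Engel notes, could also consult the depth and normal buffers to reject false edges.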
The second step of MLAA is so-called edge qualification. We look at each of the edges that we have detected and qualify them. We give each edge a certain color, and based on this qualification we create a blend weight texture, which is used to apply horizontal and vertical blending filters in the third step.
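The third step -- applying the blend weight texture -- can be sketched as follows. Real MLAA derives its weights from the shapes of the detected edge patterns (L-, U-, and Z-shapes) and blends with coverage areas; the per-pixel weighted average below is a much simplified stand-in to show how a blend weight texture drives the final pass.

```python
import numpy as np

def blend_neighbors(rgb, w_left, w_top):
    """Blend each pixel with its left/top neighbors according to per-pixel
    blend weights. A toy stand-in for MLAA's area-based blending pass."""
    out = rgb.copy()
    # Horizontal blend: mix each pixel with its left neighbor.
    out[:, 1:] = (1 - w_left[:, 1:, None]) * out[:, 1:] \
               + w_left[:, 1:, None] * rgb[:, :-1]
    # Vertical blend: mix each pixel with its top neighbor.
    out[1:, :] = (1 - w_top[1:, :, None]) * out[1:, :] \
               + w_top[1:, :, None] * rgb[:-1, :]
    return out

# Blend a black pixel 50/50 with its white left neighbor.
img = np.array([[[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]])   # 1 x 2 RGB image
w_left = np.array([[0.0, 0.5]])
w_top = np.zeros((1, 2))
result = blend_neighbors(img, w_left, w_top)
# The right pixel becomes mid-gray; the left pixel is untouched.
```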
Now all of this runs on the CPU, in our case. Our GPU MLAA costs about two to three milliseconds on lower-end GPUs. If we offload the whole process onto the CPU, the GPU cost drops to about one millisecond. And that is already substantial -- saving two milliseconds per frame is a lot.
We actually have a couple of ways to adjust quality as well. We can reuse the pipeline we built for CPU MLAA for other image-space approaches, such as global illumination. So our next step, which we are currently looking into, is to move more effects into the same pipeline -- post effects and screen-space global illumination effects -- so that we utilize this pipeline even more, especially with the upcoming six-core CPUs. We will have a lot of power there.
A.G.: Can you tell us about your other middleware packages as well?
W.E.: When we left Rockstar, we were actually looking into which parts of game engines we could easily separate out that are highly reusable for game developers. We identified the Dynamic Skydome System, the Post FX pipeline, and the Global Illumination approach.
The Dynamic Skydome System is for when you have an open-world game with a constantly changing time of day. You want your whole world to show that the shadows move, the lights move, that the horizon changes its colors, that the clouds change their colors and their lighting. So we built a package that has rudimentary clouds, atmospheric scattering, terrain soft shadowing, and obviously a 24-hour day-and-night cycle. For the night, the stars are aligned with the actual positions of the stars as you would expect them at a certain time of day and of the year. Game developers can use this package and integrate it into their engine.
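The core of any 24-hour cycle like the one described above is mapping the clock time to a sun direction that the lighting, shadows, and sky colors all read from. The sketch below is a deliberately simplified model (sunrise fixed at 6:00, no latitude or seasonal tilt), not Confetti's actual sky math.

```python
import math

def sun_direction(hour):
    """Toy sun direction for a 24-hour day-and-night cycle (y-up).
    Sunrise at 6:00, zenith at 12:00, sunset at 18:00. Latitude and
    seasonal variation are ignored -- a simplification for illustration."""
    angle = (hour - 6.0) / 24.0 * 2.0 * math.pi   # 0 at sunrise
    elevation = math.sin(angle)                   # -1 (nadir) .. 1 (zenith)
    return (math.cos(angle), elevation, 0.0)

noon = sun_direction(12.0)       # sun at the zenith
midnight = sun_direction(0.0)    # sun below the horizon
```

A shipping skydome would feed this direction into atmospheric scattering for horizon colors and into the shadow-map setup, and would use a proper astronomical model (date, latitude) for the star field Engel mentions.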
The second package we have is the Post FX pipeline. We figured that Post FX is something that is easy for game developers to integrate, because you just mount it at the end of your rendering pipeline and then do everything in image space. The Post FX pipeline actually has two purposes. The first one is that it takes care of color quality in general by offering tone mapping and HDR rendering. The second one mimics all of the camera effects that you would expect from a game nowadays. So it's actually a Post FX-based camera system. We like to joke in the company about Michael Bay movies, because we want to be able to mimic the camera settings in Michael Bay movies.
The third package is pretty new. We're actually thinking that this is something the industry will find useful in the near future. We have a global illumination system, and it actually consists of two parts. The first part is a screen-space global illumination system, which has the task of doing all of the rough work. The second part is volume-based -- the technical term is "light propagation volume" -- and it offers much more detail. The screen-space approach goes into the distance, and the volume approach handles the close-ups. Developers can find all three of these packages at ConfettiSpecialFX.com.
Copyright (c) 2011 Studio One Networks. All rights reserved.