Ultra-realistic characters are a common target for current rendering engines, given the importance of storytelling in modern games. However, efforts toward photorealistic character rendering are futile unless they include subsurface scattering (SSS), realistic eye shading, physically based rendering, depth of field, film grain, plausible bloom, film-like tone mapping, highly detailed assets, soft shadows, and an accurate anti-aliasing solution. Failing at any of these breaks the illusion of looking at a real human. This presentation will cover our techniques for SSS, eye shading, anti-aliasing, depth of field, and film grain, and will present the integration issues we are solving for our game studios. The talk will also present our computer vision techniques for capturing character assets from video, and for fitting the dense captured and tracked data to game rigs via energy minimization. Such asset capture raises the level of character quality, which in turn demands more advanced rendering techniques to avoid falling into the uncanny valley. The talk will show high-quality next-generation shading driven by a plausible game rig. Ultimately, our goal is to overcome the uncanny valley with next-gen run-time techniques and good artistic direction. This is, for us, the first step toward achieving truly believable characters.
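
The rig-fitting step mentioned above can be illustrated in miniature. The following is a hypothetical sketch, not the authors' actual pipeline: it assumes the game rig is a linear blendshape model, so fitting dense tracked vertex positions reduces to a linear least-squares energy minimization over the blendshape weights. All data here (neutral mesh, blendshape basis, tracked target) is synthetic.

```python
import numpy as np

# Hypothetical example: fit blendshape weights w of a linear rig to dense
# tracked vertex positions p by minimizing the energy ||B w - (p - n)||^2,
# where n is the neutral mesh and the columns of B are blendshape deltas.
rng = np.random.default_rng(0)
n_verts, n_shapes = 300, 5

neutral = rng.standard_normal(3 * n_verts)        # neutral mesh, flattened xyz
B = rng.standard_normal((3 * n_verts, n_shapes))  # blendshape deltas as columns

# Synthetic "tracked" data generated from known ground-truth weights.
true_w = np.array([0.8, 0.1, 0.0, 0.5, 0.3])
tracked = neutral + B @ true_w

# Solve argmin_w ||B w - (tracked - neutral)||^2 via linear least squares.
w, *_ = np.linalg.lstsq(B, tracked - neutral, rcond=None)
```

A production solver would add regularization, weight bounds, and robustness to tracking noise; with noise-free synthetic data, plain least squares recovers the ground-truth weights.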