TressFX – For Salon Beautiful Hair in Your Virtual Home

Posted on February 27, 2013 10:30 AM by Jamie Fletcher

Have you ever wanted silky-smooth, volumized, independently-rendered hair? Now you can have it, with Per-Pixel Linked-List Order Independent Transparency DirectCompute Real-time Physics Simulated TressFX Hair, all without paying salon prices.

AMD and Crystal Dynamics have formulated a special blend of Silica and C-derived syntax from the Direct family to create a nourishing and invigorating extension that can bring life back to poorly-rendered and static hair. With careful application, hair can be instantly volumized.


Single strand, textural and polygonal enhancements allow for contextual dynamic animation over a broad range of environmental conditions.

AMD TressFX Ad

Fortified with collision detection algorithms to prevent tangling and knotting, divas can now flick their hair with ferocity and confidence. Rain, sleet, snow, it will all enhance and provide a pure glossy sheen; even a salon professional would be proud.

Liberal use of Graphics Core Next architecture, available from select Radeon HD 7000 series GPUs, enables life-like rendering. Virtual conditioners and shared memory management can process and nourish hair from root to tip. To enhance your Tomb Raider experience, order TressFX now. Available while supplies last.

For more information about TressFX, please check out AMD’s detailed explanation of the science behind luxurious hair.

  • Rob Williams

    I have LOL’d at this too many times.

  • Marfig

    “Per-Pixel Linked-List Order Independent Transparency DirectCompute Real-time Physics Simulated TressFX Hair”

    Now say it really fast.

    Anyways, does it beat Tessellated hair? Does it not?

    • Jamie Fletcher

      Yes, because tessellated hair takes existing geometry and extrapolates detail out of it through subdivision. The TressFX technique creates brand-new, individual strands of hair, as well as bunching, and applies physically simulated properties to them, such as weight and collision detection. Similar techniques have been used in the past in professional 3D modelling, where you ‘paint’ key-hairs onto a model that define properties such as hair density, strand count, length, etc. Build up enough key-hairs over a target model and you’re left with a hedgehog pre-render, and beautifully rendered, realistic hair post-render.
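      The key-hair idea can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names and the simple linear-blend scheme are my assumptions, not TressFX's actual implementation): given two painted guide strands, extra render strands are generated by interpolating vertex positions between them.

```python
# Hypothetical sketch of guide-hair (key-hair) interpolation.
# The linear-blend scheme and names are illustrative assumptions,
# not TressFX's actual code.

def lerp(a, b, t):
    """Linearly interpolate between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def interpolate_strand(guide_a, guide_b, t):
    """Blend two guide strands (equal-length vertex lists) at weight t."""
    assert len(guide_a) == len(guide_b)
    return [lerp(pa, pb, t) for pa, pb in zip(guide_a, guide_b)]

def expand_strands(guide_a, guide_b, count):
    """Generate `count` render strands evenly spaced between the guides."""
    return [interpolate_strand(guide_a, guide_b, (i + 1) / (count + 1))
            for i in range(count)]
```

      In this toy version, hair density is just `count`; a real system would also vary length, curl and clumping per key-hair.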

      The problem is computational complexity, so various methods have to be used to simplify the geometry and make it suitable for real-time rendering, especially if you want such wonders as Ambient Occlusion, self-shadowing and transparency-based Anti-Aliasing. It’s predominantly a software-side issue of scaling: making things look realistic without burdening the GPU with excessive geometry to no real artistic effect.
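      The "Per-Pixel Linked-List Order Independent Transparency" part of the name refers to storing every translucent fragment that lands on a pixel in a per-pixel list, then sorting and blending them in a resolve pass. Here is a hedged CPU-side sketch of that resolve step in plain Python; on the GPU this runs per pixel in a shader, and the fragment layout here is just an assumption for illustration:

```python
# Sketch of the per-pixel linked-list OIT resolve pass (CPU-side toy version).
# fragments: list of (depth, (r, g, b), alpha) tuples collected for one pixel,
# in arbitrary submission order, as the linked list would store them.

def resolve_pixel(fragments, background):
    """Sort far-to-near, then blend back-to-front with the 'over' operator."""
    color = background
    for depth, rgb, alpha in sorted(fragments, key=lambda f: f[0], reverse=True):
        # 'over' blend: new fragment weighted by alpha over accumulated color.
        color = tuple(alpha * rgb[i] + (1.0 - alpha) * color[i] for i in range(3))
    return color
```

      Because the fragments are sorted by depth before blending, the result is the same no matter what order the hair strands were drawn in, which is exactly what makes the transparency "order independent".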
