[{"data":1,"prerenderedAt":506},["ShallowReactive",2],{"article-id-en-neural-materials-1":3},{"id":4,"title":5,"body":6,"description":404,"extension":489,"meta":490,"navigation":499,"path":500,"seo":501,"stem":504,"__hash__":505},"content/en/blog/neural-materials-1.mdx","Neural Materials 1",{"type":7,"value":8,"toc":478},"minimark",[9,393],[10,11,12,17,21,24,32,35,39,46,49,52,86,93,100,118,121,126,129,132,138,213,223,230,240,244,247,250,255,270,277,282,293,298,301,306,309,316,324,329,336,342,346,349,352,372,375,383,387,390],"section-md",{},[13,14,16],"h2",{"id":15},"when-physics-meets-neural-networks-how-we-taught-neural-networks-to-reproduce-physically-accurate-textures","When physics meets neural networks: how we taught neural networks to reproduce physically accurate textures",[18,19,20],"p",{},"Computer graphics lives on tradeoffs. You want photorealism, so you build\nsophisticated lighting models. But every frame has a deadline — 16ms for 60fps —\nand the math has to fit inside that budget. Materials are where this tension\nhits hardest.",[18,22,23],{},"Film solves this by ignoring the clock. A single frame can render for hours.\nArtists build enormous material graphs, wiring together dozens of textures and\nmath nodes to get the scuffs on an old couch or the reflections on a polished\ncar just right. These graphs model real physics: light passing through paint\nlayers, scattering in dust, bouncing off microscopic scratches. The output is\nindistinguishable from a photograph.",[18,25,26,27,31],{},"Games can't wait hours. Every frame needs to be done in 16 milliseconds or\nfaster. So developers cut corners: pre-baked lighting (",[28,29,30],"strong",{},"lightmaps","), simpler\nmaterial models, fewer layers. For a long time, the gap between offline and\nreal-time rendering looked unbridgeable.",[18,33,34],{},"We wanted to try anyway. The idea: take a complex material built for film, train\na neural network to replicate its behavior, and run that network fast enough for\nreal-time use. It worked better than we expected.",[13,36,38],{"id":37},"baking-materials-into-neural-networks","Baking materials into neural networks",[18,40,41],{},[42,43],"img",{"alt":44,"src":45},"Material baking concept","/img/blog/neural-materials-1/content-1.png",[18,47,48],{},"To understand what we did, it helps to know how high-quality materials are\nbuilt.",[18,50,51],{},"In VFX and (to a lesser extent) game engines, the standard approach is layered\nmaterials. Think of an antique copper tray. An artist doesn't just paint a\ncopper texture — they build a material that mimics the real physical structure:",[53,54,55,62,68,74,80],"ul",{},[56,57,58,61],"li",{},[28,59,60],{},"Base layer:"," the copper itself, with a specific color and sheen.",[56,63,64,67],{},[28,65,66],{},"Patina layer:"," greenish tarnish that builds up over time, thicker in\nrecesses and thinner on raised areas.",[56,69,70,73],{},[28,71,72],{},"Varnish layer:"," a clear coating that gives strong but slightly blurred\nreflections.",[56,75,76,79],{},[28,77,78],{},"Dust layer:"," fine particles sitting on top.",[56,81,82,85],{},[28,83,84],{},"Scratch layer:"," cuts through everything above.",[18,87,88,89,92],{},"Each layer has its own mathematical model (",[28,90,91],{},"BSDF — Bidirectional Scattering\nDistribution Function","), controlled by masks, height maps, and procedural\nnoise. 
This is a different kind of representation. Instead of simulating physical
processes ("how light passes through layers"), we get a behavioral model ("how
the surface looks under this lighting") that's hundreds of times smaller and an
order of magnitude faster to evaluate.

### How a neural material works, step by step

To be clear: this isn't one giant network for the whole scene. Each material
gets its own compact neural model.

Rendering one pixel with a neural material goes like this:

**Stage 1: Input data**
When a ray hits an object with a neural material, the shader gathers information
about the hit point and sends it to the network:

- **Texture coordinates (UV):** where on the surface the ray landed. Used to
  look up the feature map.
- **Directions:** the incoming light direction ($\omega_i$) and the view
  direction ($\omega_o$).
- **Additional attributes:** normal maps for geometry correction (though the
  network can learn these effects on its own).

**Stage 2: Feature map — the material's memory**
Feeding raw UV coordinates straight into a big network doesn't work well. The
network would need enormous capacity to memorize every variation across the
surface. Instead, we use a special texture called a **feature map**.

Unlike standard textures that store RGB colors or material parameters (roughness,
metalness), our feature map stores **latent vectors**: compact multi-dimensional
vectors (typically 8 values) that encode the material's properties at each
point. An encoder, trained together with the decoder, learns to compress the
full material complexity into these vectors.

**Stage 3: Neural decoder — turning features into color**
The latent vector for a given point is just a list of numbers on its own. The
**Neural Decoder** — a small fully connected network (a multi-layer perceptron) —
converts it into something useful. It takes the latent vector, combines it with
the light and camera directions, and outputs the physical values the renderer
needs. Getting this decoder's architecture right is what makes the whole system
work.
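Putting the three stages together, the per-pixel evaluation might look like the
following minimal sketch. The bilinear lookup, the 8-value latent size, and the
layer structure are illustrative assumptions; the real decoder runs inside a
GPU shader, not in Python:

```python
import numpy as np

def fetch_latent(feature_map, uv):
    """Bilinearly sample an (H, W, 8) feature map at uv in [0, 1]^2."""
    h, w, _ = feature_map.shape
    x, y = uv[0] * (w - 1), uv[1] * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * feature_map[y0, x0] + fx * feature_map[y0, x1]
    bot = (1 - fx) * feature_map[y1, x0] + fx * feature_map[y1, x1]
    return (1 - fy) * top + fy * bot

def decode(weights, latent, wi, wo):
    """Tiny MLP: latent vector plus directions in, RGB out.

    `weights` is a list of (W, b) pairs; the sizes are illustrative.
    """
    x = np.concatenate([latent, wi, wo])   # 8 + 3 + 3 = 14 inputs
    for W, b in weights[:-1]:
        x = np.maximum(W @ x + b, 0.0)     # ReLU hidden layers
    W, b = weights[-1]
    return W @ x + b                       # linear output: RGB response
```

The point of the split is visible here: the feature map carries the spatial
variation, so the decoder itself can stay small.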
## Cooperative Vectors: running neural networks inside shaders

Designing the network was half the problem. The harder part was running it
inside a GPU shader, hundreds of thousands of times per frame, within a
real-time budget.

This is where Cooperative Vectors come in.

**The problem**

Neural networks normally run on GPU tensor cores through libraries like **cuDNN**
and frameworks like **TensorFlow** or **PyTorch**. These are built for large batches
and high throughput. Shaders are different: they're small programs running
per-pixel or per-ray, and historically they've had no way to access tensor
cores.

The traditional **SIMT** model needs a full warp (32 threads executing the same
instruction) to use tensor cores efficiently. Ray tracing processes threads
independently, so assembling full warps is nearly impossible. On top of that,
tensor cores are tuned for matrix-matrix multiplication, but each ray thread
only needs vector-matrix multiplication, which wastes compute resources.

**The solution: Cooperative Vectors API**

Cooperative Vectors is a new API being developed by AMD with Microsoft,
integrated into DirectX. It's built for matrix-vector multiplication in graphics
workloads, where each thread needs to multiply a vector by a relatively small
matrix (like `128x128` or `64x64`). Threads work together on these operations,
which is where the "cooperative" name comes from.

**Memory layout**

Network weights and feature map data are packed into GPU memory buffers using
layouts optimized for the access patterns Cooperative Vectors expects.

**Divergence**

Ray tracing is inherently divergent: one ray hits metal, the next hits fabric.
Divergent execution hurts GPU performance. Cooperative Vectors handles
divergence within a warp, though with some overhead.

To minimize this, we use `Shader Execution Reordering (SER)`, which groups rays
by material type. This gives Cooperative Vectors what it needs (illustrated in
the sketch after this list):

1. The same network weights across all threads in the warp.
2. A full warp of active threads.
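The sketch below illustrates the idea behind reordering, not the SER hardware
feature itself: hit records are sorted by material ID so each contiguous batch
evaluates one network with one set of weights, the way a coherent warp would.
All names and shapes are hypothetical:

```python
import numpy as np

def shade_hits(hits, material_ids, decoders):
    """Group ray hits by material before shading.

    `hits` is an (N, 14) array of decoder inputs, `material_ids` an (N,) int
    array, and `decoders` maps a material id to a batched decoder that takes
    (K, 14) inputs and returns (K, 3) RGB values.
    """
    order = np.argsort(material_ids, kind="stable")  # reorder rays by material
    sorted_ids = material_ids[order]
    out = np.empty((len(hits), 3))
    start = 0
    while start < len(order):
        m = sorted_ids[start]
        # End of the run of hits that share material m.
        end = start + np.searchsorted(sorted_ids[start:], m, side="right")
        idx = order[start:end]                 # one coherent batch
        out[idx] = decoders[m](hits[idx])      # uniform weights per batch
        start = end
    return out
```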
**Performance**

Our neural shaders run more than **10x faster** than equivalent non-neural
layered materials implemented as traditional shader code. Fast enough to use
cinematic-quality materials in games, which wasn't practical before.

![Optimization results](/img/blog/neural-materials-1/content-2.png)

## Scaling to real scenes

A real scene has dozens or hundreds of unique materials. Loading a separate
neural network for each one would blow out memory.

We use three strategies to keep things manageable (sketched after this list):

1. **Same architecture, different weights.** All material networks share one
   architecture. Only the trained weights differ, which makes storage and
   switching efficient.
2. **Shared direction interpretation.** The part of the network that interprets
   light and view directions can be shared across materials. Only the feature
   maps are unique.
3. **Scene-specific compilation.** At runtime, we include only the materials
   actually used in the current scene or level and compile a shader for exactly
   that set.
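A rough sketch of how the three strategies might fit together as data
structures follows; the class and method names are hypothetical, not our
engine's API:

```python
class NeuralMaterialSet:
    """One architecture, per-material weights, a shared direction encoder,
    and only the materials a scene actually uses. Illustrative only."""

    def __init__(self, shared_direction_encoder):
        self.shared_direction_encoder = shared_direction_encoder  # strategy 2
        # name -> (weights, feature_map); every entry uses the same
        # architecture, so only the numbers differ (strategy 1).
        self.materials = {}

    def register(self, name, weights, feature_map):
        self.materials[name] = (weights, feature_map)

    def build_for_scene(self, used_material_names):
        """Strategy 3: keep just the materials the current scene references."""
        subset = NeuralMaterialSet(self.shared_direction_encoder)
        subset.materials = {n: self.materials[n] for n in used_material_names}
        return subset
```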
The system works in practice, not just in benchmarks. You could build a game
where a rusty nail and a velvet curtain both have physically correct surfaces
that respond to lighting properly, and still hit a high frame rate.

## What's next

Neural materials connect offline and real-time rendering. Artists can build
complex materials and ship them as neural models that run at interactive speeds.
Engineers get better visual quality without tanking frame rates.

There's more to do: smaller models, materials that change over time (fading in
sunlight, getting wet in rain). But the core idea works. Neural representations
can produce physically accurate materials in real time, and they're practical
enough to ship.

## Frequently asked questions

**How is a neural material different from a regular texture?**

A regular texture stores fixed surface parameters — color, roughness, metalness.
A neural material stores a compact trained representation that accounts for
complex layered structure and correctly responds to changes in light direction
and camera angle. It's not a static image but a function that computes the
correct surface response for any viewpoint and lighting.

**How does the file size compare to a standard material?**

A neural representation typically takes anywhere from tens of kilobytes to a few
megabytes per material — that's the network weights and the feature map. For
comparison, a set of 4K textures for a complex layered material can weigh tens
of megabytes. In most cases, the neural version is smaller than the original
texture set.

**What hardware does this run on?**

Rendering requires a GPU that supports Cooperative Vectors. The technology is
currently being integrated into DirectX with AMD and Microsoft's involvement.
Performance is an order of magnitude higher than traditional layered materials
in shader code, but you need current drivers and compatible hardware.

**Can artists keep using their usual tools?**

Yes. Artists work in their standard tool — Autodesk Maya with MaterialX, Houdini,
or similar. The baking process runs separately: the system generates training
data, trains the neural network, and outputs the compact representation. The
artist's workflow doesn't change.

**Does this support animated materials — like surfaces getting wet or fading?**

Not yet. The current implementation handles static materials. Supporting
dynamically changing properties (getting wet in rain, fading in sunlight,
wearing down) is one of the directions for future work.

**How visually accurate is the result compared to the original?**

The neural network trains on millions of samples generated from the reference
material graph. With enough training data, the visual difference between
rendering through the original graph and through the neural representation is
minimal — in most scenarios, indistinguishable to the eye.