
2.6 - Normal Vector Transformation


What We're Learning

So far, we have transformed vertex positions from local space to world space and passed normals to the fragment shader for lighting. It seems straightforward: if you want to know which way a normal is facing in the world, you just transform it with the model matrix, right?

This is one of the most common and critical mistakes in shader programming.

Normals are not positions. They represent orientation, and when an object is scaled non-uniformly (stretched or squashed), its surface orientation changes in a way that the model matrix simply can't account for. Applying the same transformation to both positions and normals will lead to bizarre and incorrect lighting, from highlights that appear in the wrong place to surfaces that look strangely dark or flat.

This article tackles that fundamental problem head-on. Getting this right is absolutely essential for any kind of correct lighting, and it forms the bedrock for more advanced techniques like normal mapping, which we will cover later.

By the end of this article, you will understand:

  • Why you can't transform normals the same way you transform positions.

  • The theory behind the Normal Matrix and why the transpose(inverse(model)) formula is the correct solution.

  • How to transform normals correctly and efficiently in Bevy using its built-in functions.

  • The critical importance of renormalizing vectors after transformation and interpolation.

  • How to build a complete TBN (Tangent, Bitangent, Normal) matrix, a prerequisite for normal mapping.

  • How to build a powerful debugging visualizer to see your vectors in real-time.

The Problem: Why Position Transformation Doesn't Work

To understand the solution, we first need to build a rock-solid intuition for the problem. Why can't we just multiply a normal by the model matrix and call it a day? The answer lies in the fundamental difference between a position and a direction.

Normals Represent Direction, Not Position

A position vector in a shader tells us where a vertex is in space. A normal vector tells us which way the surface is facing at that vertex. Normals are pure direction: translation does not affect them at all, and scaling affects them in a very particular way that differs from how it affects positions.

Let's use an analogy. Imagine a car on a map.

  • Its position is a specific coordinate, like (latitude, longitude). If you move the car 10 miles east (a translation), its position changes.

  • Its direction is which way it's facing, like "North." If you move the car 10 miles east, its direction remains "North."

The model matrix contains translation, rotation, and scale information. While rotation affects both positions and directions, translation only affects positions. The real problem, however, comes from scaling. Scaling a position makes sense - you're moving it further from the origin. But what does it mean to "scale a direction"? This is the core of the issue.

The Non-Uniform Scaling Problem

Things break down completely when an object is scaled non-uniformly - that is, stretched or squashed by different amounts along different axes.

Let's consider a simple 2D plane tilted at a 45-degree angle (the blue line in the diagram below). Its surface is perfectly straight. At every point on this surface, the normal vector is perpendicular to it, pointing up and to the left at (-0.707, 0.707).

Now, let's apply a non-uniform scale, making the object twice as wide (X-axis) and half as tall (Y-axis). Intuitively, this should squash the plane, making its slope much flatter. A flatter slope means the new normal vector should point more vertically upwards.

Let's see what happens if we incorrectly transform the original normal (-0.707, 0.707) using the scale part of the model matrix:

  • New X component: -0.707 * 2.0 = -1.414

  • New Y component: 0.707 * 0.5 = 0.354

The resulting incorrect vector is (-1.414, 0.354). This vector points much more to the left than it does up. It implies the surface has become almost vertical, which is the exact opposite of what actually happened! If used for lighting, this would make a nearly flat surface look as if it were a steep cliff face, completely breaking the visual result.

The surface orientation has changed, but the model matrix fails to describe that change correctly for the normal vector.
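It's worth verifying these numbers outside a shader. Here is a standalone Rust sketch (plain Rust, no engine code; the setup mirrors the 2D example above) showing that the naively transformed normal is no longer perpendicular to the squashed surface, while the inverse-transpose normal, which the next section derives, still is:

```rust
// Standalone check of the 2D example above (illustrative, not from the
// article's project): scale a 45-degree surface by (2.0, 0.5) and compare
// the naive model-matrix normal against the inverse-transpose normal.
fn main() {
    let n = (-0.707_f64, 0.707_f64);   // original unit normal
    let t = (0.707_f64, 0.707_f64);    // original surface tangent
    let (sx, sy) = (2.0_f64, 0.5_f64); // non-uniform scale (diagonal matrix)

    // Tangents transform with the model matrix.
    let t_new = (t.0 * sx, t.1 * sy);

    // WRONG: transforming the normal with the model matrix too.
    let n_wrong = (n.0 * sx, n.1 * sy);

    // CORRECT: transpose(inverse(M)); for a diagonal matrix this is
    // simply diag(1/sx, 1/sy).
    let n_right = (n.0 / sx, n.1 / sy);

    let dot_wrong = n_wrong.0 * t_new.0 + n_wrong.1 * t_new.1;
    let dot_right = n_right.0 * t_new.0 + n_right.1 * t_new.1;

    println!("naive normal:   dot with surface = {dot_wrong:.3}"); // far from 0
    println!("correct normal: dot with surface = {dot_right:.3}"); // 0: still perpendicular
    assert!(dot_wrong.abs() > 1.0);
    assert!(dot_right.abs() < 1e-9);
}
```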

Mathematical Proof

We can prove this failure mathematically. By definition, a normal vector N is perpendicular to any tangent vector T (a vector lying flat on the surface) at the same point.

In vector math, "perpendicular" means their dot product is zero:

N · T = 0

After we transform our object with a model matrix M, the tangent T (a direction along the surface) transforms correctly into T' = M * T.

Let's assume for a moment that the normal transforms the same way, so N' = M * N.

For our lighting to be correct, the new normal N' must still be perpendicular to the new tangent T'. Let's test their dot product:

N' · T' = (M * N) · (M * T)

Using a rule from linear algebra, this dot product is equivalent to:

N' · T' = Nᵀ * Mᵀ * M * T

(where Nᵀ is the transpose of N, and Mᵀ is the transpose of M)

For this expression to remain zero, the Mᵀ * M part in the middle must leave the dot product unchanged. The identity matrix I does exactly that, and so does any scalar multiple of I, since a scalar factor cannot turn a zero dot product into a nonzero one.

A matrix M for which Mᵀ * M = I is called an orthogonal matrix; pure rotations are orthogonal. Uniform scaling by a factor s is not orthogonal (Mᵀ * M = s²I), but because s²I is just a scaled identity, perpendicularity still survives. The property fails for non-uniform scaling and shearing, where Mᵀ * M genuinely distorts angles.

This proves that using the model matrix M to transform normals is only mathematically valid for transformations that don't warp the object. For the general case, we need a different solution.

The Solution: The Normal Matrix

If the model matrix M is incorrect for transforming normals, what is the right tool for the job? The answer is a special matrix derived from the model matrix, known simply as the Normal Matrix. It is specifically constructed to solve the non-uniform scaling problem and ensure normals remain perpendicular to their surface after transformation.

The Normal Matrix Formula

The formula for the normal matrix is the transpose of the inverse of the model matrix.

Normal Matrix = transpose(inverse(Model Matrix))

More specifically, since normals are direction vectors and are not affected by translation, we only care about the rotation and scale part of the model matrix. Therefore, we use the upper-left 3x3 portion of the mat4x4.

Normal Matrix = transpose(inverse(model_3x3))

In shader code, the full manual operation to transform a normal would look like this (WGSL has no built-in matrix inverse(), so treat this line as pseudocode):

let world_normal = normalize(transpose(inverse(model_3x3)) * local_normal);

The Intuition: Why Does This Work?

The formula might seem plucked from thin air, but there's a strong intuition behind it.

Think of it this way: the model matrix transformation squashed our surface, making it flatter. To keep the normal perpendicular to this new flatter surface, it had to become more vertical. The transformation that was applied to the surface and the one that needs to be applied to the normal are opposites.

  • If the model matrix stretches space along an axis, the normal matrix must squash the normals along that same axis to compensate.

  • If the model matrix squashes space, the normal matrix must stretch the normals.

The inverse() operation is what provides this "opposite" transformation. It mathematically "undoes" the original scale and rotation. The transpose() operation then re-orients that inverted transformation so it applies correctly to normals rather than to positions. In short, the inverse-transpose is the operation that preserves perpendicularity under a non-uniform linear transformation, unique up to a scale factor that the final normalize() removes.

The Mathematical Proof

We can revisit our proof from before to show why this works. We need to find a special matrix, let's call it N_mat, such that a transformed normal N' = N_mat * N remains perpendicular to a transformed tangent T' = M * T.

Their dot product must be zero:

(N_mat * N) · (M * T) = 0

Rewriting this using matrix notation:

Nᵀ * N_matᵀ * M * T = 0

For this equation to hold for any perpendicular pair N and T, the middle part, N_matᵀ * M, must equal the Identity matrix I. (Strictly speaking, any scalar multiple of I would also preserve perpendicularity; choosing I itself is the simplest solution.)

N_matᵀ * M = I

To solve for our unknown normal matrix N_mat, we can multiply both sides by the inverse of M (M⁻¹):

N_matᵀ * M * M⁻¹ = I * M⁻¹

N_matᵀ = M⁻¹

Finally, to get N_mat, we just take the transpose of both sides:

N_mat = (M⁻¹)ᵀ

This proves that the correct transformation matrix for normals is transpose(inverse(M)). It's precisely the matrix required to ensure that normals remain perpendicular to their transformed tangents.
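To make the result concrete, here is a small dependency-free Rust sketch (an illustration, not from the article's codebase). It builds (M⁻¹)ᵀ for a shear matrix using the cofactor identity (the cofactor matrix divided by the determinant equals the inverse-transpose, and its columns are cross products of M's columns), then confirms the transformed normal stays perpendicular to the transformed tangent while the naive M * N does not:

```rust
type V3 = [f64; 3];

fn cross(a: V3, b: V3) -> V3 {
    [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
}
fn dot(a: V3, b: V3) -> f64 { a[0]*b[0] + a[1]*b[1] + a[2]*b[2] }

// Apply a matrix stored as three COLUMNS to a vector.
fn mul(cols: [V3; 3], v: V3) -> V3 {
    let mut r = [0.0; 3];
    for i in 0..3 {
        r[i] = cols[0][i]*v[0] + cols[1][i]*v[1] + cols[2][i]*v[2];
    }
    r
}

// transpose(inverse(M)) = cofactor(M) / det(M); the cofactor columns
// are the pairwise cross products of M's columns.
fn normal_matrix(m: [V3; 3]) -> [V3; 3] {
    let cof = [cross(m[1], m[2]), cross(m[2], m[0]), cross(m[0], m[1])];
    let det = dot(m[0], cof[0]);
    [
        [cof[0][0]/det, cof[0][1]/det, cof[0][2]/det],
        [cof[1][0]/det, cof[1][1]/det, cof[1][2]/det],
        [cof[2][0]/det, cof[2][1]/det, cof[2][2]/det],
    ]
}

fn main() {
    // A shear along X: x' = x + 0.5y (given as columns of M).
    let m = [[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 1.0]];
    let n = [0.0, 1.0, 0.0]; // normal of the ground plane
    let t = [1.0, 0.0, 0.0]; // tangent of the ground plane

    let t_new = mul(m, t);
    let n_wrong = mul(m, n);                // naive: use M directly
    let n_right = mul(normal_matrix(m), n); // correct: use (M⁻¹)ᵀ

    println!("naive  dot = {}", dot(n_wrong, t_new)); // nonzero: not perpendicular
    println!("proper dot = {}", dot(n_right, t_new)); // zero: still perpendicular
    assert!(dot(n_wrong, t_new).abs() > 0.4);
    assert!(dot(n_right, t_new).abs() < 1e-12);
}
```

Note the intuition it confirms: shearing the ground plane along X leaves the plane itself unchanged, so its normal should stay pointing straight up; the naive transform tilts it anyway.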

A Practical Guide: When to Use It

So, when do you actually need to worry about this?

You must use the normal matrix when your object's transform involves:

  • Non-uniform scaling: The most common case. E.g., Transform::from_scale(Vec3::new(2.0, 1.0, 1.0)).

  • Shearing: A less common transformation that skews an object.

You can skip the expensive inverse-transpose calculation and just use the model matrix (or its 3x3 part) for normals if the transform only involves:

  • Translation: Normals ignore translation anyway.

  • Rotation: Rotations are orthogonal transformations that preserve perpendicularity.

  • Uniform scaling: All axes are scaled by the exact same amount. This is not an orthogonal transformation (Mᵀ * M = s²I, not I), but a uniformly scaled normal still points in the correct direction, and the length change is undone by the normalize() call you should be making anyway.
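A quick numeric sanity check of that last bullet (plain Rust, purely illustrative): under a uniform scale M = sI, both M and its inverse-transpose (1/s)I map a normal to a scalar multiple of itself, so after normalization the two results are indistinguishable:

```rust
fn normalize(v: [f64; 3]) -> [f64; 3] {
    let len = (v[0]*v[0] + v[1]*v[1] + v[2]*v[2]).sqrt();
    [v[0]/len, v[1]/len, v[2]/len]
}

fn main() {
    let n = [1.0, 2.0, 2.0]; // an arbitrary (unnormalized) normal, length 3
    let s = 5.0;             // uniform scale factor

    // Model matrix (s*I) vs. normal matrix ((1/s)*I): both are scalar
    // multiples of the input, so normalization erases the difference.
    let via_model = normalize([n[0]*s, n[1]*s, n[2]*s]);
    let via_normal_matrix = normalize([n[0]/s, n[1]/s, n[2]/s]);

    for i in 0..3 {
        assert!((via_model[i] - via_normal_matrix[i]).abs() < 1e-12);
    }
    println!("both give {:?}", via_model); // approximately [0.333, 0.667, 0.667]
}
```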

The Golden Rule for Beginners: When in doubt, always use the normal matrix. The cost of getting it wrong (broken lighting) is far greater than the computational cost of getting it right. Modern engines like Bevy are designed to handle this for you correctly and efficiently behind the scenes, but it's crucial to understand what is happening so you can debug it when things go wrong.

Implementing Normal Transformation in Bevy

Now that we understand the theory behind the normal matrix, let's look at how to apply it within a Bevy shader. Bevy, being a well-designed engine, provides a highly optimized, built-in solution. However, understanding how to do it manually is crucial for writing fully custom render pipelines or for moments when you can't use the standard helpers.

The Idiomatic Bevy Way: Using PBR Imports

For almost all use cases, the best approach is to use the helper functions provided in Bevy's bevy_pbr shader module. This is the easiest, safest, and most performant method.

#import bevy_pbr::mesh_functions

// ... inside your @vertex function ...

// Get the model matrix (same as before)
let model = mesh_functions::get_world_from_local(in.instance_index);

// Transform position to world space (same as before)
let world_position = (model * vec4<f32>(in.position, 1.0)).xyz;

// CORRECTLY transform the normal to world space
// This is the magic function that uses the pre-calculated normal matrix!
let world_normal = mesh_functions::mesh_normal_local_to_world(
    in.normal,
    in.instance_index
);

// We'll discuss why this normalize() call is critical in the next section
out.world_normal = normalize(world_normal);

What's Happening Under the Hood?

You don't see a transpose(inverse(model)) call here because Bevy does the heavy lifting for you on the CPU. Here is the workflow:

  1. On the CPU (in Rust): Whenever an entity's Transform changes, Bevy's renderer automatically calculates its Mat4 model matrix.

  2. Pre-computation: At the same time, it also calculates the corresponding Mat3 normal matrix (transpose(inverse(model))).

  3. GPU Data: Both matrices are sent to the GPU and stored in a buffer accessible to your shader.

  4. In the Shader (in WGSL): The mesh_normal_local_to_world() function simply looks up this pre-computed normal matrix and multiplies it by your vertex normal.

This is incredibly efficient. The expensive inverse and transpose operations are done only once per object when its transform changes, not for every single vertex, every single frame.

The Manual Calculation (For Education & Special Cases)

To truly understand what Bevy is doing for you, it's helpful to see what a manual calculation would look like. One catch: unlike GLSL, WGSL has no built-in inverse() function for matrices, so we have to construct the inverse-transpose ourselves; for a 3x3 matrix, the cofactor (adjugate) trick shown below does the job. You should almost never do this in production code, but it serves as an excellent learning exercise that powerfully demonstrates why Bevy's approach is superior.

// A function to manually compute the normal matrix from a model matrix.
// WARNING: This is computationally expensive and for educational purposes only.
// WGSL has no built-in inverse(), so we build transpose(inverse(M)) directly:
// for a 3x3 matrix, the cofactor matrix divided by the determinant IS the
// inverse-transpose, and its columns are simple cross products.
fn calculate_normal_matrix(model: mat4x4<f32>) -> mat3x3<f32> {
    // First, extract the columns of the upper-left 3x3 portion
    // (rotation and scale); model[i] is the i-th column.
    let c0 = model[0].xyz;
    let c1 = model[1].xyz;
    let c2 = model[2].xyz;

    // The columns of the cofactor matrix are pairwise cross products.
    let cof0 = cross(c1, c2);
    let cof1 = cross(c2, c0);
    let cof2 = cross(c0, c1);

    // det(M) = dot(c0, cross(c1, c2)). Dividing the cofactor matrix by it
    // yields transpose(inverse(M)). (If you normalize the result anyway,
    // the division only matters for its sign under mirroring transforms.)
    let inv_det = 1.0 / dot(c0, cof0);

    return mat3x3<f32>(cof0 * inv_det, cof1 * inv_det, cof2 * inv_det);
}

// ... inside your @vertex function ...

let model = mesh_functions::get_world_from_local(in.instance_index);
let normal_matrix = calculate_normal_matrix(model);

let world_normal = normal_matrix * in.normal;
out.world_normal = normalize(world_normal); // Still need to normalize!

Performance Warning: Calculating a matrix inverse is one of the more expensive operations you can ask a GPU to do. Running the code above for every vertex in a mesh with thousands of vertices is a recipe for poor performance. This is why the idiomatic "Bevy Way" is to compute it once on the CPU.

The Professional Approach: Precomputed Uniforms

If you are writing a completely custom material from scratch and not importing bevy_pbr::mesh_functions, the best practice is to mimic Bevy's approach: compute the normal matrix in Rust and pass it to your shader as a uniform.

Step 1: In Your Rust Code

When you define your custom material, you'll pass both the model and normal matrices.

// In your custom material's uniform struct
use bevy::render::render_resource::ShaderType;

#[derive(ShaderType)]
struct MyMaterialUniforms {
    model_matrix: Mat4,
    normal_matrix: Mat3, // Use a Mat3 for the 3x3 normal matrix
}

// In the system that prepares your material's bind group
fn prepare_my_materials(
    // ... query for your material and the object's GlobalTransform ...
) {
    for (material_handle, transform) in query.iter() {
        let model_matrix = transform.compute_matrix();

        // glam (Bevy's math library) makes this easy!
        let normal_matrix = Mat3::from_mat4(model_matrix).inverse().transpose();

        // Now, write these values to the material's uniform buffer.
        // ...
    }
}

Step 2: In Your WGSL Shader

The shader now becomes beautifully simple. It just receives the pre-computed matrix and uses it.

struct MyMaterial {
    model_matrix: mat4x4<f32>,
    normal_matrix: mat3x3<f32>, // The matrix we computed in Rust
}

@group(2) @binding(0)
var<uniform> material: MyMaterial;

// ... inside your @vertex function ...

// Just use the precomputed matrix. Simple and fast!
let world_normal = material.normal_matrix * in.normal;
out.world_normal = normalize(world_normal);

This approach gives you the full performance benefit of the "Bevy Way" while allowing for a completely independent, custom material and shader.

Renormalizing After Transformation

There is a simple, non-negotiable rule in shader programming: after transforming a normal, and again after it's been interpolated, you must normalize it. This is arguably the most common source of subtle lighting errors for beginners. Neglecting this step will break your lighting calculations, which almost universally assume that normal vectors have a length of exactly 1.0.

There are two primary reasons why a normal's length can change.

Reason 1: The Transformation Itself

Even if your input normal from the mesh is a perfect unit vector (length 1.0), the act of multiplying it by the normal matrix can alter its length. This is particularly true when non-uniform scaling is involved. The matrix correctly changes the normal's direction to keep it perpendicular to the scaled surface, but it doesn't guarantee its length will remain 1.0.
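The length change is easy to quantify with the earlier 2D numbers (plain Rust, illustrative only): the normal matrix for a (2.0, 0.5) scale maps the unit normal (-0.707, 0.707) to roughly (-0.354, 1.414), whose length is about 1.46, not 1.0:

```rust
fn main() {
    let n = (-0.707_f64, 0.707_f64); // unit-length input normal

    // The normal matrix for a diagonal scale (2.0, 0.5) is diag(1/2.0, 1/0.5).
    let transformed = (n.0 / 2.0, n.1 / 0.5);

    let len = (transformed.0 * transformed.0 + transformed.1 * transformed.1).sqrt();
    println!("length after transform: {len:.3}"); // ~1.458, NOT 1.0
    assert!((len - 1.0).abs() > 0.4);
}
```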

Therefore, the first normalize() call is essential immediately after the transformation in the vertex shader.

// In the vertex shader...

// Transform the normal using the correct matrix
let world_normal = mesh_functions::mesh_normal_local_to_world(
    in.normal,
    in.instance_index
);

// CORRECT: Immediately normalize the result before passing it on.
out.world_normal = normalize(world_normal);

Reason 2: Interpolation (The Hidden Trap)

The second, more subtle reason happens between the vertex and fragment shaders. For each pixel it renders, the GPU's rasterizer looks at the normals from the vertices of the triangle that pixel belongs to and linearly interpolates them to create a smooth normal for that specific fragment.

This interpolation process does not preserve length.

Imagine a simple 2D example. The normal at one vertex is (1, 0) and at another is (0, 1). Both are perfect unit vectors.

What is the interpolated normal exactly halfway between them?

  • Interpolated Vector: (0.5, 0.5)

  • Length: sqrt(0.5*0.5 + 0.5*0.5) = sqrt(0.25 + 0.25) = sqrt(0.5) ≈ 0.707

The interpolated vector is not a unit vector! Its length is less than 1.0. This means the normal vector arriving in your fragment shader is almost never perfectly normalized.
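Here is the same trap reproduced in plain Rust (illustrative only): linearly blending the two unit normals and measuring the length at each step shows it dipping to about 0.707 at the midpoint, which would darken a diffuse dot() term by nearly 30% if left unnormalized:

```rust
fn main() {
    let a = (1.0_f64, 0.0_f64); // unit normal at vertex A
    let b = (0.0_f64, 1.0_f64); // unit normal at vertex B

    for step in 0..=4 {
        let t = step as f64 / 4.0;
        // Plain linear interpolation, exactly what the rasterizer does.
        let lerped = (a.0 * (1.0 - t) + b.0 * t, a.1 * (1.0 - t) + b.1 * t);
        let len = (lerped.0 * lerped.0 + lerped.1 * lerped.1).sqrt();
        println!("t = {t:.2}  length = {len:.3}");
    }

    // Midpoint check: length is sqrt(0.5), about 0.707, not 1.0.
    let mid_len = (0.5_f64 * 0.5 + 0.5 * 0.5).sqrt();
    assert!((mid_len - 0.5_f64.sqrt()).abs() < 1e-12);
    assert!(mid_len < 0.75);
}
```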

This leads to the second golden rule: Always re-normalize your normal vector at the beginning of your fragment shader.

// In the fragment shader...

@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
    // WRONG: `in.world_normal` is not guaranteed to be unit length!
    // let N = in.world_normal;

    // CORRECT: The very first thing you do is re-normalize the incoming normal.
    let N = normalize(in.world_normal);

    // Now proceed with lighting calculations...
    // let light_dot_product = dot(N, light_direction);
    // ...
}

What About the Performance Cost?

Beginners are often tempted to skip normalize() thinking it's an expensive operation. On modern GPUs, this is a false economy. The normalize() function is a highly optimized, low-level instruction.

  • Cost of normalize(): A few clock cycles. Negligible.

  • Cost of incorrect lighting: Catastrophic. Your game will look broken.

The performance gain from skipping this is almost zero, and the visual penalty is enormous.

When Can You Really Skip Normalizing?

For advanced optimization, you can sometimes skip it, but only if you meet all of these conditions:

  1. You are absolutely certain the transformation is rotation-only (no scale at all).

  2. The input normal from the mesh was already perfectly normalized.

  3. You are not interpolating the normal (e.g., using "flat" interpolation, common in low-poly styles).

In practice, for 99% of cases, especially when learning, the rule is simple: just normalize. Once in the vertex shader after transformation, and once again in the fragment shader before use.

Tangent Space and TBN Matrices

Correctly transforming vertex normals is a huge step, but it's only part of the story. To unlock advanced techniques like normal mapping - a cornerstone of modern real-time graphics - we need to establish a complete, local coordinate system at every single point on a mesh's surface. This is known as Tangent Space.

What is Tangent Space?

Imagine a tiny ladybug standing on the surface of a complex model. From her perspective, "up" isn't the world's Y-axis; it's the direction pointing straight away from the surface she's on. This is the Normal. If she walks forward along the grain of the surface texture, she is walking along the Tangent. The direction to her immediate left or right is the Bitangent.

Together, these three vectors - Tangent, Bitangent, and Normal - form the TBN frame.

  • N (Normal): The vector we already know, perpendicular to the surface ("up" for the ladybug).

  • T (Tangent): A vector that runs parallel to the surface, typically aligned with the U-axis of the mesh's UV coordinates (the "forward" direction for textures).

  • B (Bitangent): A vector that is also parallel to the surface, perpendicular to both the Normal and the Tangent. It is calculated using a cross product.

These three vectors form an orthonormal basis: they are all mutually perpendicular and have a length of 1.0. They define a complete 3D coordinate system that is "stuck" to the surface of the mesh, twisting and turning with it.

Why Do We Need Tangent Space?

The primary motivation for tangent space is normal mapping. A normal map is a special texture where the RGB values don't represent color, but instead store the X, Y, and Z components of normal vectors. This allows us to add incredibly detailed lighting information - like bumps, cracks, and pores - to a low-polygon model.

The key is that these normals are stored relative to the surface. They are defined in Tangent Space. For example, a flat blue color (0.5, 0.5, 1.0) in a normal map texture translates to a vector (0, 0, 1) in tangent space, which means "this part of the surface points straight out, along the local normal." A different color would represent a vector pointing at an angle, creating the illusion of a bump.
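The remap from texture color to vector is just a scale and an offset. A quick Rust illustration (standalone; the second texel value is a made-up example): the "flat blue" texel (0.5, 0.5, 1.0) decodes to (0, 0, 1), i.e. "straight along the local normal":

```rust
// Decode a normal-map texel from the [0,1] color range to a [-1,1] vector.
fn decode_normal(texel: [f64; 3]) -> [f64; 3] {
    [texel[0] * 2.0 - 1.0, texel[1] * 2.0 - 1.0, texel[2] * 2.0 - 1.0]
}

fn main() {
    // The characteristic flat blue of a normal map...
    let flat = decode_normal([0.5, 0.5, 1.0]);
    assert_eq!(flat, [0.0, 0.0, 1.0]); // ...means "straight out of the surface"

    // A texel leaning toward red encodes a normal tilted toward +X,
    // which is what creates the illusion of a bump when lit.
    let tilted = decode_normal([0.75, 0.5, 0.9]);
    println!("tilted normal: {tilted:?}");
}
```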

To use this information, our shader must perform a transformation:

  1. Read the normal vector from the texture (which is in Tangent Space).

  2. Use the TBN matrix to transform this vector from Tangent Space into World Space.

  3. Use the resulting World Space normal for our lighting calculations.

Getting the Tangent Vector in Bevy

Just like normals, tangents are an attribute of a Mesh. They are typically generated from the mesh's UV coordinates, as the tangent is defined to follow the U direction of the texture map.

You almost never need to calculate these yourself. Bevy provides a convenient helper function:

// In your Rust setup code, after creating a mesh...
// This requires the mesh to have UV coordinates.
mesh.generate_tangents().expect("Failed to generate tangents");

// The tangent attribute is now available on the mesh. Bevy's PBR
// material pipeline automatically picks it up. If you are writing
// a fully custom material, you would need to ensure the vertex
// attribute is enabled in your render pipeline.

The tangent attribute in Bevy is a vec4:

  • .xyz: The 3D direction of the tangent vector.

  • .w: A value called "handedness" (either 1.0 or -1.0). This is used to correct the orientation of the calculated Bitangent, ensuring the TBN frame points the right way on meshes with mirrored UVs.

Building the TBN Frame in the Vertex Shader

The goal of the vertex shader is to calculate the World Space versions of the T, B, and N vectors and pass them to the fragment shader.

  1. Transform the Normal: We already know how to do this correctly using the normal matrix via mesh_normal_local_to_world.

  2. Transform the Tangent: A tangent is a direction along the surface (think of it as the difference between two nearby positions), so it transforms with the standard model matrix itself, using a w component of 0.0 to drop the translation. It does not use the normal matrix.

  3. Calculate the Bitangent: The bitangent is simply the cross product of the world-space normal and tangent. We multiply by the tangent.w handedness value to ensure the correct orientation.

Here is the complete process inside a vertex shader:

// ... inside the @vertex function ...

let model = mesh_functions::get_world_from_local(in.instance_index);

// 1. Transform Normal and normalize
let world_normal = normalize(
    mesh_functions::mesh_normal_local_to_world(in.normal, in.instance_index)
);

// 2. Transform Tangent and normalize
let world_tangent = normalize(
    (model * vec4<f32>(in.tangent.xyz, 0.0)).xyz
);

// 3. Calculate Bitangent from the normalized N and T
let world_bitangent = cross(world_normal, world_tangent) * in.tangent.w;

// Pass these three vectors to the fragment shader via the output struct
out.world_normal = world_normal;
out.world_tangent = world_tangent;
out.world_bitangent = world_bitangent;

Using the TBN Matrix in the Fragment Shader

In the fragment shader, we assemble these three vectors into a mat3x3. This matrix serves as a bridge, allowing us to convert vectors from tangent space (like those from a normal map) into world space.

// ... inside the @fragment function ...

// First, re-normalize all incoming vectors to correct for interpolation.
let N = normalize(in.world_normal);
let T = normalize(in.world_tangent);
let B = normalize(in.world_bitangent);

// Assemble the TBN matrix. The columns of this matrix are our basis vectors.
let TBN = mat3x3<f32>(T, B, N);

// --- Normal Mapping Example ---
// (We will cover this in depth in a later phase)

// 1. Sample a normal from a texture. The value is in the [0,1] range.
// Note: We use textureSample() here, not textureSampleLevel(). The GPU
// can automatically calculate the correct mipmap level in a fragment
// shader because it has screen-space information, which is not
// available in the vertex shader.
// ("sampler" is a predeclared type name in WGSL and can't be reused as an
// identifier, so the sampler binding needs its own name, e.g. normal_map_sampler.)
let tangent_normal_from_texture = textureSample(normal_map, normal_map_sampler, in.uv).xyz;

// 2. Remap the normal from [0,1] to the [-1,1] vector range.
let tangent_normal = tangent_normal_from_texture * 2.0 - 1.0;

// 3. Transform the normal from tangent space to world space using the TBN matrix.
let final_world_normal = normalize(TBN * tangent_normal);

// 4. Use `final_world_normal` for all your lighting calculations.
// ...

Pro Tip: Gram-Schmidt Orthogonalization

Just as interpolation can mess up a vector's length, it can also slightly disrupt the perfect 90-degree angles between the T, B, and N vectors. For high-precision work, you can enforce orthogonality in the fragment shader using a process called Gram-Schmidt orthogonalization.

// In the fragment shader, an improved way to create the TBN basis
let N = normalize(in.world_normal);
let T = normalize(in.world_tangent);

// This forces the tangent to be perfectly perpendicular to the normal.
let T_ortho = normalize(T - dot(T, N) * N);

// Recalculate the bitangent from the now-orthogonal T and N.
let B_ortho = cross(N, T_ortho);

// This TBN matrix is guaranteed to be perfectly orthonormal.
let TBN = mat3x3<f32>(T_ortho, B_ortho, N);

This is a robust way to ensure your tangent space basis is clean and accurate, eliminating a potential source of subtle lighting artifacts.
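The re-orthogonalization step is easy to demonstrate in plain Rust (a standalone sketch with made-up numbers): start with a tangent that has drifted toward the normal, project that component out, and verify the result is perpendicular again:

```rust
fn dot(a: [f64; 3], b: [f64; 3]) -> f64 { a[0]*b[0] + a[1]*b[1] + a[2]*b[2] }

fn normalize(v: [f64; 3]) -> [f64; 3] {
    let len = dot(v, v).sqrt();
    [v[0]/len, v[1]/len, v[2]/len]
}

fn main() {
    let n = [0.0, 0.0, 1.0];            // surface normal
    let t = normalize([1.0, 0.0, 0.2]); // tangent that drifted toward N

    // Gram-Schmidt: remove the component of T that lies along N.
    let d = dot(t, n);
    let t_ortho = normalize([t[0] - d*n[0], t[1] - d*n[1], t[2] - d*n[2]]);

    println!("before: dot(T, N) = {:.4}", dot(t, n));       // nonzero
    println!("after:  dot(T, N) = {:.4}", dot(t_ortho, n)); // zero
    assert!(dot(t, n).abs() > 0.1);
    assert!(dot(t_ortho, n).abs() < 1e-12);
}
```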

Common Normal Transformation Errors

Theory is one thing, but debugging a shader that produces bizarre lighting is another. Most issues with normals boil down to a handful of common mistakes. Learning to recognize their symptoms will save you hours of frustration.

Error 1: Using the Model Matrix on Normals

This is the fundamental error this entire article is about, and it's the most common first mistake.

The Code

// WRONG: Using the model matrix instead of the normal matrix.
let world_normal = (model * vec4<f32>(in.normal, 0.0)).xyz;

The Symptom

Lighting will look correct on objects that are only rotated or uniformly scaled. However, as soon as you apply non-uniform scale (e.g., scale = (2.0, 0.5, 1.0)), the lighting will become dramatically incorrect. Highlights will appear stretched, squashed, or in the wrong places, and surfaces will look too dark or flat.

The Fix

Always use the proper function for normal transformation, which internally uses the normal matrix.

// CORRECT:
let world_normal = mesh_functions::mesh_normal_local_to_world(in.normal, in.instance_index);

Error 2: Forgetting to Normalize (In Either Shader)

This is the second most common mistake and produces more subtle, but still very wrong, results.

The Code

// In Vertex Shader
// WRONG: Not normalized after transformation.
out.world_normal = mesh_functions::mesh_normal_local_to_world(in.normal, in.instance_index);

// In Fragment Shader
// WRONG: Not re-normalized after interpolation.
let N = in.world_normal;

The Symptom

Lighting intensity will be incorrect. If the interpolated normal's length is less than 1.0 (which is typical), the dot() product in your lighting calculation will be smaller than it should be, making the surface appear darker. If the length were greater than 1.0, it would appear unusually bright or "blown out." The overall effect is inconsistent and often looks "dull."

The Fix

Normalize, normalize, normalize. Once in the vertex shader after transformation, and again in the fragment shader before any calculations.

// CORRECT (Vertex):
out.world_normal = normalize(mesh_functions::mesh_normal_local_to_world(in.normal, in.instance_index));

// CORRECT (Fragment):
let N = normalize(in.world_normal);

Error 3: Transforming Tangents Incorrectly

When building a TBN matrix, it's easy to get confused and transform the tangent vector the same way you transformed the normal.

The Code

// WRONG: Using the normal matrix on a tangent vector.
let world_tangent = mesh_functions::mesh_normal_local_to_world(in.tangent.xyz, in.instance_index);

The Symptom

Normal mapping and other tangent-space effects will be completely broken. The lighting will seem to come from the wrong direction relative to the surface details, creating a confusing and distorted look.

The Fix

Remember that tangents are directions along the surface and transform with the regular model matrix.

// CORRECT:
let model = mesh_functions::get_world_from_local(in.instance_index);
let world_tangent = (model * vec4<f32>(in.tangent.xyz, 0.0)).xyz;

Error 4: Incorrect Bitangent Calculation

A small mistake in calculating the bitangent can flip your tangent space upside-down.

The Code

// WRONG: Forgetting to multiply by the handedness component (tangent.w).
let world_bitangent = cross(world_normal, world_tangent);

The Symptom

Similar to transforming tangents incorrectly. Normal mapped details might appear inverted (bumps look like divots) or lit from the opposite direction, but only on certain parts of your model (specifically, where UV islands have been mirrored by the 3D artist to save texture space).

The Fix

Always multiply the cross product by tangent.w.

// CORRECT:
let world_bitangent = cross(world_normal, world_tangent) * in.tangent.w;

Debugging with Visualizations

The single most effective way to debug these issues is to stop guessing and start visualizing. Instead of calculating lighting, have your fragment shader output the normal vector itself as a color.

// A simple function to map a vector direction [-1, 1] to a color [0, 1]
fn direction_to_color(dir: vec3<f32>) -> vec3<f32> {
    return dir * 0.5 + 0.5;
}

@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
    let N = normalize(in.world_normal);
    let color = direction_to_color(N);
    return vec4<f32>(color, 1.0);
}

This "normal visualizer" will immediately show you if your normals are behaving correctly:

  • Smooth Gradients: A sphere should show smooth color transitions.

  • Consistent Colors: A cube should have a solid, distinct color for each face.

  • Rotation: When you rotate the object, the world-space colors should change accordingly.

  • Scaling: When you apply non-uniform scale, the colors should distort to match the new surface orientation. If they don't, you're not using the normal matrix correctly.
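To know what colors to expect, it helps to tabulate the dir * 0.5 + 0.5 remap for the canonical world axes. A small Rust sketch of the same mapping (illustrative; the color names are informal):

```rust
// The same mapping as the shader's direction_to_color(): a direction
// component in [-1, 1] becomes a color channel in [0, 1].
fn direction_to_color(dir: [f64; 3]) -> [f64; 3] {
    [dir[0] * 0.5 + 0.5, dir[1] * 0.5 + 0.5, dir[2] * 0.5 + 0.5]
}

fn main() {
    // Faces pointing along each world axis get these telltale colors:
    assert_eq!(direction_to_color([1.0, 0.0, 0.0]), [1.0, 0.5, 0.5]);  // +X: salmon
    assert_eq!(direction_to_color([0.0, 1.0, 0.0]), [0.5, 1.0, 0.5]);  // +Y: light green
    assert_eq!(direction_to_color([0.0, 0.0, 1.0]), [0.5, 0.5, 1.0]);  // +Z: light blue
    assert_eq!(direction_to_color([0.0, -1.0, 0.0]), [0.5, 0.0, 0.5]); // -Y: purple
    println!("axis colors check out");
}
```

Memorizing even just "up-facing surfaces are light green, camera-facing surfaces are light blue" makes normal-visualizer output readable at a glance.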


Complete Example: Normal Visualization System

The best way to solidify these concepts and build a powerful debugging tool for the future is to create a shader that can visually display normals, tangents, and bitangents. By mapping these vectors to RGB colors, we can see at a glance whether our transformations are working correctly. This is an indispensable technique used by graphics programmers everywhere.

Our Goal

We will build a complete Bevy application that displays several 3D meshes. We will create a custom material and WGSL shader that, instead of performing lighting, colors the mesh based on its vertex attributes. We will add interactive controls to switch between visualizing normals, tangents, and bitangents, and to toggle between local and world space, allowing us to directly observe the effects of our transformation logic.

What This Project Demonstrates

  • Correct Normal Transformation: You will see the direct output of mesh_normal_local_to_world and confirm that it works correctly under rotation and non-uniform scaling.

  • TBN Frame Calculation: The visualization will show the Tangent, Bitangent, and Normal vectors, confirming our TBN frame is constructed properly.

  • Local vs. World Space: Toggling between the two spaces provides a clear, intuitive understanding of how transformations affect orientation vectors.

  • The Importance of the Normal Matrix: A dedicated control will apply non-uniform scaling, proving visually why the model matrix fails and the normal matrix is necessary for correct results.

  • A Powerful Debugging Tool: You can adapt this shader and reuse it in your own projects whenever you suspect your lighting is wrong.

The Shader (assets/shaders/d02_06_normal_visualization.wgsl)

This single WGSL file contains both the vertex and fragment shaders.

  • The vertex shader is responsible for transforming the position, normal, and tangent from local to world space. It calculates the bitangent and passes the full set of local-space and world-space vectors to the fragment shader.

  • The fragment shader receives these interpolated vectors. Based on uniform parameters controlled by our Rust code, it selects which vector to display (Normal, Tangent, or Bitangent) and from which space (Local or World). It then uses a helper function to map the [-1, 1] vector direction to a [0, 1] RGB color for output.

#import bevy_pbr::mesh_functions
#import bevy_pbr::view_transformations::position_world_to_clip

struct NormalVisualizationMaterial {
    display_mode: u32,  // 0=normals, 1=tangents, 2=bitangents, 3=lighting test, 4=TBN combined
    show_world_space: u32,  // 0=local space, 1=world space
}

@group(2) @binding(0)
var<uniform> material: NormalVisualizationMaterial;

struct VertexInput {
    @builtin(instance_index) instance_index: u32,
    @location(0) position: vec3<f32>,
    @location(1) normal: vec3<f32>,
    @location(2) uv: vec2<f32>,
    @location(3) tangent: vec4<f32>,
}

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>,
    @location(0) world_position: vec3<f32>,
    @location(1) world_normal: vec3<f32>,
    @location(2) world_tangent: vec3<f32>,
    @location(3) world_bitangent: vec3<f32>,
    @location(4) local_normal: vec3<f32>,
    @location(5) local_tangent: vec3<f32>,
    @location(6) uv: vec2<f32>,
}

@vertex
fn vertex(in: VertexInput) -> VertexOutput {
    var out: VertexOutput;

    let model = mesh_functions::get_world_from_local(in.instance_index);

    // Transform position
    let world_position = mesh_functions::mesh_position_local_to_world(
        model,
        vec4<f32>(in.position, 1.0)
    );

    // Transform normal - using Bevy's helper which uses the normal matrix
    let world_normal = mesh_functions::mesh_normal_local_to_world(
        in.normal,
        in.instance_index
    );

    // Transform tangent - it's a direction, so w = 0.0 drops the translation;
    // the standard model matrix is correct here (unlike for normals)
    let world_tangent = (model * vec4<f32>(in.tangent.xyz, 0.0)).xyz;

    // Calculate bitangent
    // Normalize N and T first: the cross of two perpendicular unit
    // vectors is itself unit length
    let N = normalize(world_normal);
    let T = normalize(world_tangent);
    let B = cross(N, T) * in.tangent.w;  // Include handedness

    // Store both local and world space versions
    out.clip_position = position_world_to_clip(world_position.xyz);
    out.world_position = world_position.xyz;
    out.world_normal = N;
    out.world_tangent = T;
    out.world_bitangent = B;
    out.local_normal = in.normal;
    out.local_tangent = in.tangent.xyz;
    out.uv = in.uv;

    return out;
}

// Convert a direction vector to RGB color
// Maps [-1,1] range to [0,1] color range
fn direction_to_color(dir: vec3<f32>) -> vec3<f32> {
    return dir * 0.5 + 0.5;
}

// Simple lighting for lighting test mode
fn calculate_simple_lighting(normal: vec3<f32>, position: vec3<f32>) -> vec3<f32> {
    // Directional light
    let light_dir = normalize(vec3<f32>(1.0, 1.0, 1.0));
    let diffuse = max(0.0, dot(normal, light_dir));

    // View direction for specular (approximation: treats the camera as
    // sitting at the world origin, which is fine for this debug test)
    let view_dir = normalize(-position);
    let half_vec = normalize(light_dir + view_dir);
    let specular = pow(max(0.0, dot(normal, half_vec)), 32.0);

    // Combine
    let ambient = 0.2;
    let light_color = vec3<f32>(1.0, 1.0, 0.9);

    return light_color * (ambient + diffuse * 0.7 + specular * 0.3);
}

@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
    // Re-normalize after interpolation
    let world_normal = normalize(in.world_normal);
    let world_tangent = normalize(in.world_tangent);
    let world_bitangent = normalize(in.world_bitangent);

    // Determine which space to use
    var normal_to_display: vec3<f32>;
    var tangent_to_display: vec3<f32>;
    var bitangent_to_display: vec3<f32>;

    if material.show_world_space == 1u {
        // World space
        normal_to_display = world_normal;
        tangent_to_display = world_tangent;
        bitangent_to_display = world_bitangent;
    } else {
        // Local space
        normal_to_display = normalize(in.local_normal);
        tangent_to_display = normalize(in.local_tangent);
        // Calculate local bitangent
        let local_N = normalize(in.local_normal);
        let local_T = normalize(in.local_tangent);
        bitangent_to_display = cross(local_N, local_T);
    }

    var color: vec3<f32>;

    if material.display_mode == 0u {
        // Mode 0: Display normals as color
        color = direction_to_color(normal_to_display);
    } else if material.display_mode == 1u {
        // Mode 1: Display tangents as color
        color = direction_to_color(tangent_to_display);
    } else if material.display_mode == 2u {
        // Mode 2: Display bitangents as color
        color = direction_to_color(bitangent_to_display);
    } else if material.display_mode == 3u {
        // Mode 3: Lighting test using normals
        color = calculate_simple_lighting(world_normal, in.world_position);
    } else if material.display_mode == 4u {
        // Mode 4: TBN combined visualization
        // Show each component in a different color channel
        let n_contribution = abs(dot(world_normal, vec3<f32>(0.0, 1.0, 0.0)));
        let t_contribution = abs(dot(world_tangent, vec3<f32>(1.0, 0.0, 0.0)));
        let b_contribution = abs(dot(world_bitangent, vec3<f32>(0.0, 0.0, 1.0)));

        color = vec3<f32>(
            t_contribution,  // Red channel = tangent alignment
            n_contribution,  // Green channel = normal alignment
            b_contribution   // Blue channel = bitangent alignment
        );
    } else {
        // Fallback: magenta (error color)
        color = vec3<f32>(1.0, 0.0, 1.0);
    }

    return vec4<f32>(color, 1.0);
}

The Rust Material (src/materials/d02_06_normal_visualization.rs)

This file defines the NormalVisualizationMaterial that connects our Rust logic to the shader. It defines a uniform struct to hold our settings (display_mode, show_world_space) and implements the Material trait. The specialize function is used here to ensure the render pipeline is configured to expect all the vertex attributes our shader needs (POSITION, NORMAL, UV_0, TANGENT), preventing runtime errors.

use bevy::pbr::{MaterialPipeline, MaterialPipelineKey};
use bevy::prelude::*;
use bevy::render::mesh::MeshVertexBufferLayoutRef;
use bevy::render::render_resource::{AsBindGroup, ShaderRef};
use bevy::render::render_resource::{RenderPipelineDescriptor, SpecializedMeshPipelineError};

mod uniforms {
    #![allow(dead_code)]

    use bevy::render::render_resource::ShaderType;

    #[derive(ShaderType, Debug, Clone, Copy)]
    pub struct NormalVisualizationMaterial {
        pub display_mode: u32,
        pub show_world_space: u32,
    }

    impl Default for NormalVisualizationMaterial {
        fn default() -> Self {
            Self {
                display_mode: 0,     // Normals
                show_world_space: 1, // World space
            }
        }
    }
}

pub use uniforms::NormalVisualizationMaterial as NormalVisualizationUniforms;

#[derive(Asset, TypePath, AsBindGroup, Debug, Clone)]
pub struct NormalVisualizationMaterial {
    #[uniform(0)]
    pub uniforms: NormalVisualizationUniforms,
}

impl Material for NormalVisualizationMaterial {
    fn vertex_shader() -> ShaderRef {
        "shaders/d02_06_normal_visualization.wgsl".into()
    }

    fn fragment_shader() -> ShaderRef {
        "shaders/d02_06_normal_visualization.wgsl".into()
    }

    fn specialize(
        _pipeline: &MaterialPipeline<Self>,
        descriptor: &mut RenderPipelineDescriptor,
        layout: &MeshVertexBufferLayoutRef,
        _key: MaterialPipelineKey<Self>,
    ) -> Result<(), SpecializedMeshPipelineError> {
        // Ensure we have all required vertex attributes
        let vertex_layout = layout.0.get_layout(&[
            Mesh::ATTRIBUTE_POSITION.at_shader_location(0),
            Mesh::ATTRIBUTE_NORMAL.at_shader_location(1),
            Mesh::ATTRIBUTE_UV_0.at_shader_location(2),
            Mesh::ATTRIBUTE_TANGENT.at_shader_location(3),
        ])?;

        descriptor.vertex.buffers = vec![vertex_layout];
        Ok(())
    }
}

Don't forget to add it to src/materials/mod.rs:

// ... other materials
pub mod d02_06_normal_visualization;

The Demo Module (src/demos/d02_06_normal_visualization.rs)

The Bevy application logic sets up our scene and handles interactivity. The setup system creates three distinct meshes (a sphere, a cube, and a torus) to showcase how the visualization looks on different kinds of geometry. Crucially, it calls generate_tangents() on each mesh before applying our custom material; without tangent data, the TANGENT attribute our pipeline requires would be missing. The handle_input system listens for key presses and updates the material's uniform values, letting us switch visualization modes interactively and toggle the non-uniform scaling test.

use crate::materials::d02_06_normal_visualization::{
    NormalVisualizationMaterial, NormalVisualizationUniforms,
};
use bevy::prelude::*;
use std::f32::consts::PI;

#[derive(Component)]
struct RotatingObject {
    speed: f32,
}

#[derive(Component)]
struct ScalingObject {
    base_scale: Vec3,
}

pub fn run() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(MaterialPlugin::<NormalVisualizationMaterial>::default())
        .add_systems(Startup, setup)
        .add_systems(
            Update,
            (handle_input, rotate_objects, scale_objects, update_ui),
        )
        .run();
}

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<NormalVisualizationMaterial>>,
) {
    // Create material
    let material = materials.add(NormalVisualizationMaterial {
        uniforms: NormalVisualizationUniforms::default(),
    });

    // Sphere - uniform geometry (good for seeing smooth normals)
    let mut sphere_mesh = Sphere::new(1.0).mesh().uv(32, 16);
    sphere_mesh
        .generate_tangents()
        .expect("Failed to generate tangents");

    commands.spawn((
        Mesh3d(meshes.add(sphere_mesh)),
        MeshMaterial3d(material.clone()),
        Transform::from_xyz(-3.0, 0.0, 0.0),
        RotatingObject { speed: 0.5 },
    ));

    // Cube - flat faces (good for seeing face normals)
    let mut cube_mesh = Cuboid::new(1.5, 1.5, 1.5).mesh().build();
    cube_mesh
        .generate_tangents()
        .expect("Failed to generate tangents");

    commands.spawn((
        Mesh3d(meshes.add(cube_mesh)),
        MeshMaterial3d(material.clone()),
        Transform::from_xyz(0.0, 0.0, 0.0),
        RotatingObject { speed: 0.3 },
    ));

    // Torus - complex geometry (good for seeing tangent space)
    let mut torus_mesh = Torus::new(0.8, 0.3).mesh().build();
    torus_mesh
        .generate_tangents()
        .expect("Failed to generate tangents");

    commands.spawn((
        Mesh3d(meshes.add(torus_mesh)),
        MeshMaterial3d(material.clone()),
        Transform::from_xyz(3.0, 0.0, 0.0),
        RotatingObject { speed: 0.7 },
        ScalingObject {
            base_scale: Vec3::ONE,
        },
    ));

    // Lighting (for lighting test mode)
    commands.spawn((
        DirectionalLight {
            illuminance: 10000.0,
            shadows_enabled: false,
            ..default()
        },
        Transform::from_rotation(Quat::from_euler(EulerRot::XYZ, -PI / 4.0, PI / 4.0, 0.0)),
    ));

    // Camera
    commands.spawn((
        Camera3d::default(),
        Transform::from_xyz(0.0, 3.0, 8.0).looking_at(Vec3::ZERO, Vec3::Y),
    ));

    // UI
    commands.spawn((
        Text::new(
            "[1-5] Display Mode | [Space] Toggle Space | [S] Toggle Scaling\n\
             \n\
             Mode: Normals | Space: World | Scaling: Off\n\
             \n\
             Color Mapping:\n\
             - Red: X+ / Right\n\
             - Green: Y+ / Up\n\
             - Blue: Z+ / Forward",
        ),
        Node {
            position_type: PositionType::Absolute,
            top: Val::Px(10.0),
            left: Val::Px(10.0),
            ..default()
        },
        TextFont {
            font_size: 16.0,
            ..default()
        },
        TextColor(Color::WHITE),
        BackgroundColor(Color::srgba(0.0, 0.0, 0.0, 0.7)),
    ));
}

fn handle_input(
    keyboard: Res<ButtonInput<KeyCode>>,
    mut materials: ResMut<Assets<NormalVisualizationMaterial>>,
    mut scaling_query: Query<(&mut Transform, &ScalingObject)>,
) {
    for (_, material) in materials.iter_mut() {
        // Display mode
        if keyboard.just_pressed(KeyCode::Digit1) {
            material.uniforms.display_mode = 0; // Normals
        }
        if keyboard.just_pressed(KeyCode::Digit2) {
            material.uniforms.display_mode = 1; // Tangents
        }
        if keyboard.just_pressed(KeyCode::Digit3) {
            material.uniforms.display_mode = 2; // Bitangents
        }
        if keyboard.just_pressed(KeyCode::Digit4) {
            material.uniforms.display_mode = 3; // Lighting test
        }
        if keyboard.just_pressed(KeyCode::Digit5) {
            material.uniforms.display_mode = 4; // TBN combined
        }

        // Toggle world/local space
        if keyboard.just_pressed(KeyCode::Space) {
            material.uniforms.show_world_space = 1 - material.uniforms.show_world_space;
        }
    }

    // Toggle scaling (demonstrates normal matrix importance)
    if keyboard.just_pressed(KeyCode::KeyS) {
        for (mut transform, scaling_obj) in scaling_query.iter_mut() {
            if transform.scale == scaling_obj.base_scale {
                // Apply non-uniform scale
                transform.scale = Vec3::new(2.0, 0.5, 1.0);
            } else {
                // Reset to base scale
                transform.scale = scaling_obj.base_scale;
            }
        }
    }
}

fn rotate_objects(time: Res<Time>, mut query: Query<(&mut Transform, &RotatingObject)>) {
    for (mut transform, rotating) in query.iter_mut() {
        transform.rotate_y(time.delta_secs() * rotating.speed);
    }
}

fn scale_objects(
    time: Res<Time>,
    // The torus carries both RotatingObject and ScalingObject, so a
    // Without<RotatingObject> filter here would match nothing.
    mut query: Query<(&mut Transform, &ScalingObject)>,
) {
    let time_val = time.elapsed_secs();

    for (mut transform, _) in query.iter_mut() {
        // Only scale if currently scaled (preserve user's S key toggle)
        if transform.scale != Vec3::ONE {
            // Animate the scaling
            let scale_x = 2.0 + (time_val * 0.5).sin() * 0.5;
            let scale_y = 0.5 + (time_val * 0.7).cos() * 0.3;
            transform.scale = Vec3::new(scale_x, scale_y, 1.0);
        }
    }
}

fn update_ui(
    materials: Res<Assets<NormalVisualizationMaterial>>,
    scaling_query: Query<&Transform, With<ScalingObject>>,
    mut text_query: Query<&mut Text>,
) {
    // No change-detection early-out here: the scaling status can change via
    // the Transform without the material assets changing, so rebuild the
    // text every frame (it's cheap).

    if let Some((_, material)) = materials.iter().next() {
        let mode_name = match material.uniforms.display_mode {
            0 => "Normals",
            1 => "Tangents",
            2 => "Bitangents",
            3 => "Lighting Test",
            4 => "TBN Combined",
            _ => "Unknown",
        };

        let space_name = if material.uniforms.show_world_space == 1 {
            "World"
        } else {
            "Local"
        };

        let scaling_status = if let Ok(transform) = scaling_query.single() {
            if transform.scale != Vec3::ONE {
                "On (Non-Uniform)"
            } else {
                "Off"
            }
        } else {
            "N/A"
        };

        for mut text in text_query.iter_mut() {
            **text = format!(
                "[1-5] Display Mode | [Space] Toggle Space | [S] Toggle Scaling\n\
                 \n\
                 Mode: {} | Space: {} | Scaling: {}\n\
                 \n\
                 Color Mapping:\n\
                 - Red: X+ / Right\n\
                 - Green: Y+ / Up\n\
                 - Blue: Z+ / Forward",
                mode_name, space_name, scaling_status
            );
        }
    }
}

Don't forget to add it to src/demos/mod.rs:

// ... other demos
pub mod d02_06_normal_visualization;

And register it in src/main.rs:

Demo {
    number: "2.6",
    title: "Normal Vector Transformation",
    run: demos::d02_06_normal_visualization::run,
},

Running the Demo

When you run the demo, you'll see three rotating objects, each colored according to the default visualization setting (World-space Normals). Use the keyboard to explore the different modes and observe the results.

Controls

| Key   | Action                                       |
|-------|----------------------------------------------|
| 1     | Display Normals as color.                    |
| 2     | Display Tangents as color.                   |
| 3     | Display Bitangents as color.                 |
| 4     | Display a Lighting Test using world normals. |
| 5     | Display a Combined TBN visualization.        |
| Space | Toggle between World Space and Local Space.  |
| S     | Toggle Non-Uniform Scaling on the torus.     |

What You're Seeing

| Mode            | What to Look For |
|-----------------|------------------|
| Normals (World) | Colors change as objects rotate. Red faces right (X+), Green faces up (Y+), Blue faces toward you (Z+). The sphere has smooth gradients. The cube has solid-colored faces. |
| Normals (Local) | Colors are "painted on" and do not change as objects rotate. This shows the normals relative to the object itself. |
| Tangents        | Shows the direction of the U-axis of the UV map. Useful for debugging texture mapping issues. |
| Lighting Test   | A simple check to see if the world-space normals are behaving correctly for a basic lighting model. The highlight should move smoothly across the surface. |
| Scaling Test    | Press 'S' to squash the torus. In Lighting mode, notice the highlight remains correct and plausible. This is the normal matrix in action. If we had used the model matrix, the lighting would appear stretched and incorrect. |

Key Takeaways

This has been a dense but critical topic. Internalizing these concepts will prevent the vast majority of lighting and transformation errors in your shaders.

  1. Normals are Not Positions: Normals represent orientation and must be treated differently from position vectors, which represent a location in space.

  2. The Normal Matrix is the Solution: For any transformation involving non-uniform scaling or shearing, you must use the Normal Matrix (transpose(inverse(model_matrix_3x3))) to transform normals correctly.

  3. Use Bevy's Helpers: The most efficient and reliable way to do this in Bevy is with mesh_functions::mesh_normal_local_to_world(). This function uses a normal matrix that Bevy pre-computes on the CPU for you.

  4. Normalize After Transformation: The normal matrix corrects a normal's direction but not necessarily its length. You must normalize the result in the vertex shader immediately after transforming it.

  5. Re-Normalize After Interpolation: Linear interpolation between vertices does not preserve a vector's length. You must re-normalize the normal vector at the very beginning of your fragment shader before using it in any calculations.

  6. Tangents Transform with the Model Matrix: Tangent vectors are directions along the surface; they are transformed correctly by the standard model matrix (with w = 0 to ignore translation), not the normal matrix.

  7. The TBN Frame is a Local Coordinate System: The Tangent, Bitangent, and Normal vectors form a complete coordinate system on the surface of your mesh, which is essential for advanced effects like normal mapping.

  8. Calculate the Bitangent Carefully: The bitangent is calculated with cross(world_normal, world_tangent) * tangent.w. Forgetting the tangent.w (handedness) will cause incorrect results on meshes with mirrored UVs.
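The handedness sign in takeaway 8 is easy to verify with a few lines of plain Rust (hand-rolled cross product, no Bevy; the `bitangent` helper is my own name for illustration):

```rust
// Shows how the tangent's w component flips the bitangent for mirrored UVs.

fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

fn bitangent(n: [f32; 3], t: [f32; 3], w: f32) -> [f32; 3] {
    let b = cross(n, t);
    [b[0] * w, b[1] * w, b[2] * w]
}

fn main() {
    let n = [0.0, 0.0, 1.0]; // surface facing +Z
    let t = [1.0, 0.0, 0.0]; // U axis along +X

    // Normal UVs (w = +1): bitangent points along +Y.
    assert_eq!(bitangent(n, t, 1.0), [0.0, 1.0, 0.0]);

    // Mirrored UVs (w = -1): bitangent flips to -Y. Forgetting w here
    // inverts the V axis of tangent space, which shows up as
    // "inside-out" normal maps on mirrored geometry.
    assert_eq!(bitangent(n, t, -1.0), [0.0, -1.0, 0.0]);
}
```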

What's Next?

You have now mastered one of the trickiest and most fundamental aspects of vertex shading. Understanding how to correctly handle different vertex attributes like positions, normals, and tangents provides a solid foundation for more advanced techniques.

In the next article, we will continue exploring the power of the vertex shader by diving into Instanced Rendering - an incredibly powerful technique for rendering thousands of similar objects with high performance. We will see how to leverage the @builtin(instance_index) to create massive, varied scenes while keeping draw calls to a minimum.

Next up: 2.7 - Instanced Rendering


Quick Reference

The Golden Rule

Normals are not positions. They represent orientation and must be transformed differently.

Normal Transformation

  • Problem: Non-uniform scaling breaks lighting if you use the standard model matrix on normals.

  • Solution: Use the Normal Matrix, which is specifically designed to preserve the correct surface orientation.

  • Formula: Normal Matrix = transpose(inverse(model_3x3))

  • Bevy Practice: Use the built-in mesh_functions::mesh_normal_local_to_world() function, as it uses a pre-calculated Normal Matrix for you.
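One useful consequence of the formula: for a pure rotation R, the inverse is the transpose, so transpose(inverse(R)) collapses back to R itself; rotation-only (and uniform-scale) transforms are safe to apply to normals directly. A small Rust sanity check of this identity, with hand-rolled 3x3 helpers (no Bevy or glam):

```rust
// For a pure rotation R, R * transpose(R) == I, i.e. transpose(R) IS the
// inverse, so the normal matrix transpose(inverse(R)) equals R itself.

type Mat3 = [[f32; 3]; 3];

fn transpose(m: Mat3) -> Mat3 {
    let mut t = [[0.0; 3]; 3];
    for r in 0..3 {
        for c in 0..3 {
            t[c][r] = m[r][c];
        }
    }
    t
}

fn mul(a: Mat3, b: Mat3) -> Mat3 {
    let mut p = [[0.0; 3]; 3];
    for r in 0..3 {
        for c in 0..3 {
            for k in 0..3 {
                p[r][c] += a[r][k] * b[k][c];
            }
        }
    }
    p
}

fn approx_identity(m: Mat3) -> bool {
    (0..3).all(|r| {
        (0..3).all(|c| {
            let expected = if r == c { 1.0 } else { 0.0 };
            (m[r][c] - expected).abs() < 1e-5
        })
    })
}

fn main() {
    // Rotation of 30 degrees about the Z axis.
    let (s, c) = 30f32.to_radians().sin_cos();
    let rot: Mat3 = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]];

    // transpose(R) undoes R, confirming the identity above.
    assert!(approx_identity(mul(rot, transpose(rot))));
}
```

This is also why engines precompute the normal matrix on the CPU: inverting a matrix per vertex on the GPU would be wasteful, and for rigid transforms it is not even necessary.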

The Two Rules of Normalization

  1. Always normalize() in the vertex shader immediately after transformation.

  2. Always normalize() again in the fragment shader to correct for interpolation errors.
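Rule 2 comes down to one line of arithmetic: the linear interpolation of two unit vectors is shorter than unit length. A tiny Rust check (plain std, no Bevy; `lerp3` stands in for the rasterizer's per-fragment interpolation):

```rust
// Linear interpolation does not preserve length: the midpoint of two
// unit normals is strictly shorter than 1, so the fragment shader
// must renormalize before lighting.

fn lerp3(a: [f32; 3], b: [f32; 3], t: f32) -> [f32; 3] {
    [
        a[0] + (b[0] - a[0]) * t,
        a[1] + (b[1] - a[1]) * t,
        a[2] + (b[2] - a[2]) * t,
    ]
}

fn length(v: [f32; 3]) -> f32 {
    (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt()
}

fn main() {
    // Two unit normals 90 degrees apart, e.g. adjacent vertices of a
    // coarsely tessellated sphere.
    let n0 = [1.0, 0.0, 0.0];
    let n1 = [0.0, 1.0, 0.0];

    // What the rasterizer hands the fragment shader halfway between them:
    let mid = lerp3(n0, n1, 0.5); // (0.5, 0.5, 0.0)
    println!("interpolated length = {}", length(mid)); // ~0.707, not 1.0

    assert!(length(mid) < 0.999);
}
```

Feeding that shortened vector straight into a dot-product lighting term dims the surface between vertices, which is exactly the subtle faceted darkening the second normalize fixes.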

Tangent Space (The TBN Frame)

A local coordinate system on a mesh's surface, essential for normal mapping.

  • N (Normal): Points "out" from the surface.

  • T (Tangent): Runs "along" the surface, typically following the texture's U direction.

  • B (Bitangent): Runs "across" the surface, perpendicular to both N and T.

How to Transform the TBN Frame

  • Normal (N): Transforms using the Normal Matrix.

  • Tangent (T): Transforms using the standard Model Matrix (as a direction, with w = 0 to ignore translation).

  • Bitangent (B): Is not transformed directly, but is calculated in world space: cross(world_normal, world_tangent) * handedness.

The Best Debugging Technique

When in doubt, visualize your vectors. Output them directly as colors from the fragment shader to see if they are behaving as you expect.

  • Mapping: output_color = vector_direction * 0.5 + 0.5 maps a [-1, 1] vector to a [0, 1] color.
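The mapping is easy to sanity-check on paper; here is a quick Rust version of the shader's helper (plain std, no Bevy) confirming which colors to expect for the axis directions in the demo's legend:

```rust
// Maps a direction in [-1, 1] per component to an RGB color in [0, 1],
// mirroring the WGSL direction_to_color helper from this article.
fn direction_to_color(dir: [f32; 3]) -> [f32; 3] {
    [dir[0] * 0.5 + 0.5, dir[1] * 0.5 + 0.5, dir[2] * 0.5 + 0.5]
}

fn main() {
    // +Y (up) maps to (0.5, 1.0, 0.5): bright green.
    assert_eq!(direction_to_color([0.0, 1.0, 0.0]), [0.5, 1.0, 0.5]);
    // +X (right) maps to (1.0, 0.5, 0.5): reddish.
    assert_eq!(direction_to_color([1.0, 0.0, 0.0]), [1.0, 0.5, 0.5]);
    // -Z maps to (0.5, 0.5, 0.0): the blue channel goes dark.
    assert_eq!(direction_to_color([0.0, 0.0, -1.0]), [0.5, 0.5, 0.0]);
    // The zero vector (never a valid normal) lands at mid-gray,
    // a useful tell for degenerate or missing attribute data.
    assert_eq!(direction_to_color([0.0, 0.0, 0.0]), [0.5, 0.5, 0.5]);
}
```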