1.8 - Essential Shader Math Concepts

What We're Learning

Until now, we've treated transformation matrices as black boxes - essential tools for getting our models on screen, but with mysterious inner workings. This chapter opens that box.

We are about to master the engine that drives all 3D graphics: mathematics. This isn't a dry theory lesson; it's a creative awakening. Understanding this math is like learning the grammar of a language. Once you know the rules, you can stop reciting phrases and start writing poetry. You gain the power to fix lighting bugs, invent mind-bending visual effects, and build shaders that are not just correct, but elegant and efficient.

Ever wanted to make a billboard sprite that always faces the camera? Or create a shockwave effect that procedurally ripples across your scene? These aren't just buttons in an editor; they are the direct result of applying a few lines of vector and matrix math. We will demystify the core concepts that power every transformation in every shader you will ever write, shifting your perspective from "moving objects" to "transforming space itself."

By the end of this article, you will have a solid, practical understanding of:

  • The Dual Role of Vectors: Understand how a single data structure represents both a position in space and an abstract direction, and why the distinction is crucial.

  • The Geometric Power of dot and cross: Uncover the intuitive, geometric meaning behind the two most important vector operations: the dot product ("how aligned are two vectors?") and the cross product ("what's the perpendicular direction?").

  • Matrices as Transformation Recipes: See a 4x4 matrix not as a grid of numbers, but as a complete recipe for transforming space, encoding scale, rotation, and translation all in one.

  • Why Transformation Order is Everything: Learn why translation * rotation * scale is the universal standard and how reversing the order creates an orbiting effect instead of a local rotation.

  • The Magic of the 'W' Component: Demystify homogeneous coordinates and the fourth w component, the clever trick that allows matrices to handle translation and create perspective.

  • Why Normals Are Special: Discover why surface normals break under non-uniform scaling and must be transformed with a special "inverse transpose" matrix to keep lighting correct.

Part 1: The Building Blocks - Vectors

Before we can transform objects, we need a way to describe their properties in space. In 3D graphics, every position, direction, and orientation is built upon a single, fundamental concept: the vector. Mastering vector operations is the first and most crucial step to understanding shader math.

1. Introduction to Vectors

What is a vector?

At its core, a vector is a quantity that possesses both magnitude (length) and direction. Think of it as an arrow in space. It points somewhere, and it has a specific length. Crucially, a pure vector does not have a fixed starting position. The instruction "move 3 units forward and 2 units up" describes a vector; the displacement is the same no matter where you start.

In graphics programming, we use this single concept for two primary, related purposes:

  1. As a Direction: To represent things that have an orientation but no specific location. Examples include the direction of a light ray, the direction a camera is facing, or a surface normal (the direction a surface is pointing).

  2. As a Position (or Point): While mathematically distinct from a free-floating vector, we use the same data structure to store the coordinates of a point in space (like a vertex position). You can visualize this as a position vector - an arrow stretching from the world's origin (0,0,0) to that specific point.

Vector Components

We describe vectors numerically using components. Each component corresponds to a value along one of the coordinate system's axes.

  • A 2D vector (x, y) has two components.

  • A 3D vector (x, y, z) has three components.

  • A 4D vector (x, y, z, w) has four components. The w component is special and will become indispensable when we discuss perspective and matrices.

For example, the vector vec3(1.0, 0.0, 0.0) represents a pure direction pointing one unit along the positive X-axis.

Representing Vectors in WGSL

In WGSL, we use built-in types to define vectors of 32-bit floating-point numbers:

  • vec2<f32>: A two-component vector.

  • vec3<f32>: A three-component vector.

  • vec4<f32>: A four-component vector.

// A 3D vector representing a direction
let light_direction: vec3<f32> = vec3<f32>(0.5, 1.0, 0.2);

// A 4D vector representing a position.
// (We'll see why the 'w' is 1.0 for positions later)
let vertex_position: vec4<f32> = vec4<f32>(10.0, 5.0, 0.0, 1.0);

// Accessing components is easy
let x_pos = vertex_position.x;
let y_pos = vertex_position.y;

// You can also "swizzle" components to create new, smaller vectors.
// This is a common and convenient shorthand.
let xy_pos: vec2<f32> = vertex_position.xy;
let xyz_pos: vec3<f32> = vertex_position.xyz;

2. Defining Our Space: Right-Handed Coordinates

Before we can operate on vectors, we must agree on the layout of our 3D world. Which way is "up"? Which way is "forward"? This is defined by the handedness of our coordinate system. It's an arbitrary but critical convention, like deciding which side of the road to drive on. If your game engine and your 3D modeling software disagree on handedness, you'll end up with mirrored or upside-down models.

Right-Handed System (Bevy, Blender, Vulkan)

This is the standard for Bevy, Vulkan, OpenGL, and Blender. You can determine the axis directions using your right hand:

  1. Point your thumb along the Positive X axis (Right).

  2. Point your index finger along the Positive Y axis (Up).

  3. Your middle finger, bent 90 degrees toward your palm, now points along the Positive Z axis (Out of the screen, toward the viewer).

Note that +Z points toward the viewer, which is why cameras in these systems look down the negative Z axis.

Left-Handed System

This system is used by DirectX and Unity. The rule is the same, but you use your left hand. The result is that the Z-axis points in the opposite direction (into the screen).

For this entire series, we will always assume a right-handed coordinate system. This consistency is what allows operations like the cross product to have a predictable, reliable outcome.

3. Basic Vector Operations

These are the everyday operations you'll perform constantly in shaders.

Vector Addition and Subtraction

These operations are performed component-wise, meaning you simply add or subtract the corresponding x, y, and z components.

  • a + b = (a.x + b.x, a.y + b.y, ...)

  • a - b = (a.x - b.x, a.y - b.y, ...)

Geometrically, adding a and b is like placing the tail of vector b at the head of vector a. The result is the vector from the tail of a to the new head of b.

Vector subtraction is one of the most useful tools in your arsenal. The expression A - B gives you the vector that points from point B to point A.

Scalar Multiplication and Division

Multiplying or dividing a vector by a single number (a scalar) scales its magnitude. The operation is component-wise: scalar * v = (scalar * v.x, scalar * v.y, ...).

  • If scalar > 1, the vector gets longer.

  • If 0 < scalar < 1, the vector gets shorter.

  • If scalar < 0, the vector flips and points in the opposite direction.
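The component-wise operations above can be sketched on the CPU side. This is a minimal Rust illustration, with plain arrays standing in for WGSL's vec3<f32>:

```rust
fn main() {
    let a = [3.0, 2.0, 0.0];
    let b = [1.0, 1.0, 1.0];

    // a - b points from point B to point A
    let a_minus_b = [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    println!("{:?}", a_minus_b); // [2.0, 1.0, -1.0]

    // A negative scalar flips the direction (and here doubles the length)
    let s = -2.0;
    let flipped = [s * b[0], s * b[1], s * b[2]];
    println!("{:?}", flipped); // [-2.0, -2.0, -2.0]
}
```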

Vector Normalization

Normalization is the process of adjusting a vector's length to be exactly 1 while preserving its direction. The result is called a unit vector. Unit vectors are essential in graphics because they represent pure direction, with magnitude factored out. This is critical for lighting calculations where only the direction to the light matters, not the distance.

In WGSL, you use the normalize() built-in function.

// A vector with an arbitrary length
let v = vec3<f32>(3.0, 4.0, 0.0);

// a_unit will be (0.6, 0.8, 0.0), which has a length of 1.0
let a_unit = normalize(v);

Vector Length / Magnitude

The length of a vector is calculated using the Pythagorean theorem: sqrt(x*x + y*y + z*z). In WGSL, you use the length() function. Normalizing a vector is mathematically equivalent to dividing the vector by its own length: normalize(v) is simply a highly optimized version of v / length(v).
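To see the relationship between length() and normalize() concretely, here is a small Rust sketch of both, using [f32; 3] arrays in place of vec3<f32>:

```rust
// CPU-side stand-ins for WGSL's length() and normalize() built-ins
fn length(v: [f32; 3]) -> f32 {
    (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt()
}

fn normalize(v: [f32; 3]) -> [f32; 3] {
    let len = length(v);
    [v[0] / len, v[1] / len, v[2] / len]
}

fn main() {
    let v = [3.0, 4.0, 0.0];
    println!("{}", length(v));            // 5 (a 3-4-5 triangle)
    println!("{:?}", normalize(v));       // [0.6, 0.8, 0.0]
    println!("{}", length(normalize(v))); // 1 (a unit vector)
}
```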

4. Advanced Vector Operations

These two operations are the heart of 3D graphics, forming the basis for everything from lighting models to procedural generation.

Dot Product

The dot product takes two vectors and returns a single scalar value. It answers the question: "How much do these two vectors point in the same direction?"

It measures the alignment, or "agreement," between the two vectors:

  • dot(a, b) > 0: The angle between the vectors is less than 90°. They point in a generally similar direction.

  • dot(a, b) = 0: The vectors are exactly perpendicular (orthogonal) to each other.

  • dot(a, b) < 0: The angle is greater than 90°. They point in generally opposite directions.

For normalized (unit) vectors, the dot product has a powerful, specific meaning: it gives you the cosine of the angle between them. This is a cornerstone of nearly all lighting calculations.

How does the simple formula (a.x*b.x) + (a.y*b.y) + (a.z*b.z) manage to measure alignment? Think of it as calculating an "agreement score" for each axis and summing the results.

  • The term a.x * b.x checks for agreement on the X-axis. If both a.x and b.x are positive (or both are negative), their product is positive - they agree. If they have opposite signs, the product is negative - they disagree.

  • The same logic applies to the Y and Z axes.

  • The final sum is the net agreement across all three dimensions. A large positive result means high overall agreement, while a large negative result means they point in opposite directions.

This simple component-wise multiplication and sum is a computationally cheap yet powerful way to geometrically project one vector onto another and measure the result.

To compute it, multiply the corresponding components of the two vectors and sum the results:

let a = vec3<f32>(1.0, 2.0, 3.0);
let b = vec3<f32>(4.0, 5.0, 6.0);

// result = (1*4) + (2*5) + (3*6) = 4 + 10 + 18 = 32.0
let result = dot(a, b);

Practical applications:

  • Diffuse Lighting: How much light should a surface receive? Calculate the dot product of the surface normal and the light direction. A high value means the surface squarely faces the light; a low or negative value means it's angled away or in shadow.

  • Checking Visibility: Is an enemy facing the player? Calculate the dot product of the enemy's "forward" vector and the vector pointing from the enemy to the player. If the result is positive, the player is generally in front of the enemy.
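Both applications reduce to reading the sign and size of a single number. Here is a Rust sketch of the dot product, with illustrative values for the visibility check (the names are made up for the example):

```rust
// CPU-side stand-in for WGSL's dot() built-in
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

fn main() {
    // The worked example above: (1*4) + (2*5) + (3*6) = 32
    println!("{}", dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])); // 32

    // Visibility check: enemy faces +Z, the player is directly behind it
    let enemy_forward = [0.0, 0.0, 1.0];
    let to_player = [0.0, 0.0, -1.0];
    println!("{}", dot(enemy_forward, to_player) > 0.0); // false: not in front
}
```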

Cross Product

The cross product takes two 3D vectors and returns a new 3D vector that is perpendicular to both of the inputs. It's how you generate a third direction from two existing ones.

The direction of the resulting vector follows the right-hand rule.

To find the direction of cross(a, b) with your right hand:

  1. Point your index finger in the direction of vector a.

  2. Curl your middle finger in the direction of vector b.

  3. Your thumb will now point in the direction of cross(a, b).

Crucially, the order matters! cross(a, b) points in the exact opposite direction of cross(b, a).

The cross product's formula might seem arbitrary, but it's a clever algebraic construction designed to solve a very specific puzzle: find a new vector c that is perpendicular to both a and b.

In vector math, "perpendicular" means the dot product is zero. Therefore, the formula for cross(a, b) was engineered to produce a vector c that is guaranteed to satisfy two conditions:

  1. dot(c, a) = 0

  2. dot(c, b) = 0

The formula achieves this by mixing the components of the input vectors in a very particular way (for example, the z component of the result, c.z, is calculated as a.x*b.y - a.y*b.x). While you don't need to memorize the formula, understanding that it's a purpose-built solution to the "find a perpendicular vector" problem is key.

In practice, you don't need to write this yourself - just use the built-in function:

let a = vec3<f32>(1.0, 0.0, 0.0); // Positive X-axis
let b = vec3<f32>(0.0, 1.0, 0.0); // Positive Y-axis

// c will be (0.0, 0.0, 1.0) -> the Positive Z-axis,
// which is perpendicular to both X and Y in a right-handed system.
let c = cross(a, b);

Note: The cross product is only defined for vec3.

Practical applications:

  • Calculating Normals: If you have two vectors representing the edges of a triangle on a mesh, you can use the cross product to calculate the normal vector for that triangle's surface.

  • Building a Coordinate System: A fundamental technique for orienting objects. If you know an object's "forward" direction and its desired "up" direction, you can use a cross product to find its "right" direction (right = cross(forward, up)). This gives you a complete, stable 3D orientation, essential for cameras and character controllers.
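The component-mixing formula and the order-sensitivity can be verified directly. This Rust sketch hand-rolls the cross product over [f32; 3] arrays:

```rust
// CPU-side stand-in for WGSL's cross() built-in
fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0], // the c.z term from the text
    ]
}

fn main() {
    // X cross Y = Z in a right-handed system
    println!("{:?}", cross([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])); // [0.0, 0.0, 1.0]

    // Order matters: Y cross X points the opposite way
    println!("{:?}", cross([0.0, 1.0, 0.0], [1.0, 0.0, 0.0])); // [0.0, 0.0, -1.0]
}
```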

Part 2: Transforming Space - Matrices

Now that we understand vectors, we can learn how to manipulate them. We need a tool that can rotate, scale, and move our vectors to position objects in a 3D scene. That tool is the matrix. It's best to think of a matrix not as a grid of numbers, but as a description of a transformed coordinate system.

5. Introduction to Matrices

What is a matrix?

A matrix is a rectangular grid of numbers, arranged in rows and columns. In 3D graphics, we primarily use 4x4 matrices. Forget thinking of it as just a table of data and start thinking of it as a transformation recipe. It's a compact, powerful set of instructions that tells the GPU how to take an input vector and produce a new, transformed output vector.

The first three columns of a typical transformation matrix define the new directions for the X, Y, and Z axes, respectively. The fourth column defines the new origin (translation).

Representing Matrices in WGSL

Just like vectors, WGSL has built-in types for matrices:

  • mat2x2<f32>: A 2x2 matrix (2 columns, 2 rows).

  • mat3x3<f32>: A 3x3 matrix.

  • mat4x4<f32>: A 4x4 matrix, the standard for 3D transformations.

WGSL Matrix Construction (Column-Major Order)

This is a critical concept that often trips up newcomers: WGSL, like OpenGL and Vulkan, is column-major. This means when you construct a matrix from a list of numbers, you are defining the columns one by one, from top to bottom.

// Constructing a 4x4 matrix in WGSL
let my_matrix = mat4x4<f32>(
    // Column 0
    1.0, 2.0, 3.0, 4.0,
    // Column 1
    5.0, 6.0, 7.0, 8.0,
    // Column 2
    9.0, 10.0, 11.0, 12.0,
    // Column 3
    13.0, 14.0, 15.0, 16.0
);

In standard mathematical notation (which is often written row-by-row), this matrix looks like this:

┌                        ┐
│  1.0  5.0   9.0  13.0  │  <- Row 0
│  2.0  6.0  10.0  14.0  │  <- Row 1
│  3.0  7.0  11.0  15.0  │  <- Row 2
│  4.0  8.0  12.0  16.0  │  <- Row 3
└                        ┘
   ^    ^     ^     ^
 Col0  Col1  Col2  Col3

Notice how the first four numbers in the constructor (1.0, 2.0, 3.0, 4.0) became the first column, not the first row.

Accessing Matrix Elements

You can access an entire column vector using a single index:

// Accessing Column 1
// result will be vec4<f32>(5.0, 6.0, 7.0, 8.0)
let col1 = my_matrix[1];

To access an individual element, you use a second index. The syntax is matrix[columnIndex][rowIndex].

// Accessing the element in Column 1, Row 2
// result is 7.0 in our example matrix
let element_1_2 = my_matrix[1][2];

Remembering this [col][row] indexing is essential for manually reading or manipulating matrix elements.
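Column-major layout is easy to get wrong, so here is a Rust sketch that mirrors the WGSL example, storing the matrix as an array of four column arrays:

```rust
fn main() {
    // Same numbers as the WGSL constructor: each inner array is a COLUMN
    let my_matrix: [[f32; 4]; 4] = [
        [1.0, 2.0, 3.0, 4.0],     // column 0
        [5.0, 6.0, 7.0, 8.0],     // column 1
        [9.0, 10.0, 11.0, 12.0],  // column 2
        [13.0, 14.0, 15.0, 16.0], // column 3
    ];

    // A single index yields a whole column, like my_matrix[1] in WGSL
    println!("{:?}", my_matrix[1]); // [5.0, 6.0, 7.0, 8.0]

    // A second index selects the row: [column][row]
    println!("{}", my_matrix[1][2]); // 7
}
```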

6. Matrix-Vector Multiplication: Applying Transformations

The operation matrix * vector is the cornerstone of shader math. Its purpose is to take a vector defined in one coordinate system and find its new coordinates after the system has been transformed by the matrix.

The Conceptual Model: A Linear Combination

Think of a vector's components, like v = (2, 3, 0), as a set of instructions: "Starting from the origin, move 2 units along the standard X-axis, then 3 units along the standard Y-axis." These instructions are relative to a standard, untransformed space.

A matrix M describes a new, transformed coordinate system with its own basis vectors (a new X-axis, Y-axis, etc.). The multiplication M * v answers the question:

If we follow the exact same instructions (2, 3, 0) but use the new axes defined by matrix M, where do we end up?

The calculation M * v is a linear combination of the matrix's columns, using the vector's components as weights:

result = (v.x * M's_X_axis_column) + (v.y * M's_Y_axis_column) + (v.z * M's_Z_axis_column) + (v.w * M's_Origin_column)

This provides a powerful geometric intuition: you are remapping a point from an old grid onto a new, transformed grid.

The Mechanical Process

While the linear combination is the what, the GPU needs a concrete algorithm for the how. To find each component of the output vector, it calculates the dot product of the input vector and one of the matrix's rows.

  • result.x = dot(v, M's_first_row)

  • result.y = dot(v, M's_second_row)

  • result.z = dot(v, M's_third_row)

  • result.w = dot(v, M's_fourth_row)

This "sum of products" procedure is the algorithm that computes the conceptual linear combination we described above. It's a different way of looking at the same calculation, but one that is much more suited for hardware implementation. You will simply use the * operator, but now you know both the "why" (linear combination) and the "how" (row-vector dot products) of the operation.

Thankfully, you never have to write this manually. The GPU's hardware is built to do this incredibly fast. You just write:

let transformed_vector = my_matrix * v;
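To make the "why" and "how" tangible, this Rust sketch computes M * v both ways - as a weighted sum of columns and as row-vector dot products - over a column-major [[f32; 4]; 4] matrix, and shows they agree:

```rust
fn mul_as_columns(m: [[f32; 4]; 4], v: [f32; 4]) -> [f32; 4] {
    // The "why": weight each column (a transformed basis vector) by the
    // matching component of v, then add the weighted columns together.
    let mut out = [0.0_f32; 4];
    for col in 0..4 {
        for row in 0..4 {
            out[row] += v[col] * m[col][row];
        }
    }
    out
}

fn mul_as_rows(m: [[f32; 4]; 4], v: [f32; 4]) -> [f32; 4] {
    // The "how": each output component is the dot product of v with one
    // row. Row r of a column-major matrix is m[0][r], m[1][r], m[2][r], m[3][r].
    let mut out = [0.0_f32; 4];
    for row in 0..4 {
        out[row] = (0..4).map(|col| m[col][row] * v[col]).sum();
    }
    out
}

fn main() {
    // A translation by (5, 3, 0), stored column-major
    let m = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [5.0, 3.0, 0.0, 1.0],
    ];
    let v = [2.0, 3.0, 0.0, 1.0];
    println!("{:?}", mul_as_columns(m, v)); // [7.0, 6.0, 0.0, 1.0]
    println!("{:?}", mul_as_rows(m, v));    // [7.0, 6.0, 0.0, 1.0]
}
```

Same calculation, two viewpoints: the column form carries the geometric intuition, while the row form matches how the hardware actually sums products.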

Homogeneous Coordinates (The Magic of W)

You might wonder why we use 4D vectors and 4x4 matrices for 3D graphics. The fourth component, w, is called a homogeneous coordinate. It's a clever mathematical trick that lets us perform all our transformations within a single, unified system.

The value of w distinguishes between two fundamental concepts:

  • For a Position (a point in space), we set w = 1.0. We want a point to be affected by the entire transformation, including translation.

  • For a Direction (a vector with no location), we set w = 0.0. A direction should be affected by rotation and scale, but moving the entire object shouldn't change the direction of its surface normals or light rays.

Look at the last column of a standard transformation matrix - the translation part (tx, ty, tz, 1). When we multiply, the w component of our vector determines if this translation is applied:

// For a Position (w=1):
new_x = (rotation/scale part) + (tx * 1.0); // Translation IS applied

// For a Direction (w=0):
new_x = (rotation/scale part) + (tx * 0.0); // Translation IS IGNORED

This elegant system is fundamental to how 3D graphics pipelines work.
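The w trick is easy to demonstrate: feed the same translation matrix a position (w = 1) and a direction (w = 0) and watch only the position move. A Rust sketch with the matrix stored as four columns, as in WGSL:

```rust
// Column-weighted matrix-vector multiply (same math the GPU performs)
fn transform(m: [[f32; 4]; 4], v: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0_f32; 4];
    for col in 0..4 {
        for row in 0..4 {
            out[row] += v[col] * m[col][row];
        }
    }
    out
}

fn main() {
    // Translation by (10, 0, 0); column 3 holds the new origin
    let translate = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [10.0, 0.0, 0.0, 1.0],
    ];

    let position = [0.0, 5.0, 0.0, 1.0];  // w = 1: translation applies
    let direction = [0.0, 5.0, 0.0, 0.0]; // w = 0: translation ignored

    println!("{:?}", transform(translate, position));  // [10.0, 5.0, 0.0, 1.0]
    println!("{:?}", transform(translate, direction)); // [0.0, 5.0, 0.0, 0.0]
}
```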

7. Basic Transformation Matrices

Let's see what the most common transformation recipes look like.

Identity Matrix

The identity matrix is the equivalent of the number 1 in multiplication. It does nothing. Multiplying any vector by the identity matrix gives you the same vector back. It has 1s on the diagonal and 0s everywhere else. Conceptually, its basis vectors are the standard X, Y, and Z axes, and its origin is at (0,0,0).

let identity = mat4x4<f32>(
    // Col 0: X-axis is (1,0,0)
    1.0, 0.0, 0.0, 0.0,
    // Col 1: Y-axis is (0,1,0)
    0.0, 1.0, 0.0, 0.0,
    // Col 2: Z-axis is (0,0,1)
    0.0, 0.0, 1.0, 0.0,
    // Col 3: Origin is at (0,0,0)
    0.0, 0.0, 0.0, 1.0
);

When is it useful?

  • Initialization: It's the perfect starting point for building a more complex transformation.

  • Default Value: It serves as a safe default for an optional transformation.

  • Resetting: You can reset an object's transformation by setting its matrix back to the identity.

Translation Matrix

A translation matrix simply moves a point. The translation values (tx, ty, tz) go in the fourth column, effectively setting a new origin. The basis vectors (the first three columns) remain unchanged.

// Move 5 units right (X), 3 units up (Y)
let tx = 5.0; let ty = 3.0; let tz = 0.0;
let translation = mat4x4<f32>(
    1.0, 0.0, 0.0, 0.0, // X-axis
    0.0, 1.0, 0.0, 0.0, // Y-axis
    0.0, 0.0, 1.0, 0.0, // Z-axis
    tx,  ty,  tz,  1.0  // New origin
);

Scale Matrix

A scale matrix changes the size of an object by stretching or shrinking the basis vectors. The scale factors for each axis go along the diagonal of the first three columns.

// Scale 2x in X, 0.5x in Y, 1x in Z (no change)
let sx = 2.0; let sy = 0.5; let sz = 1.0;
let scale = mat4x4<f32>(
    sx,  0.0, 0.0, 0.0, // X-axis is now 2 units long
    0.0, sy,  0.0, 0.0, // Y-axis is now 0.5 units long
    0.0, 0.0, sz,  0.0, // Z-axis is unchanged
    0.0, 0.0, 0.0, 1.0
);

Rotation Matrices

Rotation matrices pivot the basis vectors around an axis without changing their length. They are more complex, using trigonometric functions sin and cos to mix coordinate values in a way that corresponds to circular motion. A rotation matrix is simply a container for these new, rotated basis vectors.

Here are the standard matrices for performing a counter-clockwise rotation around the cardinal axes by an angle θ (theta).

// Common setup for all rotation matrices
let angle = ...; // The angle of rotation, in radians
let c = cos(angle);
let s = sin(angle);

// Rotation around the X-axis
let rotation_x = mat4x4<f32>(
    1.0, 0.0, 0.0, 0.0,
    0.0,  c,   s,  0.0,
    0.0, -s,   c,  0.0,
    0.0, 0.0, 0.0, 1.0
);

// Rotation around the Y-axis
let rotation_y = mat4x4<f32>(
     c,  0.0, -s,  0.0,
    0.0, 1.0, 0.0, 0.0,
     s,  0.0,  c,  0.0,
    0.0, 0.0, 0.0, 1.0
);

// Rotation around the Z-axis (standard 2D rotation)
let rotation_z = mat4x4<f32>(
     c,   s,  0.0, 0.0,
    -s,   c,  0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.0, 0.0, 0.0, 1.0
);

8. Matrix-Matrix Multiplication: Combining Transformations

What if you want to scale an object, then rotate it, and finally move it into position? You combine these transformations by multiplying their matrices together. The result is a single matrix that contains the entire sequence of transformations, which is far more efficient than applying them one by one.

Order Matters!

This is the most important rule of combining transformations: Matrix multiplication is not commutative. This means A * B is not the same as B * A.

When you multiply matrices, the transformations are applied from right to left. This works just like function composition in mathematics, where f(g(x)) applies g first, then f.

final_transform = translation * rotation * scale;

When you apply this to a vector v, it is evaluated as:

final_vector = (translation * (rotation * (scale * v)))

  1. First, the scale matrix is applied to the vector.

  2. Then, the rotation matrix is applied to that scaled result.

  3. Finally, the translation matrix is applied to the scaled-and-rotated result.

Let's see why this is critical:

Correct Order: translation * rotation * scale

Here, the object rotates around its own local origin, and then the fully-rotated object is moved to its final position. This is usually what you want.

Incorrect Order: rotation * translation

If you reverse the order, the object first moves away from the world origin, and then it rotates around that distant world origin. This makes the object orbit, which is almost never the desired behavior for positioning a single object.

The universal order for a "model" matrix (which positions a single model in the world) is translation * rotation * scale. Read from right-to-left, this correctly:

  1. Scales the object in place (around its local origin).

  2. Rotates the scaled object (around its local origin).

  3. Translates the scaled-and-rotated object to its final position in the world.
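The "local rotation vs. orbit" difference is easy to see with numbers. This Rust sketch uses a 2D point and two small functions standing in for a 90° Z-rotation matrix and a translation matrix (the function names are illustrative):

```rust
fn rotate_z_90(p: [f32; 2]) -> [f32; 2] {
    [-p[1], p[0]] // rotate 90 degrees counter-clockwise about Z
}

fn translate_x5(p: [f32; 2]) -> [f32; 2] {
    [p[0] + 5.0, p[1]] // move 5 units along +X
}

fn main() {
    let p = [1.0, 1.0];

    // translation * rotation: rotate around the local origin, then move.
    // The point stays near its translated position.
    println!("{:?}", translate_x5(rotate_z_90(p))); // [4.0, 1.0]

    // rotation * translation: move first, then rotate around the WORLD
    // origin - the point swings out to a completely different place.
    println!("{:?}", rotate_z_90(translate_x5(p))); // [-1.0, 6.0]
}
```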

Part 3: Advanced Matrix Concepts & Practical Applications

With a solid grasp of how matrices are built and combined, we can now assemble them into the full pipeline used by GPUs to render a 3D scene. This part covers the journey of a vertex from a model's local space all the way to your screen and the special matrix math required along the way.

9. The Model-View-Projection (MVP) Chain

The transformation of a single vertex from its position in a 3D model file to the final 2D pixel on your screen is a journey through multiple coordinate spaces. This journey is managed by a sequence of three key matrices, known as the Model-View-Projection (MVP) chain.

The four coordinate spaces on this journey are:

  1. Local Space (or Model Space): The coordinates of a vertex relative to the center of its own model. This is how vertices are defined in a tool like Blender, where the model's pivot is at (0,0,0).

  2. World Space: The coordinates of a vertex after its model has been positioned, rotated, and scaled within the larger 3D scene. All objects in the scene share this common space.

  3. View Space (or Camera Space): The coordinates of a vertex from the perspective of the camera. In this space, the camera is at the origin (0,0,0), looking down its own -Z axis. Everything else in the world is moved and rotated to be relative to the camera.

  4. Clip Space: The final coordinate system before the rasterizer. This is a normalized box (typically a cube from -1 to +1 on all axes) where anything outside is "clipped" and discarded. This space also encodes perspective information in the w component.

The MVP chain uses three matrices to manage these transitions:

  • Model Matrix: Transforms vertices from Local Space → World Space. This is the translation * rotation * scale matrix we built previously. Bevy provides this via functions like mesh_functions::mesh_position_local_to_world.

  • View Matrix: Transforms vertices from World Space → View Space. This matrix positions and orients the entire world to be seen from the camera's perspective.

  • Projection Matrix: Transforms vertices from View Space → Clip Space. This matrix applies perspective, making distant objects appear smaller, and prepares the coordinates for the screen.

In a Bevy vertex shader, you typically get the final clip-space position by calling a single helper function that combines all these matrices for you:

// Bevy's view bindings provide a helper that applies the View and
// Projection matrices to a world-space position. (The Model matrix has
// already been applied to produce world_position, e.g. via
// mesh_functions::mesh_position_local_to_world.)
let clip_position = position_world_to_clip(world_position.xyz);

Why this specific order? Because, reading from right to left, it perfectly mirrors the vertex's journey:

clip_position = Projection * (View * (Model * local_position))

  1. Model * local_position: First, take the vertex from local space and position the object in the world. The result is world_position.

  2. View * world_position: Next, take that world position and view it from the camera's perspective. The result is view_position.

  3. Projection * view_position: Finally, apply perspective to get the final clip_position.

10. Projection Matrices: Creating Depth

The projection matrix is responsible for how your 3D scene is flattened onto your 2D screen.

Orthographic Projection

This projection creates a "flat" view, mapping 3D coordinates directly to the screen without any perspective. Objects have the same size regardless of how near or far they are from the camera. This is the standard for 2D games, user interface elements, or technical diagrams where preserving parallel lines and relative sizes is essential.

Mathematically, an orthographic matrix manipulates the x, y, z coordinates but always sets the output w to 1.0. When the GPU performs the automatic Perspective Divide (dividing x, y, z by w), dividing by 1 changes nothing. The vertex's distance from the camera has no effect on its final screen size.

Perspective Projection

This projection mimics how human eyes and cameras work, making distant objects appear smaller. The magic of a perspective matrix is how it manipulates the w component.

Here is what a typical perspective matrix looks like conceptually:

┌                                 ┐
│ scale_x    0        0       0   │
│    0    scale_y     0       0   │
│    0       0        A       B   │  <- Remaps Z for the depth buffer
│    0       0       -1       0   │  <- The secret sauce!
└                                 ┘

(A and B are constants derived from the near and far clip planes.)

The key is that the fourth row is (0, 0, -1, 0). Let's see what happens when we multiply a view-space position (x, y, z, 1) by this matrix. When we calculate the final w component of the output, we get:

w_clip = (0 * x) + (0 * y) + (-1 * z) + (0 * 1) = -z

The w component of our clip-space position is now equal to the negative of its distance from the camera!

After your vertex shader runs, the GPU performs a non-optional step called the Perspective Divide. It automatically divides the x, y, and z components of the clip space position by its w component:

  • screen_x = x_clip / w_clip

  • screen_y = y_clip / w_clip

  • screen_z = z_clip / w_clip (This value is used for the depth buffer)

Since w_clip is the distance (-z), dividing by it makes objects that are farther away (larger z) have smaller final screen coordinates. This single, automatic division is how 3D perspective is achieved.
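A quick numeric check of the divide, with made-up clip-space values in Rust: two points sit 2 units right of center, one at view-space z = -5 and one at z = -50, so after the projection matrix sets w = -z their screen positions differ by a factor of ten:

```rust
fn main() {
    // Clip-space (x, y, z, w) after a perspective matrix has set w = -z
    let near_point = [2.0_f32, 0.0, 0.0, 5.0];  // was at view-space z = -5
    let far_point = [2.0_f32, 0.0, 0.0, 50.0];  // was at view-space z = -50

    // The GPU's perspective divide: screen_x = x_clip / w_clip
    println!("{}", near_point[0] / near_point[3]); // 0.4: large on screen
    println!("{}", far_point[0] / far_point[3]);   // 0.04: ten times smaller
}
```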

11. Matrix Inverse: Undoing Transformations

For every transformation matrix M that performs an operation (like rotating 45°), there often exists an inverse matrix, written as inverse(M), that does the exact opposite (rotates -45°).

If M transforms point A to point B, then inverse(M) transforms point B back to point A. Multiplying a matrix by its inverse results in the identity matrix: M * inverse(M) = Identity.

When do you need an inverse matrix?

  • Calculating the View Matrix: The view matrix transforms the world so it's relative to the camera. This is the same as transforming the camera by the inverse of its own world position and rotation. view_matrix = inverse(camera_world_matrix). This is why cameras seem to work "backwards."

  • World Space to Local Space: To find out how a world-space effect (like an explosion) affects a specific model, you need to bring the explosion's position into the model's local space. local_pos = inverse(model_matrix) * world_pos.

  • Normal Transformations: The inverse is a key part of the special matrix used to correctly transform surface normals.

Performance Note: Calculating a matrix inverse is an expensive operation. Avoid doing it in a shader if at all possible. It should be pre-computed on the CPU and passed as a uniform.
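The "undo" property is simplest to see for translation, where the inverse is just a translation by the negated offset. A Rust sketch (no general matrix-inverse routine needed for this special case):

```rust
// Apply a translation by t to point p
fn translate(p: [f32; 3], t: [f32; 3]) -> [f32; 3] {
    [p[0] + t[0], p[1] + t[1], p[2] + t[2]]
}

fn main() {
    let t = [5.0, 3.0, 0.0];
    let inv_t = [-t[0], -t[1], -t[2]]; // the inverse translation

    let a = [1.0, 2.0, 3.0];
    let b = translate(a, t); // M transforms point A to point B

    println!("{:?}", b);                   // [6.0, 5.0, 3.0]
    println!("{:?}", translate(b, inv_t)); // [1.0, 2.0, 3.0]: back to A
}
```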

12. The Determinant: Transformation Properties

The determinant of a matrix is a single number that reveals key properties of the transformation it represents. While you rarely calculate it yourself in a shader, its geometric meaning is very insightful.

Geometric Meaning

Volume Change

The absolute value, abs(det(M)), tells you by what factor the volume of any shape is scaled.

  • |det| = 1: Preserves volume (e.g., a pure rotation).

  • |det| > 1: Expands volume (e.g., scaling up).

  • |det| < 1: Shrinks volume (e.g., scaling down).

  • det = 0: Collapses volume to zero (e.g., squashing a 3D object into a 2D plane). A matrix with a determinant of 0 is called singular and cannot be inverted because information has been lost.

Orientation (Handedness)

The sign of the determinant tells you if the transformation has "flipped" or mirrored the coordinate system.

  • det > 0: Preserves orientation. A right-handed coordinate system stays right-handed.

  • det < 0: Flips orientation (a mirror image). A right-handed system becomes left-handed. This is important for correctly rendering mirrored objects.
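All three cases - volume change, orientation flip, and collapse - show up in the determinant of a simple 3x3 scale matrix. A Rust sketch using the standard cofactor expansion along the first row (row-major arrays here, since we never multiply by a vector):

```rust
// Determinant of a 3x3 matrix via cofactor expansion along row 0
fn det3(m: [[f32; 3]; 3]) -> f32 {
    m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
        - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
        + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0])
}

fn main() {
    // Scaling by (2, 3, 1): volume grows by 2 * 3 * 1 = 6
    let scale = [[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 1.0]];
    println!("{}", det3(scale)); // 6

    // Mirroring across X: volume unchanged, but orientation flips
    let mirror = [[-1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]];
    println!("{}", det3(mirror)); // -1

    // Flattening Z to zero: singular, so no inverse exists
    let flatten = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]];
    println!("{}", det3(flatten)); // 0
}
```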

13. Normal Transformation: Why Normals Are Special

A surface normal is a vector that points perpendicular to a surface, and it is essential for lighting calculations. When we transform a model, we must also transform its normals. However, there's a catch: you cannot simply multiply a normal by the model matrix.

The Problem

Multiplying a normal directly by the model matrix works for rotation and (after renormalizing) uniform scaling, but it breaks as soon as you apply non-uniform scaling. If you squash a sphere horizontally, scaling its normals by the same amount causes them to no longer be perpendicular to the surface.

The Solution: The Normal Matrix

To transform normals correctly under all conditions, you must multiply them by the normal matrix, which is the inverse transpose of the upper 3x3 part of the model matrix.

normal_matrix = transpose(inverse(mat3x3(model_matrix)))

The short version of why this works: a normal must stay perpendicular to the surface. Tangent vectors, which lie in the surface, transform with the model matrix, so normals must transform with its inverse transpose to keep their dot product with every tangent at zero. The rule to remember is simple: Always use the normal matrix to transform normals.

In practice, you never calculate this in a shader. The CPU is much better suited for it. Bevy calculates the final world-space normal for you and provides it through a helper function.

// This is the correct way to transform a normal in Bevy's WGSL shaders.
// This function internally uses the inverse transpose of the model matrix.
let world_normal = mesh_functions::mesh_normal_local_to_world(
    local_normal,
    instance_index // Needed for instanced rendering
);
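We can put numbers on this rule. In plain Rust (and restricting to a diagonal scale, whose inverse transpose is simply the reciprocal of each scale factor), the naive normal loses its perpendicularity to the surface while the corrected one keeps it exactly:

```rust
// Demonstrate why the naive normal transform fails under non-uniform scale.
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

fn main() {
    let scale = [2.0_f32, 0.5, 1.0]; // non-uniform scale, like squashing a sphere
    let tangent = [-1.0_f32, 1.0, 0.0]; // lies in the surface
    let normal = [1.0_f32, 1.0, 0.0]; // perpendicular to the tangent
    assert_eq!(dot(normal, tangent), 0.0);

    // Tangents transform with the model matrix itself.
    let t2 = [tangent[0] * scale[0], tangent[1] * scale[1], tangent[2] * scale[2]];

    // NAIVE: transform the normal by the scale too -> no longer perpendicular.
    let naive = [normal[0] * scale[0], normal[1] * scale[1], normal[2] * scale[2]];
    assert!(dot(naive, t2).abs() > 1.0); // broken: dot is -3.75, not 0

    // CORRECT: the inverse transpose of a diagonal scale is the reciprocal scale.
    let correct = [normal[0] / scale[0], normal[1] / scale[1], normal[2] / scale[2]];
    assert_eq!(dot(correct, t2), 0.0); // still exactly perpendicular

    println!("naive: {}, correct: {}", dot(naive, t2), dot(correct, t2));
}
```

This is exactly the error that Mode 0 of the demo below paints in orange.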

Part 4: Performance Considerations

Understanding the theory is one thing, but applying it efficiently is just as important. Shader math, especially matrix operations, happens millions or even billions of times per second. Writing efficient code is key.

14. Performance Hierarchy

Not all math operations are created equal. Some are significantly more "expensive" for the GPU to compute than others. A rough hierarchy from most to least expensive is:

  • Very Expensive (Avoid in shaders if possible):

    • Matrix Inversion inverse(m): A computationally intensive operation that is significantly slower than multiplication.

    • Determinant determinant(m): Also complex, though generally faster than a full inversion.

  • Moderately Expensive (Use in Vertex Shader, Avoid in Fragment Shader):

    • Matrix-Matrix Multiplication: Fast in isolation, but multiplying several matrices per pixel in a fragment shader is extremely costly because the fragment shader runs for every single pixel an object covers. The vertex shader, by contrast, only runs once per vertex.
  • Cheap (Fast Everywhere):

    • Vector-Matrix Multiplication: A core GPU capability, highly optimized in hardware.

    • Vector Operations (dot, cross, normalize): These are trivial for the GPU and are considered very cheap.

    • Basic Arithmetic (+, -, *, /): The fastest operations available.

15. Optimization Tips

  1. Pre-compute on the CPU: This is the golden rule. Any matrix that is constant for an entire object's draw call (model, view, projection, mvp, normal_matrix) should be calculated once per frame on the CPU and sent to the GPU as a uniform. Bevy and other game engines do this for you automatically. Your job is to leverage the results they provide.

  2. Do Math in the Vertex Shader: Always perform transformations and normal calculations in the vertex shader. The results can then be passed to the fragment shader as interpolated values. This is fundamentally more efficient than re-calculating the same values for every pixel.

  3. Use Special Cases: For a rotation-only matrix, its inverse is its transpose: inverse(rot) = transpose(rot). A transpose is computationally trivial - it just reorders elements. If you know you are only dealing with rotation, this is a huge optimization over a general inverse().

  4. Normalize Only When Needed: If your transformation matrix is orthonormal (i.e., it contains only rotation - no scale at all), the transformed normal keeps its length of 1, so you don't need to normalize it again. Any scaling, uniform or not, changes the normal's length, so you must normalize the result. When in doubt, normalizing is the safe option.
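Tip 3 is easy to confirm numerically. This plain-Rust sketch (hand-rolled helpers, illustrative names) builds a Z-axis rotation and checks that its transpose undoes it exactly:

```rust
// For a pure rotation R, transpose(R) * R is the identity matrix.
type Mat3 = [[f32; 3]; 3];

fn transpose(m: Mat3) -> Mat3 {
    let mut out = [[0.0; 3]; 3];
    for r in 0..3 {
        for c in 0..3 {
            out[c][r] = m[r][c];
        }
    }
    out
}

fn mul(a: Mat3, b: Mat3) -> Mat3 {
    let mut out = [[0.0; 3]; 3];
    for r in 0..3 {
        for c in 0..3 {
            for k in 0..3 {
                out[r][c] += a[r][k] * b[k][c];
            }
        }
    }
    out
}

fn main() {
    // cos/sin chosen from a 3-4-5 triangle so c*c + s*s is exactly 1.
    let (c, s) = (0.6_f32, 0.8_f32);
    let rot_z = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]];
    let id = mul(transpose(rot_z), rot_z);
    for r in 0..3 {
        for col in 0..3 {
            let expected = if r == col { 1.0 } else { 0.0 };
            assert!((id[r][col] - expected).abs() < 1e-6);
        }
    }
    println!("transpose(R) * R = identity");
}
```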


Part 5: Complete Example - Transformation Visualizer

Let's bring these abstract mathematical concepts to life. Theory is essential, but seeing the direct visual consequences of these rules is what truly builds intuition. This interactive demo is a visual playground for shader math, designed to prove why the principles we've discussed are not just academic, but critical for creating correct and compelling graphics.

Our Goal

We will create a single, powerful material that can switch between several visualization modes. Each mode uses the shader to color a sphere based on a different mathematical concept, allowing us to see the effects of the dot product, the determinant, the normal matrix, and transformation order in real-time.

What This Project Demonstrates

  • Normal Matrix vs. Naive Transform: Visually proves why you must use the correct normal matrix for lighting when non-uniform scaling is applied by highlighting the incorrect areas in bright orange.

  • Dot Product as Lighting: Shows a direct, real-time visualization of dot(normal, light_dir) as a light source orbits the object, forming the basis of all diffuse lighting.

  • Determinant as Volume & Orientation: Colors the object based on whether its volume is being compressed or expanded and adds a visual "glitch" effect when its orientation is flipped (mirrored).

  • The "Orbit vs. Rotate" Problem: Clearly demonstrates why translation * rotation is the correct order by showing the classic orbiting mistake that happens when the order is reversed.

The Shader (assets/shaders/d01_08_transform_demo.wgsl)

This is where all the logic lives. The shader uses a demo_mode uniform to control its behavior.

The vertex shader is responsible for the core mathematical calculations for each mode. It calculates different values and passes them to the fragment shader via the VertexOutput struct:

  • In Mode 0, it calculates two versions of the normal: the naive_normal (incorrectly transformed) and the world_normal (correctly transformed) so they can be compared.

  • In Mode 2, it calculates the determinant of the final transformation matrix.

  • In Mode 3, it deliberately applies transformations in the wrong order - translate first, then rotate - to create an orbiting effect.

The fragment shader then uses this data to color the pixels in a way that visualizes the underlying math. It highlights the error between the two normals in Mode 0, uses the determinant to select colors and effects in Mode 2, and so on.

#import bevy_pbr::mesh_functions
#import bevy_pbr::view_transformations::position_world_to_clip

struct TransformDemoMaterial {
    demo_mode: u32,
    time: f32,
    custom_scale: vec3<f32>,
}

@group(2) @binding(0)
var<uniform> material: TransformDemoMaterial;

struct VertexInput {
    @builtin(instance_index) instance_index: u32,
    @location(0) position: vec3<f32>,
    @location(1) normal: vec3<f32>,
}

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>,
    @location(0) world_normal: vec3<f32>,
    @location(1) naive_normal: vec3<f32>,
    @location(2) world_position: vec3<f32>,
    @location(3) determinant: f32,
}

// Create a custom scaling matrix
fn make_scale_matrix(scale: vec3<f32>) -> mat3x3<f32> {
    return mat3x3<f32>(
        scale.x, 0.0,     0.0,
        0.0,     scale.y, 0.0,
        0.0,     0.0,     scale.z
    );
}

// Calculate the determinant of a 3x3 matrix by cofactor expansion.
// (WGSL also provides a built-in determinant(); it is written out here
// so the math stays visible.)
fn determinant_3x3(m: mat3x3<f32>) -> f32 {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

@vertex
fn vertex(in: VertexInput) -> VertexOutput {
    var out: VertexOutput;

    let model = mesh_functions::get_world_from_local(in.instance_index);
    let model_3x3 = mat3x3<f32>(model[0].xyz, model[1].xyz, model[2].xyz);

    var position = in.position;
    var normal = in.normal;

    // Mode 3: Transform order demonstration - orbit vs local rotation
    if material.demo_mode == 3u {
        // Create a Y-axis rotation matrix (WGSL constructors take columns, not rows)
        let angle = material.time;
        let c = cos(angle);
        let s = sin(angle);
        let rotation_y = mat3x3<f32>(
            c,   0.0, -s,
            0.0, 1.0, 0.0,
            s,   0.0, c
        );

        // WRONG ORDER: rotation * translation makes it orbit!
        // First translate away from origin
        position = position + vec3<f32>(3.0, 0.0, 0.0);
        // Then rotate - causes orbiting behavior
        position = rotation_y * position;

        normal = rotation_y * normal;
    } else {
        // For other modes, apply custom scaling
        let scale_mat = make_scale_matrix(material.custom_scale);
        position = scale_mat * position;

        // Calculate both naive and correct normals for Mode 0
        let naive_transformed = model_3x3 * (scale_mat * normal);
        out.naive_normal = normalize(naive_transformed);

        // Correct normal transformation using inverse transpose
        let inv_scale = make_scale_matrix(vec3<f32>(
            1.0 / material.custom_scale.x,
            1.0 / material.custom_scale.y,
            1.0 / material.custom_scale.z
        ));
        let normal_matrix = inv_scale;  // For pure scale, inverse is reciprocal
        normal = normal_matrix * normal;

        // Calculate determinant
        let combined = model_3x3 * make_scale_matrix(material.custom_scale);
        out.determinant = determinant_3x3(combined);
    }

    // Transform to world space
    let world_position = mesh_functions::mesh_position_local_to_world(
        model,
        vec4<f32>(position, 1.0)
    );

    out.clip_position = position_world_to_clip(world_position.xyz);
    out.world_normal = normalize(model_3x3 * normal);
    out.world_position = world_position.xyz;

    return out;
}

// Simple noise function for static effect
fn hash(p: vec2<f32>) -> f32 {
    let p3 = fract(vec3<f32>(p.x, p.y, p.x) * 0.13);
    let p3_dot = dot(p3, vec3<f32>(p3.y + 3.33, p3.z + 3.33, p3.x + 3.33));
    return fract((p3.x + p3.y) * p3_dot);
}

@fragment
fn fragment(
    in: VertexOutput,
    @builtin(front_facing) is_front: bool
) -> @location(0) vec4<f32> {
    let normal = normalize(in.world_normal);
    var color = vec3<f32>(0.0);

    // Mode 0: Normal Matrix - Correct vs. Naive
    if material.demo_mode == 0u {
        // Use correct normal for base lighting
        let light_dir = normalize(vec3<f32>(1.0, 1.0, 1.0));
        let diffuse = max(0.3, dot(normal, light_dir));
        color = vec3<f32>(0.7, 0.8, 0.9) * diffuse;

        // Calculate error - where naive normal differs from correct normal
        let error = length(in.naive_normal - in.world_normal);

        // Highlight errors in bright red/orange
        if error > 0.1 {
            let error_intensity = clamp(error * 3.0, 0.0, 1.0);
            color = mix(color, vec3<f32>(1.0, 0.3, 0.0), error_intensity);
        }
    }
    // Mode 1: Dot Product - Diffuse Lighting
    else if material.demo_mode == 1u {
        // Classic Lambertian diffuse lighting
        let light_dir = normalize(vec3<f32>(
            cos(material.time * 0.5),
            0.7,
            sin(material.time * 0.5)
        ));

        // The dot product in action!
        let n_dot_l = dot(normal, light_dir);
        let diffuse = max(0.0, n_dot_l);

        // Color the surface based purely on the dot product
        // Blue base color with lighting applied
        color = vec3<f32>(0.3, 0.5, 1.0) * (0.2 + diffuse * 0.8);

        // Add a subtle indicator showing the dot product value
        if diffuse > 0.9 {
            // Very bright areas get a highlight
            color = color + vec3<f32>(0.3, 0.3, 0.0);
        }
    }
    // Mode 2: Determinant - Volume & Orientation
    else if material.demo_mode == 2u {
        let det = in.determinant;
        let abs_det = abs(det);

        // Color based on volume change
        if abs_det > 1.05 {
            // Expanded - blue
            let t = clamp((abs_det - 1.0) / 2.0, 0.0, 1.0);
            color = mix(vec3(0.2, 1.0, 0.2), vec3(0.2, 0.5, 1.0), t);
        } else if abs_det < 0.95 {
            // Compressed - red
            let t = clamp((1.0 - abs_det) * 2.0, 0.0, 1.0);
            color = mix(vec3(0.2, 1.0, 0.2), vec3(1.0, 0.3, 0.2), t);
        } else {
            // Near 1.0 - green (neutral)
            color = vec3(0.2, 1.0, 0.2);
        }

        // Add static/noise effect when determinant is negative (orientation flipped)
        if det < 0.0 {
            let noise_coord = in.world_position.xy * 50.0 + material.time * 10.0;
            let noise = hash(noise_coord);

            // Strong static effect
            if noise > 0.5 {
                color = mix(color, vec3(1.0, 0.0, 1.0), 0.7);
            }
        }

        // Add basic lighting
        let light_dir = normalize(vec3<f32>(1.0, 1.0, 1.0));
        let brightness = max(0.4, dot(normal, light_dir));
        color = color * brightness;
    }
    // Mode 3: Transform Order - Orbit vs Local Rotation
    else if material.demo_mode == 3u {
        // Color based on whether it's orbiting (which is wrong!)
        // The sphere should be red because it's using wrong transform order
        color = vec3<f32>(1.0, 0.3, 0.2);

        // Add spinning indicator - vertical stripes that spin with the sphere
        let angle = atan2(in.world_position.z, in.world_position.x);
        let stripe = step(0.5, fract(angle * 3.0 / 3.14159));
        color = mix(color, vec3<f32>(0.8, 0.1, 0.1), stripe * 0.3);

        // Add horizontal stripe overlay based on world-space height
        let text_grid = step(0.8, fract(in.world_position.y * 10.0));
        color = mix(color, vec3<f32>(1.0, 1.0, 0.0), text_grid * 0.3);

        // Basic lighting
        let light_dir = normalize(vec3<f32>(1.0, 1.0, 1.0));
        let brightness = max(0.4, dot(normal, light_dir));
        color = color * brightness;
    }

    return vec4<f32>(color, 1.0);
}

The Rust Material (src/materials/d01_08_transform_demo.rs)

The Rust Material definition is straightforward. It contains fields for demo_mode, time, and custom_scale that directly map to the uniforms in our shader. We also override the specialize function to disable backface culling. This is essential for the determinant demo (Mode 2), as it allows us to see the inside of the sphere when its orientation is flipped by a negative scale, which would otherwise be culled (made invisible).

use bevy::pbr::MaterialPipelineKey;
use bevy::prelude::*;
use bevy::render::mesh::MeshVertexBufferLayoutRef;
use bevy::render::render_resource::{AsBindGroup, ShaderRef};
use bevy::render::render_resource::{RenderPipelineDescriptor, SpecializedMeshPipelineError};

#[derive(Asset, TypePath, AsBindGroup, Debug, Clone)]
pub struct TransformDemoMaterial {
    #[uniform(0)]
    pub demo_mode: u32,
    #[uniform(0)]
    pub time: f32,
    #[uniform(0)]
    pub custom_scale: Vec3,
}

impl Material for TransformDemoMaterial {
    fn vertex_shader() -> ShaderRef {
        "shaders/d01_08_transform_demo.wgsl".into()
    }

    fn fragment_shader() -> ShaderRef {
        "shaders/d01_08_transform_demo.wgsl".into()
    }

    // Disable backface culling for mode 2 to see orientation flip
    fn specialize(
        _pipeline: &bevy::pbr::MaterialPipeline<Self>,
        descriptor: &mut RenderPipelineDescriptor,
        _layout: &MeshVertexBufferLayoutRef,
        _key: MaterialPipelineKey<Self>,
    ) -> Result<(), SpecializedMeshPipelineError> {
        descriptor.primitive.cull_mode = None;
        Ok(())
    }
}

Don't forget to add it to src/materials/mod.rs:

// ... other materials
pub mod d01_08_transform_demo;

The Demo Module (src/demos/d01_08_transform_demo.rs)

The demo module sets up our Bevy scene: a single sphere with our custom material, a camera, and a light. It contains systems to handle user input (handle_input) which allows mode switching via number keys and smooth scaling via arrow keys, and a dedicated update_ui system to visualize the current state.

use crate::materials::d01_08_transform_demo::TransformDemoMaterial;
use bevy::prelude::*;

pub fn run() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(MaterialPlugin::<TransformDemoMaterial>::default())
        .add_systems(Startup, setup)
        .add_systems(
            Update,
            (rotate_camera, update_time, handle_input, update_ui),
        )
        .run();
}

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<TransformDemoMaterial>>,
) {
    commands.spawn((
        Mesh3d(meshes.add(Sphere::new(1.0).mesh().uv(32, 18))),
        MeshMaterial3d(materials.add(TransformDemoMaterial {
            demo_mode: 0,
            time: 0.0,
            custom_scale: Vec3::new(1.0, 1.0, 1.0),
        })),
    ));

    commands.spawn((
        PointLight {
            shadows_enabled: true,
            intensity: 2000.0,
            ..default()
        },
        Transform::from_xyz(4.0, 8.0, 4.0),
    ));

    commands.spawn((
        Camera3d::default(),
        Transform::from_xyz(-2.5, 4.5, 9.0).looking_at(Vec3::ZERO, Vec3::Y),
    ));

    commands.spawn((
        Text::new(""),
        Node {
            position_type: PositionType::Absolute,
            top: Val::Px(10.0),
            left: Val::Px(10.0),
            ..default()
        },
    ));
}

fn rotate_camera(time: Res<Time>, mut camera_query: Query<&mut Transform, With<Camera3d>>) {
    for mut transform in camera_query.iter_mut() {
        let radius = 9.0;
        let angle = time.elapsed_secs() * 0.3;
        transform.translation.x = angle.cos() * radius;
        transform.translation.z = angle.sin() * radius;
        transform.look_at(Vec3::ZERO, Vec3::Y);
    }
}

fn update_time(time: Res<Time>, mut materials: ResMut<Assets<TransformDemoMaterial>>) {
    for (_, material) in materials.iter_mut() {
        material.time = time.elapsed_secs();
    }
}

fn handle_input(
    keyboard: Res<ButtonInput<KeyCode>>,
    time: Res<Time>,
    mut materials: ResMut<Assets<TransformDemoMaterial>>,
) {
    let delta = time.delta_secs();

    for (_, material) in materials.iter_mut() {
        if keyboard.just_pressed(KeyCode::Digit1) {
            // Normal Matrix
            material.demo_mode = 0;
        }
        if keyboard.just_pressed(KeyCode::Digit2) {
            // Dot Product
            material.demo_mode = 1;
        }
        if keyboard.just_pressed(KeyCode::Digit3) {
            // Determinant
            material.demo_mode = 2;
        }
        if keyboard.just_pressed(KeyCode::Digit4) {
            // Transform Order Matters
            material.demo_mode = 3;
            // Reset scale when entering mode 3
            material.custom_scale = Vec3::new(1.0, 1.0, 1.0);
        }

        // Don't allow scale adjustment in mode 3 (transform order demo)
        if material.demo_mode == 3 {
            continue;
        }

        if keyboard.pressed(KeyCode::ArrowUp) {
            material.custom_scale.y = (material.custom_scale.y + delta).min(3.0);
        }
        if keyboard.pressed(KeyCode::ArrowDown) {
            material.custom_scale.y = (material.custom_scale.y - delta).max(0.2);
        }
    }
}

fn update_ui(materials: Res<Assets<TransformDemoMaterial>>, mut text_query: Query<&mut Text>) {
    if !materials.is_changed() {
        return;
    }

    if let Some((_, material)) = materials.iter().next() {
        for mut text in text_query.iter_mut() {
            let mode_name = match material.demo_mode {
                0 => {
                    "1 - Normal Matrix (Correct vs. Naive)\n  Orange highlights show where naive normal * model fails"
                }
                1 => {
                    "2 - Dot Product = Diffuse Lighting\n  Watch brightness change as light rotates around sphere"
                }
                2 => {
                    "3 - Determinant = Volume + Orientation\n  Green=neutral, Blue=expanded, Red=compressed, MAGENTA STATIC=mirrored!"
                }
                3 => {
                    "4 - Transform Order Matters!\n  WRONG ORDER (rotation * translation) makes sphere ORBIT instead of spin in place"
                }
                _ => "Unknown",
            };

            **text = format!(
                "[1-4]: Change Mode\n\
                UP/DOWN: Adjust Y-scale\n\
                Mode: {}\n\
                Scale: {:.1}, {:.1}, {:.1}",
                mode_name,
                material.custom_scale.x,
                material.custom_scale.y,
                material.custom_scale.z
            );
        }
    }
}

Don't forget to add it to src/demos/mod.rs:

// ... other demos
pub mod d01_08_transform_demo;

And register it in src/main.rs:

Demo {
    number: "1.8",
    title: "Essential Shader Math Concepts",
    run: demos::d01_08_transform_demo::run,
},

Running the Demo

When you run the application, you will see a sphere and UI text explaining the controls and the current mode.

Controls

  • 1 - 4: Switch directly to visualization mode 1, 2, 3, or 4.

  • Up/Down Arrows: Increase/decrease the Y-axis scale (not available in Mode 3).

What You're Seeing

  • Mode 0 - Normal Matrix: Press the UP/DOWN arrows to stretch the sphere. Orange highlights will appear, showing where the naive model * normal calculation is wrong. The underlying lighting, which uses the correct normal matrix, remains perfect.

  • Mode 1 - Dot Product: The sphere's brightness directly corresponds to the dot product between the surface normal and the orbiting light's direction. Surfaces facing the light are bright; those angled away are dark. This is the raw output of a diffuse lighting model.

  • Mode 2 - Determinant: Green means neutral volume, blue means expanded (scale Y > 1), and red means compressed (scale Y < 1). Press DOWN until the sphere inverts: you'll see magenta static, confirming that a negative determinant (det < 0) has flipped the object's orientation.

  • Mode 3 - Transform Order: The sphere ORBITS the world's center instead of spinning in place, because the vertex shader first translates the vertices away from the origin and then rotates them.

Key Takeaways

You have just absorbed the mathematical foundation of all 3D graphics. This was the most theory-heavy part of our journey, and you've made it through. Let's solidify the most critical concepts:

  • Vectors are Fundamental: They represent both directions (for lighting) and positions. The dot product measures alignment, and the cross product finds perpendiculars.

  • Matrices are Transformation Recipes: A 4x4 matrix describes a new coordinate system. Multiplying a vector by it re-maps the vector's coordinates onto that new system.

  • The W Component is Magic: It allows matrices to distinguish between positions (w=1) and directions (w=0), and it is the key that makes perspective projection work via the Perspective Divide.

  • ORDER MATTERS! Matrix multiplication is applied right-to-left. translation * rotation * scale is the standard because it scales an object in place, rotates it on its own axis, and then moves the final result.

  • The MVP Chain is the Journey: Projection * View * Model takes a vertex from its local model space all the way to the screen.

  • Normals are Special: To transform normals correctly, especially under non-uniform scaling, you must use the normal matrix (transpose(inverse(model_3x3))). Bevy's helper functions handle this for you.
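The w-component rule from the takeaways is worth seeing in raw numbers. This plain-Rust sketch (a hand-rolled 4x4 multiply, written row-major for readability) shows a translation matrix moving a position but leaving a direction untouched:

```rust
// A translation matrix affects positions (w=1) but not directions (w=0),
// because the translation column is multiplied by w.
fn transform(m: [[f32; 4]; 4], v: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0; 4];
    for r in 0..4 {
        for c in 0..4 {
            out[r] += m[r][c] * v[c];
        }
    }
    out
}

fn main() {
    // Translation by (5, 0, 0).
    let translate = [
        [1.0, 0.0, 0.0, 5.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ];

    let position = [1.0, 2.0, 3.0, 1.0]; // w = 1: a point in space
    let direction = [1.0, 2.0, 3.0, 0.0]; // w = 0: e.g. a normal or light dir

    assert_eq!(transform(translate, position), [6.0, 2.0, 3.0, 1.0]); // moved
    assert_eq!(transform(translate, direction), [1.0, 2.0, 3.0, 0.0]); // unchanged
    println!("positions translate, directions don't");
}
```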

What's Next?

Congratulations! You have successfully completed Phase 1: Foundations. You now possess the core knowledge of the graphics pipeline, WGSL syntax, data layout, and the essential mathematics required to write powerful and correct shaders.

Everything we have done in Phase 1 has been about one thing: taking geometry that already exists and drawing it correctly. We've learned the rules of the road - how to respect the pipeline, how to format our data, and how to apply the standard transformations.

Now that we know the rules, it's time to start bending them.

In Phase 2: Vertex Shaders, we will shift our focus dramatically. Instead of just passing the vertex position through the MVP chain, we will start to actively manipulate it. We will move beyond just drawing static models and begin to create dynamic, procedural, and animated worlds. We will learn how to:

  • Deform meshes with mathematical functions to create waves, pulses, and other organic effects.

  • Use noise to generate complex vertex displacement, like fluttering flags.

  • Leverage instancing to draw thousands of unique objects with incredible performance.

You have built a solid foundation. Now, let's start building on top of it.

Next up: 2.1 - Vertex Transformation Deep Dive


Quick Reference

Vector Operations

  • dot(a, b): Returns a scalar. Measures alignment. For normalized vectors, it returns the cosine of the angle between them (1 for parallel, 0 for perpendicular, -1 for opposite).

  • cross(a, b): Returns a vec3 that is perpendicular to both a and b, following the right-hand rule.

  • normalize(v): Returns a vector with the same direction as v but with a length of 1 (a unit vector).

  • length(v): Returns the scalar magnitude (length) of a vector.
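As a quick sanity check, these identities can be verified with a few lines of plain Rust (hand-rolled helpers for illustration):

```rust
// Check the dot and cross product facts from the reference above.
fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

fn main() {
    let x = [1.0_f32, 0.0, 0.0];
    let y = [0.0_f32, 1.0, 0.0];
    assert_eq!(dot(x, x), 1.0); // parallel unit vectors -> 1
    assert_eq!(dot(x, y), 0.0); // perpendicular -> 0
    assert_eq!(dot(x, [-1.0, 0.0, 0.0]), -1.0); // opposite -> -1
    assert_eq!(cross(x, y), [0.0, 0.0, 1.0]); // right-hand rule: x cross y = z
    println!("dot/cross identities hold");
}
```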

Matrix Operations

  • Order: C = A * B applies transformation B, then A. Transformations are read from right to left.

  • MVP Chain: MVP = Projection * View * Model

  • Model Matrix: Model = Translation * Rotation * Scale
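The right-to-left rule is easy to demonstrate. This plain-Rust sketch uses an exact 90-degree rotation in 2D so the arithmetic stays whole:

```rust
// Translation * Rotation spins a point in place and then moves it;
// Rotation * Translation moves first and then orbits the world origin.
fn rotate90(p: [f32; 2]) -> [f32; 2] {
    [-p[1], p[0]] // 90 degrees counter-clockwise
}

fn translate(p: [f32; 2]) -> [f32; 2] {
    [p[0] + 3.0, p[1]] // move 3 units along +X
}

fn main() {
    let p = [1.0_f32, 0.0];

    // Translation * Rotation (read right to left): rotate locally, then move.
    let correct = translate(rotate90(p));
    assert_eq!(correct, [3.0, 1.0]); // stays near the translated position

    // Rotation * Translation: move first, then rotate -> orbits the origin.
    let orbit = rotate90(translate(p));
    assert_eq!(orbit, [0.0, 4.0]); // swung far away around the world origin

    println!("correct: {:?}, orbit: {:?}", correct, orbit);
}
```

This is the same "orbit vs. rotate" mistake that Mode 3 of the demo shows in 3D.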

Key Formulas & Concepts

  • W Component: w=1.0 for positions (affected by translation), w=0.0 for directions (ignores translation).

  • Correct Normal Transform: Use the inverse transpose of the model matrix. In Bevy WGSL, this is handled by mesh_functions::mesh_normal_local_to_world().

  • Determinant: A scalar value calculated from a matrix.

    • Invertibility: A matrix is invertible if and only if det != 0.

    • Volume: abs(det) is the factor by which volume is scaled.

    • Orientation: sign(det) indicates if orientation is preserved (+) or flipped (-).