2.5 - Advanced Vertex Displacement

What We're Learning
In the previous article, we learned the fundamentals of vertex deformation. We saw how a simple sine wave can bring a mesh to life, creating predictable pulses and oscillations. While powerful, these basic techniques are like a metronome: they produce a steady, simple, and ultimately artificial rhythm.
This article is about moving from the metronome to the symphony.
Advanced vertex displacement is the art of layering multiple simple motions to create one complex, organic, and believable result. Instead of a single, predictable wave, we will orchestrate an entire ensemble of effects: broad, rolling waves for the main movement, high-frequency noise for subtle texture, and artist-driven maps for deliberate control. This layering is the key to transforming flat, rigid geometry into surfaces that feel dynamic and alive, as if responding to invisible forces like wind, water, or energy.
By the end of this article, you will understand how to:
Layer Multi-Axis Waves: Combine waves traveling in different directions to create complex and natural-looking interference patterns.
Compose with Multiple Frequencies: Use a technique called "octaves" to add detail at different scales, from large, rolling movements to fine, subtle ripples.
Leverage Noise Functions: Use texture-based noise to break the repetition of periodic waves, introducing organic, non-repeating displacement.
Create Advanced Deformations: Implement sophisticated twist and spiral effects with smooth falloffs for more controlled and artistic results.
Use Displacement Maps: Allow artists to directly control deformation by painting displacement details in a texture.
Preserve Mesh Integrity: Implement techniques to prevent visual glitches like gaps or self-intersections during extreme deformation.
Optimize Complex Vertex Shaders: Learn crucial performance strategies to ensure your complex effects run smoothly.
Build a Complete Flag Simulation: Apply all these concepts to create a physically-inspired flag that waves, billows, and flutters in a simulated wind.
Building on Our Foundation
Before we orchestrate our symphony, let's quickly refresh our memory of the fundamental instruments we learned to play in the last article. For a detailed breakdown of these techniques, please review 2.4 - Simple Vertex Deformations.
Our core strategy was to start with a vertex's original position and add a calculated offset. We explored three primary tools for this:
We used sine waves to create rhythmic, oscillating motion - the foundation for any waving or rippling effect. However, a single sine wave is too perfect; its endless repetition and uniformity can look artificial.
We implemented uniform scaling to make objects breathe or pulse. But this motion is monolithic, affecting the entire object at once, unlike organic growth which tends to bulge and stretch locally.
And we used basic twists to rotate geometry around an axis. But this twist was linear and constant, lacking the smooth falloffs and easing found in natural motion.
Each of these tools is powerful but suffers from the same core problem: they are uniform, predictable, and isolated. The key to advanced displacement is to shatter that uniformity. In this article, we will learn to layer these effects, vary them across the surface of a mesh, and introduce controlled randomness to transform these simple mechanical movements into beautifully complex, organic motion.
Multi-Axis Wave Effects
Real-world motion is rarely tidy. Wind doesn't just push a flag up and down; it creates complex, flowing patterns across its surface. A pebble dropped in a pond doesn't create a single wave but a series of expanding ripples that interact with each other. To simulate this beautiful complexity, we must stop thinking in single dimensions and start orchestrating waves that travel in multiple directions at once.
The Magic of Wave Interference
The secret to creating complexity from simplicity lies in a principle called wave interference. When two or more waves meet, they combine by simple addition.
If their peaks align, they amplify each other, creating a larger peak (constructive interference).
If a peak meets a trough, they cancel each other out, creating a flatter surface (destructive interference).

This simple act of adding waves together is the source of the rich, organic, and non-uniform patterns we see in nature. By simulating this in our shader, we can transform a repetitive grid of sine waves into something that feels far more natural.
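Because interference is just addition, it is easy to verify on the CPU. Here is a standalone Rust sketch (not shader code) showing constructive and destructive interference between two equal-amplitude sine waves:

```rust
// Two waves with the same frequency, sampled at a point x with a phase offset.
fn wave(x: f32, phase: f32) -> f32 {
    (x + phase).sin()
}

fn main() {
    let x = std::f32::consts::FRAC_PI_2; // sin() peaks at pi/2

    // Constructive: both waves peak together, so the amplitudes add.
    let in_phase = wave(x, 0.0) + wave(x, 0.0);
    println!("in phase: {in_phase}"); // ~2.0, twice the single-wave peak

    // Destructive: a half-period phase shift turns the peak into a trough.
    let out_of_phase = wave(x, 0.0) + wave(x, std::f32::consts::PI);
    println!("out of phase: {out_of_phase}"); // ~0.0, the waves cancel
}
```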
Implementing Multi-Directional Waves
Let's create this effect in code. We'll write a function that calculates three separate waves traveling in different directions and simply adds their results together.
fn calculate_multi_wave_displacement(
    position: vec3<f32>,
    time: f32,
) -> vec3<f32> {
    var displacement = vec3<f32>(0.0);

    // Wave 1: A primary wave moving along the X-axis.
    // This is our base movement.
    let wave1 = sin(position.x * 3.0 - time * 2.0) * 0.15;
    displacement.y += wave1;

    // Wave 2: A cross-wave moving along the Z-axis.
    // It has a different frequency and speed, creating interference.
    let wave2 = sin(position.z * 2.5 - time * 1.5) * 0.12;
    displacement.y += wave2;

    // Wave 3: A diagonal wave.
    // This breaks up the grid-like pattern from the first two waves,
    // adding a more chaotic, natural ripple.
    let diagonal_input = position.x * 0.707 + position.z * 0.707;
    let wave3 = sin(diagonal_input * 4.0 - time * 2.5) * 0.08;
    displacement.y += wave3;

    return displacement;
}
Key Insight: The magic is in the variation. Each wave has a slightly different frequency, speed, and amplitude. This ensures they are constantly moving in and out of phase with each other, preventing the final pattern from ever looking too regular or predictable.
Propagating Waves from a Center Point
Sometimes we want waves to radiate outwards from a specific point, like the ripples from a raindrop hitting a puddle. To do this, we base our sine wave calculation not on the x or z coordinate, but on the vertex's distance from a center point.
fn calculate_radial_waves(
    position: vec3<f32>,
    center: vec3<f32>,
    time: f32,
) -> vec3<f32> {
    // We only care about the distance in the horizontal plane (XZ).
    let distance = length(position.xz - center.xz);

    // Combine multiple wave frequencies based on distance.
    let wave1 = sin(distance * 5.0 - time * 3.0) * 0.1;
    let wave2 = sin(distance * 3.0 - time * 2.0) * 0.15;
    let wave3 = sin(distance * 8.0 - time * 4.0) * 0.05;
    let total_wave = wave1 + wave2 + wave3;

    // Displace vertically (along the Y-axis).
    return vec3<f32>(0.0, total_wave, 0.0);
}
This creates a beautiful ripple effect that spreads outward, with the multiple layered waves giving it a complex and natural-looking surface.
Attenuated Waves with Falloff
In the real world, waves lose energy as they travel. A ripple is strongest at its center and fades to nothing over distance. We can simulate this with a falloff factor that multiplies our wave's amplitude.
fn calculate_attenuated_wave(
    position: vec3<f32>,
    center: vec3<f32>,
    time: f32,
    max_distance: f32,
) -> vec3<f32> {
    let distance = length(position.xz - center.xz);
    let wave = sin(distance * 5.0 - time * 3.0) * 0.2;

    // Calculate a falloff factor that goes from 1.0 at the center
    // to 0.0 at the max_distance.
    let falloff = 1.0 - smoothstep(0.0, max_distance, distance);

    return vec3<f32>(0.0, wave * falloff, 0.0);
}
Choosing the right falloff function is an artistic decision that defines the character of your effect:
Smoothstep (1.0 - smoothstep(0.0, max_distance, distance)): The most common choice. Creates a gentle, elegant fade-out with a smooth start and end.
Linear (1.0 - clamp(distance / max_distance, 0.0, 1.0)): A simple, constant-rate fade. Can sometimes look unnatural because it starts and stops abruptly.
Exponential (exp(-distance * falloff_rate)): A very natural, gradual fade that never quite reaches zero.
Inverse Square (1.0 / (1.0 + distance * distance * falloff_rate)): Mimics how physical forces like light and gravity diminish. The falloff is very rapid at first and then slows down.
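These curves are easy to compare numerically. Here is a CPU-side Rust sketch; the smoothstep helper mirrors WGSL's built-in, and the max_distance and falloff_rate values are arbitrary examples:

```rust
// Compare the four falloff curves at a few sample distances.
fn smoothstep(edge0: f32, edge1: f32, x: f32) -> f32 {
    // Same Hermite interpolation as the WGSL built-in.
    let t = ((x - edge0) / (edge1 - edge0)).clamp(0.0, 1.0);
    t * t * (3.0 - 2.0 * t)
}

fn main() {
    let max_distance = 10.0_f32; // example value
    let falloff_rate = 0.5_f32;  // example value
    for d in [0.0_f32, 5.0, 10.0] {
        let smooth = 1.0 - smoothstep(0.0, max_distance, d);
        let linear = 1.0 - (d / max_distance).clamp(0.0, 1.0);
        let exponential = (-d * falloff_rate).exp();
        let inv_square = 1.0 / (1.0 + d * d * falloff_rate);
        // All four start at 1.0 at the center; only the exponential
        // and inverse-square curves never quite reach zero.
        println!("d={d}: smooth={smooth:.3} linear={linear:.3} exp={exponential:.3} inv_sq={inv_square:.3}");
    }
}
```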
Displacing Along the Surface Normal
So far, we have only displaced vertices along the world's Y-axis. This works for flat surfaces like planes, but what about a sphere? Displacing a sphere's vertices along Y would just stretch it vertically into an oblong shape.
To create a more natural deformation on any shape, we should displace each vertex along its own surface normal - the vector that points directly "out" from the surface at that vertex.
// This function takes the mesh's normal as an input.
fn calculate_normal_displacement(
    position: vec3<f32>,
    normal: vec3<f32>, // The per-vertex normal vector
    time: f32,
) -> vec3<f32> {
    // Calculate a wave value based on the vertex's world position.
    let wave_input = position.x * 3.0 + position.z * 2.0;
    let wave_amount = sin(wave_input - time * 2.0) * 0.2;

    // Displace along the normal's direction instead of just "up".
    return normal * wave_amount;
}
This is a crucial technique. It ensures that the displacement respects the underlying curvature of the mesh, creating a much more believable effect. For a sphere, displacing along the normal doesn't just stretch it - it makes it inflate and deflate, creating a natural "breathing" or pulsing effect that works on any 3D model, not just flat planes.

Combining Multiple Sine Waves for Complexity
The secret to creating natural-looking motion is layering. A single sine wave, no matter how you tweak it, will always look artificial because it has only one frequency. Real-world surfaces are never this simple. Think of the surface of an ocean: there are large, slow swells (low frequency), smaller, faster waves on top of them (medium frequency), and tiny, rapid ripples on top of those (high frequency).
By combining waves of different frequencies and amplitudes, we can mimic this multi-layered detail and create motion that feels rich and organic.
The Concept of Octaves
In music, an octave is the interval between one note and another with double its frequency. In computer graphics, we borrow this term to describe layers of waves, where each successive layer (or "octave") typically has a higher frequency and a lower amplitude than the one before it.
This technique is often called Fractal Brownian Motion (fBm) when applied to noise, but the principle is identical for sine waves. We build the final shape by adding layers of detail.
Octave 1 (Low Frequency, High Amplitude):
Defines the large, primary shapes of the motion.
Like the slow, rolling swells in the deep ocean.
~~~~~~~~~~~~~~~~~~~~~
Octave 2 (Medium Frequency, Medium Amplitude):
Adds secondary motion on top of the primary shapes.
Like the smaller waves riding on the swells.
~~ ~~ ~~ ~~ ~~ ~~
Octave 3 (High Frequency, Low Amplitude):
Adds fine, textural detail.
Like the tiny ripples on the surface of the waves.
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Combined Result:
A complex, natural-looking surface with detail at multiple scales.
~~⁓~⁓~~⁓~~⁓~⁓~⁓~~⁓~
Implementing Layered Waves
The implementation is surprisingly simple: we just loop several times, and in each iteration, we add a wave, increase our frequency, and decrease our amplitude.
fn layered_wave_displacement(
    position: vec3<f32>,
    time: f32,
) -> f32 {
    var total_displacement = 0.0;
    var amplitude = 0.3; // Start with the largest amplitude
    var frequency = 2.0; // Start with the lowest frequency

    // Octave 1: The main, large-scale wave.
    total_displacement += sin(position.x * frequency - time * 1.5) * amplitude;

    // Octave 2: Smaller, faster wave.
    amplitude *= 0.5; // Reduce amplitude.
    frequency *= 2.0; // Increase frequency.
    total_displacement += sin(position.x * frequency - time * 2.0) * amplitude;

    // Octave 3: Even smaller, faster ripples.
    amplitude *= 0.5;
    frequency *= 2.0;
    total_displacement += sin(position.x * frequency - time * 3.0) * amplitude;

    // Octave 4: Fine, textural detail.
    amplitude *= 0.5;
    frequency *= 2.0;
    total_displacement += sin(position.x * frequency - time * 4.0) * amplitude;

    return total_displacement;
}
This pattern is the core of procedural generation. In each step:
We increase the frequency. This adds detail at a smaller scale.
We decrease the amplitude. This ensures that the smaller details don't overpower the larger, foundational shapes.
A Flexible, Parameterized System
Hard-coding the octaves works, but a more powerful approach is to create a flexible function where we can control the layering process with parameters.
fn multi_octave_wave(
    position: vec3<f32>,
    time: f32,
    octaves: u32,
    persistence: f32, // How much amplitude is retained each octave.
    lacunarity: f32,  // How much frequency increases each octave.
) -> f32 {
    var total_displacement = 0.0;
    var amplitude = 1.0;
    var frequency = 1.0;

    for (var i = 0u; i < octaves; i = i + 1u) {
        // Calculate the wave for the current octave.
        let wave = sin(position.x * frequency - time * (1.0 + f32(i) * 0.5));
        total_displacement += wave * amplitude;

        // Prepare for the next octave.
        amplitude *= persistence;
        frequency *= lacunarity;
    }

    return total_displacement;
}
This function gives us artistic control over the character of our motion.
Parameter Guide:
octaves: Controls the level of detail.
3 or 4 is usually enough. More octaves add finer detail but cost more performance.
persistence: Controls the "roughness." It is the factor by which amplitude decreases for each octave.
A typical value is 0.5: each octave is half as influential as the last.
A lower value (0.3) creates a smoother result, dominated by large-scale waves.
A higher value (0.7) creates a "noisier," more detailed result where the small ripples are more prominent.
lacunarity: Controls the "gap" between frequencies. It is the factor by which frequency increases for each octave.
A typical value is 2.0: each octave has double the frequency of the last.
A lower value (1.5) creates a softer, smoother blend between layers.
A higher value (3.0) creates a more chaotic, high-frequency result.
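To see how persistence and lacunarity interact, here is a CPU-side Rust mirror of the multi_octave_wave shader function. One detail is our own addition, not in the shader above: dividing by the summed amplitudes, which keeps the result in roughly [-1, 1] regardless of the octave count.

```rust
// CPU-side mirror of the multi_octave_wave shader function, plus
// normalization by the total amplitude (an optional extra step).
fn multi_octave_wave(x: f32, time: f32, octaves: u32, persistence: f32, lacunarity: f32) -> f32 {
    let mut total = 0.0;
    let mut amplitude = 1.0;
    let mut frequency = 1.0;
    let mut max_amplitude = 0.0; // Sum of all octave amplitudes.
    for i in 0..octaves {
        total += (x * frequency - time * (1.0 + i as f32 * 0.5)).sin() * amplitude;
        max_amplitude += amplitude;
        amplitude *= persistence;
        frequency *= lacunarity;
    }
    total / max_amplitude // Keeps the result in roughly [-1, 1].
}

fn main() {
    // With persistence 0.5 over 4 octaves, the amplitudes sum to
    // 1 + 0.5 + 0.25 + 0.125 = 1.875, so dividing bounds the output.
    let v = multi_octave_wave(1.3, 0.7, 4, 0.5, 2.0);
    assert!(v.abs() <= 1.0);
    println!("normalized wave value: {v:.3}");
}
```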
Varying the Direction of Each Octave
For the ultimate level of complexity, we can make each octave's wave travel in a different direction. This breaks up the grid-like alignment and creates a swirling, turbulent effect that is far more natural than waves moving in a single direction.
We achieve this using the dot product. The dot product of two vectors gives a single number that represents how much one vector points in the direction of the other. By projecting our vertex position onto a direction vector, we get a value that tells us "how far along" that direction the vertex is. We can then use this value as the input to our sin function, effectively making the wave travel along our custom direction.
fn directional_multi_wave(
    position: vec3<f32>,
    time: f32,
) -> vec3<f32> {
    var displacement = vec3<f32>(0.0);

    // Octave 1: Main wave, traveling along a diagonal.
    let dir1 = normalize(vec2<f32>(1.0, 1.0));
    let wave_input_1 = dot(position.xz, dir1);
    displacement.y += sin(wave_input_1 * 2.0 - time * 1.5) * 0.25;

    // Octave 2: A faster wave, traveling along a different diagonal.
    let dir2 = normalize(vec2<f32>(1.0, -0.5));
    let wave_input_2 = dot(position.xz, dir2);
    displacement.y += sin(wave_input_2 * 4.0 - time * 2.0) * 0.12;

    // Octave 3: A high-frequency ripple, traveling along yet another direction.
    let dir3 = normalize(vec2<f32>(-0.5, 1.0));
    let wave_input_3 = dot(position.xz, dir3);
    displacement.y += sin(wave_input_3 * 8.0 - time * 3.0) * 0.06;

    return displacement;
}
This technique is incredibly powerful. By simply adding together a few directional sine waves, we have created a complex interference pattern that looks chaotic, turbulent, and convincingly natural - perfect for simulating water, wind, or energy fields.
Using Noise Functions for Displacement
Sine waves are powerful, but they have a fundamental limitation: they are periodic. They repeat in a perfectly predictable pattern forever. This repetition is something our brains quickly pick up on, making the effect look artificial. To break this cycle and create motion that feels genuinely organic and unpredictable, we need a source of structured randomness. We need noise.
What is Noise?
In graphics, "noise" doesn't mean the harsh, chaotic static you see on an old TV. Instead, it refers to a special kind of pseudo-randomness that varies smoothly across space. Think of it as the difference between a random number generator and a smoothly rolling landscape.

Common Types of Noise:
Perlin/Simplex Noise: The most famous types. They are specifically designed to look natural and avoid grid-like artifacts.
Value Noise: A simpler form, created by interpolating between random values on a grid.
Worley Noise: Creates cellular or Voronoi patterns, perfect for effects like cracked earth or water caustics.
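To make "interpolating between random values on a grid" concrete, here is a minimal 1D value noise in CPU-side Rust. The integer hash is an arbitrary choice of ours, purely for illustration:

```rust
// A minimal 1D value noise: random values at integer coordinates,
// smoothly interpolated in between.
fn hash(n: u32) -> f32 {
    // Cheap integer hash mapped to [0, 1). The constants are arbitrary.
    let mut x = n.wrapping_mul(0x27d4_eb2d);
    x ^= x >> 15;
    x = x.wrapping_mul(0x85eb_ca6b);
    x ^= x >> 13;
    (x & 0xffff) as f32 / 65536.0
}

fn value_noise(x: f32) -> f32 {
    let i = x.floor();
    let f = x - i; // Fractional part, in [0, 1).
    // Smoothstep-style fade curve so the slope is zero at grid points.
    let t = f * f * (3.0 - 2.0 * f);
    let a = hash(i as u32);
    let b = hash(i as u32 + 1);
    a + (b - a) * t // Linear interpolation between the two grid values.
}

fn main() {
    // At integer coordinates the noise equals the grid value exactly;
    // in between, it varies smoothly rather than jumping.
    println!("value_noise(3.0) = {}", value_noise(3.0));
    println!("value_noise(3.5) = {}", value_noise(3.5));
}
```

The same idea extends to 2D by hashing grid corners and interpolating twice, which is essentially what a pre-generated noise texture stores for us.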
Noise via Texture Lookup
A key point for WGSL developers is that WGSL has no built-in noise functions like noise() or perlin(). (GLSL technically defines noise functions, but they are deprecated and return zero on virtually all hardware.) The standard and most performant way to use noise in Bevy and WGSL is to pre-generate it on the CPU, store it in a texture, and then sample that texture in our shader.
// Define the texture and sampler in our material's bind group.
@group(2) @binding(1)
var noise_texture: texture_2d<f32>;
@group(2) @binding(2)
var noise_sampler: sampler;

fn sample_noise(position: vec3<f32>, time: f32) -> f32 {
    // Use the vertex's world position to create UV coordinates.
    // The scale factor controls the "frequency" of the noise.
    let uv = position.xz * 0.1;

    // To make the noise animate, we scroll the UV coordinates over time.
    // Using different speeds for X and Y prevents obvious diagonal scrolling.
    let animated_uv = uv + vec2<f32>(time * 0.05, time * 0.03);

    // Sample the texture to get the noise value.
    let noise_value = textureSampleLevel(
        noise_texture,
        noise_sampler,
        animated_uv,
        0.0 // Mip level
    ).r; // We only need one channel (red).

    // The texture stores values from 0.0 to 1.0.
    // We remap this to the range -1.0 to 1.0 for displacement.
    return noise_value * 2.0 - 1.0;
}
Why We Must Use textureSampleLevel in Vertex Shaders
You likely noticed we used textureSampleLevel and not the more common textureSample. This is not optional; it is a fundamental requirement when sampling textures inside a vertex shader.
The reason lies in the graphics pipeline. When you use textureSample() in a fragment shader, the GPU performs a clever trick. It looks at neighboring pixels to determine how "zoomed in" or "zoomed out" the texture is on the screen. Based on that, it automatically selects the ideal mipmap level to prevent visual artifacts like shimmering or moiré patterns.
This calculation requires information about the final pixels on the screen. The GPU only knows this after the vertex shader has finished and the triangles have been converted into pixels (a process called rasterization).
Since our vertex shader runs before rasterization, the GPU has no pixel information. It cannot automatically choose a mipmap. Therefore, we must use textureSampleLevel, which lets us manually specify the exact mipmap level we want to sample from. By providing 0.0 as the final argument, we are explicitly telling the GPU, "Use the highest-resolution version of this texture (mip level 0)."
Generating a Noise Texture in Rust
Creating the noise texture on the CPU is straightforward using the noise crate. The following function generates a seamless 2D Perlin noise pattern and stores it in a Bevy Image asset. This is typically done once during your application's setup.
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssetUsages; // Re-exported from bevy::asset in newer Bevy versions.
use bevy::render::render_resource::{Extent3d, TextureDimension, TextureFormat};
use noise::{NoiseFn, Perlin}; // Add `noise = "0.9"` to your Cargo.toml

// Generates a new Image asset containing Perlin noise.
fn generate_noise_texture(size: u32) -> Image {
    // The Perlin generator. The seed ensures we get the same noise every time.
    let perlin = Perlin::new(42);
    let mut data = Vec::with_capacity((size * size * 4) as usize);

    for y in 0..size {
        for x in 0..size {
            // Normalize coordinates to the 0.0-1.0 range.
            let nx = x as f64 / size as f64;
            let ny = y as f64 / size as f64;

            // Sample the noise function. The multiplier (e.g., 5.0) acts
            // like a frequency setting for the generated pattern.
            let noise_value = perlin.get([nx * 5.0, ny * 5.0]); // Returns -1.0 to 1.0

            // Remap the noise value from [-1.0, 1.0] to [0, 255] for the texture.
            let byte_value = ((noise_value + 1.0) * 0.5 * 255.0) as u8;

            // Write the same value to R, G, and B channels for a grayscale image.
            data.push(byte_value); // R
            data.push(byte_value); // G
            data.push(byte_value); // B
            data.push(255);        // A (fully opaque)
        }
    }

    Image::new(
        Extent3d { width: size, height: size, depth_or_array_layers: 1 },
        TextureDimension::D2,
        data,
        TextureFormat::Rgba8Unorm,
        RenderAssetUsages::default(),
    )
}
Multi-Octave Noise (Fractal Brownian Motion)
Just as we layered sine waves, we can layer noise to create Fractal Brownian Motion (fBm). This is the exact same "octave" principle, but applied to noise samples instead of sin() calls. The result is a much richer, more detailed, and more natural-looking noise pattern.
fn fbm_noise(position: vec3<f32>, time: f32, octaves: u32) -> f32 {
    var total_noise = 0.0;
    var amplitude = 1.0; // Initial amplitude; reduced by persistence each octave.
    var frequency = 1.0; // Initial frequency; scaled by lacunarity each octave.

    for (var i = 0u; i < octaves; i = i + 1u) {
        // Sample noise at the current frequency.
        let uv = position.xz * frequency * 0.1 + vec2<f32>(time * 0.05);
        let noise = textureSampleLevel(noise_texture, noise_sampler, uv, 0.0).r * 2.0 - 1.0;
        total_noise += noise * amplitude;

        // Prepare for the next octave.
        amplitude *= 0.5; // persistence = 0.5
        frequency *= 2.0; // lacunarity = 2.0
    }

    return total_noise;
}
This gives us rich, organic variation with detail at multiple scales - the hallmark of natural phenomena.
Combining Noise and Waves: The Best of Both Worlds
The most powerful and professional approach is often to combine periodic waves with aperiodic noise. Each component plays a specific role:
Waves provide the rhythm: The large, predictable, underlying motion (e.g., the main flapping of a flag).
Noise provides the chaos: The small, unpredictable, organic variations (e.g., the turbulent fluttering on the flag's surface).
fn hybrid_displacement(
    position: vec3<f32>,
    time: f32,
) -> vec3<f32> {
    // 1. The main, rhythmic motion from a sine wave.
    let base_wave = sin(position.x * 2.0 - time * 1.5) * 0.2;

    // 2. The organic, turbulent detail from multi-octave noise.
    let turbulence = fbm_noise(position, time, 3u) * 0.1;

    // 3. Combine them. The wave provides the main structure, and
    //    the noise "perturbs" it to make it look natural.
    let total_displacement = base_wave + turbulence;
    return vec3<f32>(0.0, total_displacement, 0.0);
}
This hybrid approach gives you the control and predictability of waves plus the realism and organic detail of noise.
Advanced Twist and Spiral Deformations
In the last article, we introduced a basic twist effect that rotated a mesh around an axis, like a rigid pole. While functional, it was linear and uniform. Now, we'll explore how to add artistic control and organic motion to our twists, making them bend, bulge, and undulate in more complex and interesting ways.
Twist with Controlled Falloff
Real-world twisting rarely happens uniformly. A rope frays more at the ends, and a character's arm twists from the shoulder, not the elbow. We can control where a twist has the most effect by using falloff.
Radial Falloff (Twisting from the Edges)
Let's create a twist that is strongest at the outer edges of a mesh and has no effect at the center. We can achieve this by making the twist angle proportional to the vertex's distance from the central axis.
fn radial_falloff_twist(
    position: vec3<f32>,
    time: f32,
) -> vec3<f32> {
    // Calculate the distance from the central Y-axis in the XZ plane.
    let radius = length(position.xz);

    // The twist angle is now a function of this radius.
    // Vertices with radius = 0 (at the center) will have angle = 0.
    let twist_amount = sin(time) * 3.0; // Max twist in radians
    let angle = radius * twist_amount;

    // Apply the standard 2D rotation to the XZ coordinates.
    let cos_a = cos(angle);
    let sin_a = sin(angle);
    var twisted = position;
    twisted.x = position.x * cos_a - position.z * sin_a;
    twisted.z = position.x * sin_a + position.z * cos_a;
    return twisted;
}
This is great for creating swirling vortex effects or objects that feel like they are being spun from their edges.
Height-Based Falloff with smoothstep
Often, we want a twist to occur only within a specific vertical section of a mesh. Using smoothstep, we can create a beautifully clean transition from "no twist" to "full twist."
fn height_smooth_twist(
    position: vec3<f32>,
    time: f32,
    twist_start_y: f32, // The Y-coordinate where the twist begins.
    twist_end_y: f32,   // The Y-coordinate where the twist reaches full strength.
) -> vec3<f32> {
    // smoothstep creates a value that smoothly goes from 0.0 to 1.0
    // as position.y moves from twist_start_y to twist_end_y.
    let twist_influence = smoothstep(twist_start_y, twist_end_y, position.y);

    // The final angle is the base twist amount scaled by our influence factor.
    let base_twist = sin(time * 2.0) * 3.0; // Max twist
    let angle = twist_influence * base_twist;

    // Apply the rotation.
    let cos_a = cos(angle);
    let sin_a = sin(angle);
    var twisted = position;
    twisted.x = position.x * cos_a - position.z * sin_a;
    twisted.z = position.x * sin_a + position.z * cos_a;
    return twisted;
}
The use of smoothstep is key. It avoids the abrupt, mechanical start and stop of a linear twist, creating a much more elegant and professional-looking deformation.
Spiral Deformations
A spiral is simply a twist combined with an outward (or inward) displacement. As the vertices rotate around the central axis, we also push them away from it, with the push amount depending on their height. This creates a classic spiral or vortex shape.
The logic involves converting a vertex's XZ coordinates to polar coordinates (angle and distance/radius), modifying them, and then converting back to Cartesian coordinates (X and Z).
fn spiral_deformation(
    position: vec3<f32>,
    time: f32,
) -> vec3<f32> {
    // 1. Calculate the twist angle based on height and time.
    let twist_angle = position.y * 2.0 + time;

    // 2. Calculate the new radius of the vertex.
    //    It starts with its original radius and expands based on height.
    let base_radius = length(position.xz);
    let height_influence = position.y * 0.1;
    let spiral_radius = base_radius + height_influence;

    // 3. Reconstruct the new XZ position.
    //    First, find the vertex's original angle around the Y-axis.
    let original_angle = atan2(position.z, position.x);
    //    Then, add our twist to get the new angle.
    let new_angle = original_angle + twist_angle;

    // Convert back from polar (angle, radius) to Cartesian (x, z) coordinates.
    var spiraled_position = position;
    spiraled_position.x = spiral_radius * cos(new_angle);
    spiraled_position.z = spiral_radius * sin(new_angle);
    return spiraled_position;
}
This is perfect for effects like tornadoes, magical spells, or stylized horns.
Dynamic Twisting with Waves
Instead of a uniform twist, what if the amount of twist undulated along the mesh? We can achieve this by using a sine wave to control the twist angle at different heights. This creates a much more organic, "living" motion, as if the object is coiling and uncoiling like a snake.
fn wave_twist(
    position: vec3<f32>,
    time: f32,
) -> vec3<f32> {
    // Use one sine wave to determine the twist amount at each height.
    let twist_wave = sin(position.y * 3.0 - time * 2.0) * 1.5;

    // Use a second, phase-shifted wave to add more complexity.
    let phase_wave = cos(position.y * 2.0 + time * 1.5) * 0.5;

    // The final angle is the combination of these waves.
    let angle = twist_wave + phase_wave;

    let cos_a = cos(angle);
    let sin_a = sin(angle);
    var twisted = position;
    twisted.x = position.x * cos_a - position.z * sin_a;
    twisted.z = position.x * sin_a + position.z * cos_a;
    return twisted;
}
Multi-Axis Twisting
For truly complex deformations, we can apply twists around multiple axes in sequence. For example, we can twist around the Y-axis and then apply another twist around the X-axis.
Important: The order of these rotations matters! Twisting on Y then X produces a different result than twisting on X then Y.
fn multi_axis_twist(
    position: vec3<f32>,
    time: f32,
) -> vec3<f32> {
    var result = position;

    // 1. Twist around the Y-axis (affecting X and Z).
    let angle_y = position.y * sin(time) * 1.5;
    let cos_y = cos(angle_y);
    let sin_y = sin(angle_y);
    let temp_x = result.x * cos_y - result.z * sin_y;
    result.z = result.x * sin_y + result.z * cos_y;
    result.x = temp_x;

    // 2. Then, twist the result around the X-axis (affecting Y and Z).
    let angle_x = position.x * cos(time * 0.7) * 0.5;
    let cos_x = cos(angle_x);
    let sin_x = sin(angle_x);
    let temp_y = result.y * cos_x - result.z * sin_x;
    result.z = result.y * sin_x + result.z * cos_x;
    result.y = temp_y;

    return result;
}
Performance Note: Each rotation involves several multiplications and trigonometric functions. Chaining them together can become computationally expensive. Use multi-axis twisting sparingly, typically for "hero" objects or specific, high-impact visual effects.
Custom Displacement Maps
So far, all of our effects have been procedural - generated by mathematical formulas. This is powerful, but it can be difficult to achieve a specific, deliberate shape. What if you want a monster's veins to bulge in a precise pattern, or terrain to rise into specific mountain peaks? For this, we turn to displacement maps.
A displacement map is a texture that gives artists direct, pixel-level control over vertex deformation. Instead of a formula dictating the displacement, the shader "reads" the color of a pixel from the map and uses that value to determine how much to push or pull the corresponding vertex. This workflow moves the creative control from the programmer to the artist.
Understanding Displacement Maps
Typically, a displacement map is a grayscale image where the brightness of each pixel encodes the displacement amount:
Mid-gray (Value 0.5): Represents zero displacement. Vertices corresponding to these pixels will not move.
White (Value 1.0): Represents maximum positive displacement, pushing vertices "outward" along their normal.
Black (Value 0.0): Represents maximum negative displacement, pulling vertices "inward" along their normal.

This allows an artist to simply paint the desired deformation. Painting with white raises the geometry, while painting with black carves into it.
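This encoding is just a linear remap. The following CPU-side Rust sketch traces the round trip from a stored grayscale byte to a signed displacement, mirroring the arithmetic the shader performs (the strength value is an arbitrary example):

```rust
// Decode a grayscale displacement-map byte into a signed displacement.
fn decode_displacement(byte: u8, strength: f32) -> f32 {
    let value = byte as f32 / 255.0; // The sampled texture value, in [0, 1].
    (value * 2.0 - 1.0) * strength   // Remap to [-strength, +strength].
}

fn main() {
    let strength = 0.3; // example intensity
    println!("black:    {}", decode_displacement(0, strength));   // Maximum inward pull.
    println!("white:    {}", decode_displacement(255, strength)); // Maximum outward push.
    println!("mid-gray: {}", decode_displacement(128, strength)); // Near zero (128/255 is not exactly 0.5).
}
```

Note the small wrinkle in 8-bit maps: there is no byte that encodes exactly 0.5, so "neutral" gray produces a tiny residual displacement unless you account for it.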
Sampling a Displacement Map in the Shader
The implementation is very similar to sampling a noise texture. We use the vertex's UV coordinates to look up the correct pixel in the displacement map.
@group(2) @binding(1)
var displacement_map: texture_2d<f32>;
@group(2) @binding(2)
var displacement_sampler: sampler;

fn apply_displacement_map(
    position: vec3<f32>,
    normal: vec3<f32>,
    uv: vec2<f32>,
    strength: f32, // A uniform to control the overall effect intensity
) -> vec3<f32> {
    // Sample the map using the vertex's UV coordinates.
    // Remember: we must use textureSampleLevel in a vertex shader.
    let displacement_value = textureSampleLevel(
        displacement_map,
        displacement_sampler,
        uv,
        0.0 // Mip level 0
    ).r; // We only need the red channel for a grayscale map.

    // Remap the texture value from the [0, 1] range to the [-1, 1] range.
    // Mid-gray (0.5) becomes 0.0, black (0.0) becomes -1.0, and white (1.0) becomes 1.0.
    let displacement_amount = (displacement_value * 2.0 - 1.0) * strength;

    // Apply the final displacement along the vertex's normal.
    return position + normal * displacement_amount;
}
Animated Displacement Maps
While displacement maps are often static, we can easily animate them by manipulating the UV coordinates we use for sampling. Scrolling the UVs across a tileable displacement map is a classic technique for creating effects like flowing lava, moving force fields, or rippling water surfaces.
fn animated_displacement(
    position: vec3<f32>,
    normal: vec3<f32>,
    uv: vec2<f32>,
    time: f32,
) -> vec3<f32> {
    // Scroll the UV coordinates over time to create motion.
    let animated_uv = uv + vec2<f32>(time * 0.1, time * 0.05);

    // Sample the map with the animated coordinates.
    let displacement_value = textureSampleLevel(
        displacement_map,
        displacement_sampler,
        animated_uv,
        0.0
    ).r;

    let displacement = (displacement_value * 2.0 - 1.0) * 0.3;
    return position + normal * displacement;
}
The Hybrid Approach: Maps + Procedural Effects
The most powerful workflow often combines both techniques. An artist creates a displacement map to define the main, static shapes, and then the programmer layers procedural animation on top to add life and dynamic detail.
fn hybrid_displacement_map(
position: vec3<f32>,
normal: vec3<f32>,
uv: vec2<f32>,
time: f32,
) -> vec3<f32> {
// 1. Static, artist-controlled displacement from the map.
let map_value = textureSampleLevel(displacement_map, displacement_sampler, uv, 0.0).r;
let map_displacement = (map_value * 2.0 - 1.0) * 0.2;
// 2. Dynamic, procedural animation layered on top.
let wave = sin(position.x * 5.0 - time * 2.0) * 0.1;
// 3. Combine both for the final result.
let total_displacement = map_displacement + wave;
return position + normal * total_displacement;
}
This gives you the best of both worlds: the deliberate control of a hand-painted map and the dynamic energy of procedural animation.
Advanced: Multi-Channel Displacement Maps
For even more control, we can use a color texture where the R, G, and B channels store displacement along the X, Y, and Z axes respectively. This is often called a Vector Displacement Map. It allows for much more complex deformations, such as undercuts and overhangs, that are impossible with a simple grayscale map that only displaces along the surface normal.
fn multi_channel_displacement(
position: vec3<f32>,
uv: vec2<f32>,
strength: f32,
) -> vec3<f32> {
// Sample the full RGB color from the texture.
let displacement_vector = textureSampleLevel(
displacement_map,
displacement_sampler,
uv,
0.0
).rgb; // .rgb gets a vec3<f32>
// Remap each channel from [0, 1] to [-1, 1].
let displacement = (displacement_vector * 2.0 - 1.0) * strength;
// Displace directly along the X, Y, and Z axes (in the mesh's local space).
return position + displacement;
}
This is an advanced technique commonly used in film and high-end games where complex details from a high-resolution sculpt (e.g., from ZBrush or Blender) are "baked" into a texture to be applied to a lower-resolution game model.
Preserving Mesh Topology
As we apply more extreme displacement, we run the risk of literally tearing our mesh apart. When a vertex is pushed so far that its triangle flips inside-out, intersects another triangle, or stretches into a razor-thin sliver, you get ugly visual artifacts. This is a breakdown of the mesh's topology - the fundamental structure of how its vertices, edges, and faces are connected.
Preserving this topology is crucial for creating clean, stable, and professional-looking effects.
Understanding the Problem
Imagine a simple quad made of two triangles. If we apply a wave that is too strong, the vertices at the peak of the wave can be pushed past the vertices in the trough.

Our goal is to apply dramatic displacement without causing this kind of topological breakdown.
Solution 1: Limit the Displacement Magnitude
The simplest and most direct way to prevent topology issues is to put a hard cap on how far any vertex can move. We calculate the desired displacement, measure its length, and if it exceeds our maximum allowed distance, we scale it back.
fn limited_displacement(
position: vec3<f32>,
displacement_vector: vec3<f32>,
max_displacement: f32,
) -> vec3<f32> {
let displacement_length = length(displacement_vector);
if (displacement_length > max_displacement) {
// The displacement is too large.
// We keep its direction but scale its length down to the maximum.
let limited_vector = normalize(displacement_vector) * max_displacement;
return position + limited_vector;
}
// The displacement is within safe limits.
return position + displacement_vector;
}
This is a brute-force but effective method to prevent the most extreme artifacts.
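The same clamp can be checked on the CPU. A small Rust sketch using plain [f32; 3] vectors (the helper name is ours):

```rust
/// Cap a displacement vector's length at `max_len`, preserving its direction.
fn limit_displacement(d: [f32; 3], max_len: f32) -> [f32; 3] {
    let len = (d[0] * d[0] + d[1] * d[1] + d[2] * d[2]).sqrt();
    if len > max_len {
        // Too large: rescale so the length is exactly max_len.
        let s = max_len / len;
        [d[0] * s, d[1] * s, d[2] * s]
    } else {
        d
    }
}

fn main() {
    // A (3, 4, 0) displacement has length 5; capped at 2.5 it halves to (1.5, 2, 0).
    let capped = limit_displacement([3.0, 4.0, 0.0], 2.5);
    assert_eq!(capped, [1.5, 2.0, 0.0]);
    println!("clamp ok");
}
```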
Solution 2: Normal-Based Displacement Filtering
Another common issue is when displacement pushes vertices "through" the surface, causing triangles to flip inside-out. We can prevent this by checking the direction of the displacement against the vertex's normal. If the displacement is trying to move the vertex against its normal (i.e., inward), we can choose to reduce or reject that movement.
fn topology_safe_displacement(
position: vec3<f32>,
normal: vec3<f32>,
displacement_vector: vec3<f32>,
) -> vec3<f32> {
// The dot product tells us how much the displacement aligns with the normal.
// A positive result means it's moving outward.
// A negative result means it's moving inward, against the normal.
let normal_alignment = dot(displacement_vector, normal);
if (normal_alignment < 0.0) {
// This displacement would move the vertex "inside" the surface.
// To prevent this, we can remove the inward component entirely,
// leaving only the part of the displacement that runs parallel
// (tangential) to the surface.
let inward_component = normal * normal_alignment;
let tangential_displacement = displacement_vector - inward_component;
return position + tangential_displacement;
}
// The displacement is moving outward, which is generally safe.
return position + displacement_vector;
}
This technique is particularly useful for effects like shockwaves or explosions where you want a purely outward bulge without any inward-pulling artifacts.
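The projection math is worth sanity-checking on the CPU as well. A Rust sketch of the inward-component removal, again with plain [f32; 3] vectors (names our own):

```rust
/// Remove the inward (anti-normal) component of a displacement,
/// keeping only the tangential part. `n` is assumed to be unit-length.
fn remove_inward(d: [f32; 3], n: [f32; 3]) -> [f32; 3] {
    // Dot product: how much the displacement aligns with the normal.
    let alignment = d[0] * n[0] + d[1] * n[1] + d[2] * n[2];
    if alignment < 0.0 {
        // Subtract the projection onto the normal, leaving the tangential part.
        [
            d[0] - n[0] * alignment,
            d[1] - n[1] * alignment,
            d[2] - n[2] * alignment,
        ]
    } else {
        d
    }
}

fn main() {
    // Normal points up; displacement pushes sideways and down.
    let safe = remove_inward([1.0, -2.0, 0.0], [0.0, 1.0, 0.0]);
    // The downward (inward) part is removed; the sideways part survives.
    assert_eq!(safe, [1.0, 0.0, 0.0]);
    println!("projection ok");
}
```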
The Real Solution: Mesh Resolution
Ultimately, the single most important factor in preserving topology is the resolution of your mesh.
Low-Resolution Mesh (Few Vertices): Will break down very quickly under displacement. The long edges between vertices have no flexibility and will easily intersect.
High-Resolution Mesh (Many Vertices): Can handle much more extreme displacement. The smaller, more numerous triangles can stretch and deform to accommodate the movement, resulting in a smooth, continuous surface.
This is why effects like realistic water or cloth require highly tessellated (subdivided) geometry. There is a direct trade-off: higher resolution gives better deformation quality but comes at a higher performance cost.
Rule of Thumb: If your maximum displacement amount is D, your mesh's vertices should ideally be spaced no more than D / 2 units apart. This helps ensure that even at maximum displacement, a vertex is unlikely to be pushed past its neighbor.
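That rule of thumb translates directly into a mesh-resolution budget. A hypothetical Rust helper (our own, not a Bevy API) for sizing a subdivided plane:

```rust
/// Number of segments a plane edge of `edge_length` units needs so that
/// vertex spacing never exceeds max_displacement / 2.
fn required_segments(edge_length: f32, max_displacement: f32) -> u32 {
    let max_spacing = max_displacement / 2.0;
    // Round up: one segment too many is cheap, one too few risks artifacts.
    (edge_length / max_spacing).ceil() as u32
}

fn main() {
    // A 2-unit-wide flag whose shader can displace vertices by up to
    // 0.25 units needs edges no longer than 0.125 units: 16 segments.
    assert_eq!(required_segments(2.0, 0.25), 16);
    println!("segments ok");
}
```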
Culling: The Hidden Gotchas of Vertex Displacement
When we move vertices in a shader, we create a knowledge gap. The CPU, which is responsible for high-level scene management, is no longer perfectly aware of where the mesh actually is. The GPU has the final, displaced vertex positions, but the CPU is still working with the original, pre-displacement data. This disconnect can lead to two frustrating and common rendering bugs.
Problem 1: The Disappearing Mesh (Frustum Culling)
To render a scene efficiently, Bevy's CPU-side renderer first performs frustum culling. It checks the Axis-Aligned Bounding Box (AABB) of each mesh - a simple, invisible box that completely encloses the original geometry - against the camera's view cone (the frustum). If the box is outside the camera's view, the engine concludes the object is not visible and doesn't even bother sending it to the GPU to be rendered. This saves a huge amount of work.
The problem arises when our vertex shader displaces vertices outside of this original bounding box.

The CPU performs its check, sees the original AABB is off-screen, and culls the object. It never even makes it to the GPU, so our vertex shader never runs, and the object simply disappears, even though it should have been visible.
This is only a problem for large-scale displacement that pushes vertices significantly beyond the mesh's original boundaries.
Solution A: The Simple Fix (NoFrustumCulling)
The easiest way to solve this is to tell Bevy to skip the check entirely for this one object. You can do this by adding the NoFrustumCulling component to the entity.
// Tell Bevy: "Don't perform frustum culling on this entity.
// Just send it to the GPU and trust that I know what I'm doing."
commands.spawn((
Mesh3d(my_mesh_handle),
MeshMaterial3d(my_material_handle),
Transform::default(),
NoFrustumCulling, // Add this component
));
This is a perfectly valid solution, but it does remove a potentially useful optimization.
Solution B: The Optimized Fix (Expanding the AABB)
A more robust solution is to give the CPU better information. We can manually compute a larger AABB for our mesh that accounts for the maximum possible displacement and assign it when we create the mesh asset.
// When creating and spawning your mesh in Rust...
use bevy::math::Vec3A;
use bevy::render::primitives::Aabb;
let mut mesh = Mesh::new(PrimitiveTopology::TriangleList, RenderAssetUsages::default());
// ... populate vertices, etc. ...
// Compute the default AABB, then enlarge it to cover the worst-case displacement.
if let Some(mut aabb) = mesh.compute_aabb() {
// Expand the box in every direction by the maximum possible displacement.
let max_displacement = 2.0; // The largest offset your shader can produce
aabb.half_extents += Vec3A::splat(max_displacement);
// Bevy only auto-computes an Aabb for entities that lack one, so inserting
// our expanded box alongside the mesh overrides the default bounds.
commands.spawn((Mesh3d(meshes.add(mesh)), MeshMaterial3d(material_handle), aabb));
}
Now, the CPU has an accurate bounding box to work with and can perform culling correctly.
Problem 2: The Invisible Backside (Backface Culling)
The second issue is unrelated to object boundaries but is critical for thin surfaces like our upcoming flag. By default, for performance, GPUs practice backface culling. They only render triangles whose front face is pointing towards the camera. The back side is assumed to be hidden inside a solid object and is discarded.
For a thin object like a flag, this is a disaster. As the flag waves, you will inevitably see its back side. With backface culling enabled, the back of the flag will be invisible, creating ugly, see-through holes.

The Solution: Disable Culling in the Material
We need to tell the render pipeline that for this specific material, both sides of a triangle should be rendered. We do this by overriding the specialize function in our Material implementation. This function is the designated place to configure the low-level render pipeline state for a material.
// In your material's `impl Material` block
fn specialize(
_pipeline: &bevy::pbr::MaterialPipeline<Self>,
descriptor: &mut RenderPipelineDescriptor,
_layout: &MeshVertexBufferLayoutRef,
_key: MaterialPipelineKey<Self>,
) -> Result<(), SpecializedMeshPipelineError> {
// Find the primitive state, which controls how triangles are handled.
// Set `cull_mode` to `None` to disable backface culling.
descriptor.primitive.cull_mode = None;
Ok(())
}
When to disable backface culling:
Thin surfaces: Flags, leaves, paper, single-plane cloth.
Special effects: Some transparent or translucent surfaces.
Any mesh where the user might see the "inside."
Performance Note: Disabling backface culling effectively doubles the number of fragments the GPU might have to process for that mesh. Use it only when necessary. Don't disable it for solid, enclosed objects like a rock or a character model.
Summary for Our Flag Project
Applying this to our upcoming flag simulation:
The flag's waving motion will be relatively contained. The displacement won't be large enough to move it completely outside its original bounding box, so we do not need NoFrustumCulling.
The flag is a thin, two-sided object. We absolutely do need to disable backface culling by setting cull_mode = None in our material.
Vertex Shader Optimization Techniques
A vertex shader runs for every single vertex of a mesh. A detailed character model might have 50,000 vertices, meaning your shader code will execute 50,000 times per frame. Even small inefficiencies can add up quickly and impact your game's performance. Complex displacement effects, with their loops, texture samples, and trigonometric functions, can be particularly demanding.
Here are some essential strategies to keep your vertex shaders fast and efficient.
Principle 1: Compute on the CPU, Use on the GPU
This is the golden rule of shader optimization. Any calculation that produces a result that is the same for every vertex in a draw call should be done once on the CPU, not thousands of times on the GPU.
A classic example is pre-calculating values based on time.
❌ Inefficient: Calculating sin(time) for every vertex.
// In the shader
fn vertex(/*...*/) -> /*...*/ {
// This sin() is calculated for every single vertex!
let displacement = sin(material.time) * 0.5;
// ...
}
✅ Efficient: Calculating sin(time) once on the CPU and passing the result.
// In a Rust system that runs once per frame
fn update_my_materials(time: Res<Time>, mut materials: ResMut<Assets<MyMaterial>>) {
for (_, material) in materials.iter_mut() {
// Do the expensive work ONCE on the CPU
material.uniforms.time_sin = time.elapsed_secs().sin();
material.uniforms.time_cos = time.elapsed_secs().cos();
}
}
// In the shader
fn vertex(/*...*/) -> /*...*/ {
// We just use the pre-calculated result. Fast!
let displacement = material.time_sin * 0.5;
// ...
}
The performance saving is enormous: one calculation on the CPU versus potentially thousands on the GPU.
Principle 2: Level of Detail (LOD)
Not all objects need the same level of visual fidelity. An object right in front of the camera needs complex, detailed displacement, while an object 100 meters away can get by with a much simpler effect, or none at all. This is the principle of Level of Detail (LOD).
We can implement a simple distance-based LOD directly in our shader.
fn adaptive_displacement(
position: vec3<f32>,
time: f32,
camera_position: vec3<f32>, // Passed in as a uniform
model_matrix: mat4x4<f32>,
) -> vec3<f32> {
// We need to know the vertex's position in world space to check its distance.
let world_pos = (model_matrix * vec4<f32>(position, 1.0)).xyz;
let distance_from_camera = length(world_pos - camera_position);
if (distance_from_camera < 10.0) {
// Close up: Use the highest quality effect (e.g., 4 octaves of noise).
return fbm_displacement(position, time, 4u);
} else if (distance_from_camera < 50.0) {
// Medium distance: Use a cheaper effect (e.g., 2 octaves).
return fbm_displacement(position, time, 2u);
} else {
// Far away: Use the cheapest possible effect (e.g., a single sine wave).
return simple_wave_displacement(position, time);
}
}
This ensures you spend your performance budget where it matters most: on the things the player can actually see up close.
A Note on Branching for LOD: You might notice this code uses if/else, which Principle 4 warns can be expensive. This highlights a key nuance of optimization: it's all about trade-offs. The alternative here would be to calculate all three displacement effects (high, medium, and low) for every single vertex and then blend between them. The cost of running the most complex function for every vertex, even distant ones, would be immense.
In this case, the massive performance gain from completely skipping expensive calculations for distant objects far outweighs the small performance cost of thread divergence at the LOD boundaries. This is a situation where using a branch is the clear and correct optimization.
Principle 3: Minimize Texture Samples
Sampling a texture is a relatively expensive operation because it involves memory access. The fewer texture lookups you perform, the faster your shader will be.
❌ Inefficient: Multiple samples from the same texture to create a crude blur.
// Note: the binding is called noise_sampler because `sampler` is a reserved type name in WGSL.
let uv = in.uv;
let noise1 = textureSampleLevel(noise_tex, noise_sampler, uv, 0.0).r;
let noise2 = textureSampleLevel(noise_tex, noise_sampler, uv + vec2(0.01, 0.0), 0.0).r;
let noise3 = textureSampleLevel(noise_tex, noise_sampler, uv - vec2(0.01, 0.0), 0.0).r;
let result = (noise1 + noise2 + noise3) / 3.0;
✅ Efficient: If possible, pack different data into the R, G, and B channels of a single texture. This lets you get three distinct patterns with a single memory lookup.
// One lookup gets three different noise patterns.
let noise_sample = textureSampleLevel(my_rgb_noise_tex, noise_sampler, in.uv, 0.0);
let result = noise_sample.r * 0.5 + noise_sample.g * 0.3 + noise_sample.b * 0.2;
Principle 4: Avoid Divergent Branching
GPUs achieve their incredible speed by executing the same instruction on a large batch of threads (representing different vertices or fragments) at the same time. This is called SIMD (Single Instruction, Multiple Data).
An if/else statement can break this lockstep execution. If the if condition is based on per-vertex data (like position.y), some threads in a batch will need to execute the if block while others execute the else block. This is called thread divergence, and it forces the GPU to run both paths, one after the other, effectively serializing the work and slowing things down.
❌ Bad: Branching based on per-vertex data.
var displacement: f32;
if (in.position.y > 0.5) {
displacement = calculate_complex_effect_A(in.position, time); // Path A
} else {
displacement = calculate_complex_effect_B(in.position, time); // Path B
}
✅ Better: Use built-in functions like mix() or step() to create the same result without branching. This is called predication.
let effect_A = calculate_complex_effect_A(in.position, time);
let effect_B = calculate_complex_effect_B(in.position, time);
// step() returns 0.0 if y < 0.5, and 1.0 if y >= 0.5.
let blend_factor = step(0.5, in.position.y);
// mix() selects between A and B based on the blend factor.
let displacement = mix(effect_B, effect_A, blend_factor);
Here, all threads execute all the code in lockstep. Both effects are computed, but because no thread ever diverges, this is often faster than paying the serialization cost of a divergent branch - provided the two effect functions are reasonably cheap.
Note: Branching on a uniform value (e.g., if material.use_effect_A) is perfectly fine, because all vertices will take the same path and there will be no divergence.
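We can convince ourselves that the branch-free version selects the same value by re-implementing step() and mix() on the CPU. A Rust sketch (note that step() treats a value exactly on the edge as "above", so the reference branch uses >=):

```rust
/// WGSL-style step(): 0.0 when x < edge, 1.0 otherwise.
fn step(edge: f32, x: f32) -> f32 {
    if x < edge { 0.0 } else { 1.0 }
}

/// WGSL-style mix(): linear blend from a (t = 0) to b (t = 1).
fn mix(a: f32, b: f32, t: f32) -> f32 {
    a * (1.0 - t) + b * t
}

/// Branch-free selection, mirroring the shader code above.
fn select_effect(effect_a: f32, effect_b: f32, y: f32) -> f32 {
    mix(effect_b, effect_a, step(0.5, y))
}

fn main() {
    let (a, b) = (10.0, -10.0);
    for y in [0.2_f32, 0.5, 0.9] {
        // Reference branch (>= to match step's edge behavior).
        let branched = if y >= 0.5 { a } else { b };
        assert_eq!(select_effect(a, b, y), branched);
    }
    println!("predication ok");
}
```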
Principle 5: Trust the Built-ins
The built-in WGSL functions (normalize, length, dot, mix, clamp, etc.) are highly optimized, low-level instructions on the GPU hardware. Always prefer using a built-in function over writing your own version in WGSL.
❌ Slower: let len = sqrt(dot(v, v)); ✅ Faster: let len = length(v);
❌ Slower: let n = v / length(v); ✅ Faster: let n = normalize(v);
Complete Example: Animated Flag with Wind Simulation
Theory is essential, but nothing solidifies understanding like building something tangible. We will now apply everything we've learned to create a physically-inspired, interactive flag simulation. This project will not be a simple, repetitive wave; it will be a dynamic surface that responds to multiple simulated forces, creating a rich and believable sense of motion.
Our Goal
Our goal is to render a flag that looks like it's made of cloth and is being affected by wind. This means it needs to:
Remain fixed along one edge (the flagpole).
Wave and billow in response to a controllable wind force.
Exhibit small, chaotic flutters (turbulence) on top of the main waves.
Droop realistically under gravity when the wind dies down.
Be viewable and correctly lit from all angles.
What This Project Demonstrates
This project is a synthesis of nearly every concept covered in this article:
Multi-Axis & Multi-Octave Waves: We'll layer several sine waves traveling in different directions to create the main wind motion.
Noise for Turbulence: A scrolling noise texture will add organic, high-frequency fluttering.
Hybrid Displacement: The final motion is a combination of procedural waves (wind), noise (turbulence), and simple physics (gravity).
UV-Based Logic: We'll use the mesh's UV coordinates to "pin" the left edge of the flag to the pole and to make the waving effect stronger at the free end.
Normal Recalculation: We will dynamically recalculate the surface normals in the vertex shader to ensure the flag is lit correctly as it deforms.
Backface Culling Disabled: The material will be configured to render both sides of the flag.
Parameter Control: All key simulation parameters (wind speed, direction, gravity) will be exposed as uniforms that we can change in real-time from our Bevy app.
The Shader (assets/shaders/d02_05_flag_simulation.wgsl)
This is where the magic happens. The vertex shader is the heart of the simulation. It executes the following logic for each vertex:
1. Check if Pinned: It first checks the vertex's uv.x coordinate. If it's close to 0.0, the vertex is part of the flagpole edge and its position is not modified.
2. Calculate Forces: For all other vertices, it calculates and combines separate displacement vectors for wind, turbulence, gravity, and fine ripples.
3. Combine and Apply: These vectors are added together, and the total displacement is scaled by uv.x so the effect is weakest at the pole and strongest at the free edge.
4. Recalculate Normal: It then calculates a new surface normal based on the displaced position to keep lighting accurate.
5. Transform to Clip Space: Finally, it computes the final world and clip space positions for rendering.
The fragment shader is simple: it applies basic directional lighting using the recalculated normal and draws a striped pattern to make the flag more recognizable.
#import bevy_pbr::mesh_functions
#import bevy_pbr::view_transformations::position_world_to_clip
struct FlagMaterial {
time: f32,
wind_strength: f32,
wind_direction: vec2<f32>,
wave_frequency: f32,
wave_amplitude: f32,
turbulence_strength: f32,
gravity_strength: f32,
flag_stiffness: f32,
}
@group(2) @binding(0)
var<uniform> material: FlagMaterial;
@group(2) @binding(1)
var noise_texture: texture_2d<f32>;
@group(2) @binding(2)
var noise_sampler: sampler;
struct VertexInput {
@builtin(instance_index) instance_index: u32,
@location(0) position: vec3<f32>,
@location(1) normal: vec3<f32>,
@location(2) uv: vec2<f32>,
}
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) world_position: vec3<f32>,
@location(1) world_normal: vec3<f32>,
@location(2) uv: vec2<f32>,
@location(3) displacement_amount: f32,
}
// Sample noise with multiple octaves (Fractal Brownian Motion)
fn sample_fbm_noise(uv: vec2<f32>, octaves: u32) -> f32 {
var result = 0.0;
var amplitude = 1.0;
var frequency = 1.0;
for (var i = 0u; i < octaves; i++) {
let sample_uv = uv * frequency;
// Use textureSampleLevel for vertex shader (mip level 0.0 = full resolution)
let noise = textureSampleLevel(noise_texture, noise_sampler, sample_uv, 0.0).r;
result += (noise * 2.0 - 1.0) * amplitude;
amplitude *= 0.5;
frequency *= 2.0;
}
return result;
}
// Calculate wind displacement with multiple wave frequencies
fn calculate_wind_displacement(
position: vec3<f32>,
uv: vec2<f32>,
time: f32,
) -> vec3<f32> {
var displacement = vec3<f32>(0.0);
// Distance from pole (left edge) - flags wave more at the free edge
let edge_influence = uv.x; // 0.0 at pole, 1.0 at free edge
// Primary wind wave - travels along wind direction
let wind_phase = dot(vec2<f32>(position.x, position.z), material.wind_direction);
let primary_wave = sin(wind_phase * material.wave_frequency - time * 3.0)
* material.wave_amplitude;
// Secondary wave - different frequency and direction
let secondary_dir = vec2<f32>(-material.wind_direction.y, material.wind_direction.x);
let secondary_phase = dot(vec2<f32>(position.x, position.z), secondary_dir);
let secondary_wave = sin(secondary_phase * material.wave_frequency * 1.5 - time * 2.0)
* material.wave_amplitude * 0.5;
// Tertiary wave - high frequency detail
let detail_wave = sin(position.x * material.wave_frequency * 4.0 - time * 5.0)
* material.wave_amplitude * 0.25;
// Combine waves
let wave_displacement = (primary_wave + secondary_wave + detail_wave) * edge_influence;
// Apply in wind direction and upward (flags billow up in wind)
displacement.x += material.wind_direction.x * wave_displacement * material.wind_strength;
displacement.y += abs(wave_displacement) * material.wind_strength * 0.5; // Billow upward
displacement.z += material.wind_direction.y * wave_displacement * material.wind_strength;
return displacement;
}
// Add turbulent motion using noise
fn calculate_turbulence(
position: vec3<f32>,
uv: vec2<f32>,
time: f32,
) -> vec3<f32> {
// Sample noise texture with animation
let noise_uv = uv * 2.0 + vec2<f32>(time * 0.1, time * 0.05);
let noise_value = sample_fbm_noise(noise_uv, 3u);
// Edge influence - more turbulence at free edge
let edge_influence = smoothstep(0.0, 1.0, uv.x);
// Create turbulent displacement
var turbulence = vec3<f32>(0.0);
// Sample noise at different positions for each axis
let noise_x = sample_fbm_noise(noise_uv + vec2<f32>(0.0, 0.0), 2u);
let noise_y = sample_fbm_noise(noise_uv + vec2<f32>(1.0, 0.0), 2u);
let noise_z = sample_fbm_noise(noise_uv + vec2<f32>(0.0, 1.0), 2u);
turbulence.x = noise_x * material.turbulence_strength * edge_influence;
turbulence.y = noise_y * material.turbulence_strength * edge_influence * 0.5;
turbulence.z = noise_z * material.turbulence_strength * edge_influence;
return turbulence;
}
// Apply gravity effect - flag droops when not held by wind
fn calculate_gravity_effect(
position: vec3<f32>,
uv: vec2<f32>,
wind_strength: f32,
) -> vec3<f32> {
// Edge influence - more droop at free edge
let edge_influence = uv.x * uv.x; // Quadratic for more natural droop
// Gravity is counteracted by wind
let effective_gravity = material.gravity_strength * (1.0 - wind_strength * 0.5);
// Droop downward
let droop = -edge_influence * effective_gravity;
return vec3<f32>(0.0, droop, 0.0);
}
// Calculate ripples that travel along the flag surface
fn calculate_surface_ripples(
position: vec3<f32>,
uv: vec2<f32>,
time: f32,
) -> vec3<f32> {
// Ripples travel from pole to edge
let ripple_phase = uv.x * 10.0 - time * 4.0;
let ripple = sin(ripple_phase) * 0.02;
// Vertical waves
let vertical_phase = uv.y * 8.0 - time * 3.0;
let vertical_ripple = sin(vertical_phase) * 0.015;
// Apply ripples perpendicular to surface
return vec3<f32>(0.0, ripple + vertical_ripple, 0.0);
}
// Calculate perturbed normal for proper lighting
fn calculate_flag_normal(
position: vec3<f32>,
original_normal: vec3<f32>,
uv: vec2<f32>,
time: f32,
) -> vec3<f32> {
// Estimate surface gradient from wave function
let epsilon = 0.01;
// Sample displacement at nearby points
let center_disp = calculate_wind_displacement(position, uv, time);
let right_disp = calculate_wind_displacement(
position + vec3<f32>(epsilon, 0.0, 0.0),
uv + vec2<f32>(epsilon, 0.0),
time
);
let up_disp = calculate_wind_displacement(
position + vec3<f32>(0.0, epsilon, 0.0),
uv + vec2<f32>(0.0, epsilon),
time
);
// Calculate tangent vectors
let tangent_x = vec3<f32>(epsilon, 0.0, 0.0) + (right_disp - center_disp);
let tangent_y = vec3<f32>(0.0, epsilon, 0.0) + (up_disp - center_disp);
// Cross product gives approximate normal
let perturbed_normal = cross(tangent_x, tangent_y);
// Blend with original normal for stability
return normalize(mix(original_normal, perturbed_normal, 0.5));
}
@vertex
fn vertex(in: VertexInput) -> VertexOutput {
var out: VertexOutput;
// Start with original position
var displaced_position = in.position;
// Check if vertex is at the pole (left edge) - these should not move
let is_pinned = in.uv.x < 0.05; // Pin vertices near x=0
if !is_pinned {
// Calculate all displacement components
let wind_disp = calculate_wind_displacement(in.position, in.uv, material.time);
let turbulence_disp = calculate_turbulence(in.position, in.uv, material.time);
let gravity_disp = calculate_gravity_effect(in.position, in.uv, material.wind_strength);
let ripple_disp = calculate_surface_ripples(in.position, in.uv, material.time);
// Combine all displacements
var total_displacement = wind_disp + turbulence_disp + gravity_disp + ripple_disp;
// Apply stiffness - resist extreme deformation
let max_displacement = 1.0 / material.flag_stiffness;
let displacement_magnitude = length(total_displacement);
if displacement_magnitude > max_displacement {
total_displacement = normalize(total_displacement) * max_displacement;
}
// Apply displacement
displaced_position += total_displacement;
// Store displacement amount for fragment shader
out.displacement_amount = displacement_magnitude / max_displacement;
} else {
out.displacement_amount = 0.0;
}
// Calculate perturbed normal if not pinned
var final_normal = in.normal;
if !is_pinned {
final_normal = calculate_flag_normal(in.position, in.normal, in.uv, material.time);
}
// Transform to world space
let model = mesh_functions::get_world_from_local(in.instance_index);
let world_position = mesh_functions::mesh_position_local_to_world(
model,
vec4<f32>(displaced_position, 1.0)
);
let world_normal = mesh_functions::mesh_normal_local_to_world(
final_normal,
in.instance_index
);
// Transform to clip space
out.clip_position = position_world_to_clip(world_position.xyz);
out.world_position = world_position.xyz;
out.world_normal = normalize(world_normal);
out.uv = in.uv;
return out;
}
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
let normal = normalize(in.world_normal);
// Simple directional lighting
let light_dir = normalize(vec3<f32>(1.0, 1.0, 1.0));
let diffuse = max(0.0, dot(normal, light_dir)) * 0.7;
let ambient = 0.3;
// Base color - simple flag pattern
let stripe_frequency = 8.0;
let stripe_pattern = step(0.5, fract(in.uv.y * stripe_frequency));
let color1 = vec3<f32>(0.8, 0.2, 0.2); // Red
let color2 = vec3<f32>(0.9, 0.9, 0.9); // White
let base_color = mix(color1, color2, stripe_pattern);
// Add slight color variation based on displacement
let displacement_tint = vec3<f32>(1.0) * (1.0 + in.displacement_amount * 0.2);
// Combine lighting and color
let lit_color = base_color * displacement_tint * (ambient + diffuse);
return vec4<f32>(lit_color, 1.0);
}
The Rust Material (src/materials/d02_05_flag_simulation.rs)
This file defines the bridge between our Bevy app and the shader. It contains the FlagUniforms struct, which matches the shader's uniform block, and the FlagMaterial struct which implements Bevy's Material trait. Critically, it includes the specialize function override to disable backface culling, ensuring both sides of our flag are rendered. It also provides the helper function to generate the noise texture.
use bevy::pbr::MaterialPipelineKey;
use bevy::prelude::*;
use bevy::render::mesh::MeshVertexBufferLayoutRef;
use bevy::render::render_resource::{AsBindGroup, ShaderRef};
use bevy::render::render_resource::{RenderPipelineDescriptor, SpecializedMeshPipelineError};
use noise::{NoiseFn, Perlin};
mod uniforms {
#![allow(dead_code)]
use bevy::prelude::*;
use bevy::render::render_resource::ShaderType;
#[derive(ShaderType, Debug, Clone)]
pub struct FlagMaterial {
pub time: f32,
pub wind_strength: f32,
pub wind_direction: Vec2,
pub wave_frequency: f32,
pub wave_amplitude: f32,
pub turbulence_strength: f32,
pub gravity_strength: f32,
pub flag_stiffness: f32,
}
impl Default for FlagMaterial {
fn default() -> Self {
Self {
time: 0.0,
wind_strength: 1.0,
wind_direction: Vec2::new(1.0, 0.0),
wave_frequency: 2.0,
wave_amplitude: 0.3,
turbulence_strength: 0.15,
gravity_strength: 0.2,
flag_stiffness: 0.5,
}
}
}
}
pub use uniforms::FlagMaterial as FlagUniforms;
#[derive(Asset, TypePath, AsBindGroup, Debug, Clone)]
pub struct FlagMaterial {
#[uniform(0)]
pub uniforms: FlagUniforms,
#[texture(1)]
#[sampler(2)]
pub noise_texture: Handle<Image>,
}
impl Material for FlagMaterial {
fn vertex_shader() -> ShaderRef {
"shaders/d02_05_flag_simulation.wgsl".into()
}
fn fragment_shader() -> ShaderRef {
"shaders/d02_05_flag_simulation.wgsl".into()
}
fn specialize(
_pipeline: &bevy::pbr::MaterialPipeline<Self>,
descriptor: &mut RenderPipelineDescriptor,
_layout: &MeshVertexBufferLayoutRef,
_key: MaterialPipelineKey<Self>,
) -> Result<(), SpecializedMeshPipelineError> {
// Disable backface culling for double-sided rendering
// Flags are thin surfaces visible from both sides
descriptor.primitive.cull_mode = None;
Ok(())
}
}
// Helper function to generate a Perlin noise texture
pub fn generate_noise_texture(size: u32) -> Image {
let perlin = Perlin::new(42);
let mut data = Vec::with_capacity((size * size * 4) as usize);
for y in 0..size {
for x in 0..size {
let nx = x as f64 / size as f64;
let ny = y as f64 / size as f64;
// Sample Perlin noise, scaled so roughly four noise tiles span the texture
let noise_value = perlin.get([nx * 4.0, ny * 4.0]);
// Remap from [-1, 1] to [0, 255]
let byte_value = ((noise_value + 1.0) * 0.5 * 255.0) as u8;
// RGBA format
data.push(byte_value);
data.push(byte_value);
data.push(byte_value);
data.push(255);
}
}
let mut image = Image::new(
bevy::render::render_resource::Extent3d {
width: size,
height: size,
depth_or_array_layers: 1,
},
bevy::render::render_resource::TextureDimension::D2,
data,
bevy::render::render_resource::TextureFormat::Rgba8Unorm,
bevy::render::render_asset::RenderAssetUsages::default(),
);
// Set sampler to repeat - the default linear filtering is fine
image.sampler = bevy::image::ImageSampler::Descriptor(bevy::image::ImageSamplerDescriptor {
address_mode_u: bevy::image::ImageAddressMode::Repeat,
address_mode_v: bevy::image::ImageAddressMode::Repeat,
..Default::default()
});
image
}
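The texture above samples the noise at a single frequency. The "octaves" technique from earlier in this article extends naturally to texture generation: sum several noise layers, doubling the frequency and halving the amplitude each time (often called fractal Brownian motion). The sketch below is illustrative only; it uses a hypothetical hash-based value noise in place of the `noise` crate's Perlin so it stays self-contained.

```rust
// Illustrative sketch: summing noise "octaves" (fractal Brownian motion).
// `value_noise` is a stand-in hash-based value noise, not the Perlin
// noise from the `noise` crate used in generate_noise_texture.
fn value_noise(x: f64, y: f64) -> f64 {
    // Hash an integer lattice point to a pseudo-random value in roughly [-1, 1].
    let hash = |xi: i64, yi: i64| -> f64 {
        let mut h = xi.wrapping_mul(374761393) ^ yi.wrapping_mul(668265263);
        h = (h ^ (h >> 13)).wrapping_mul(1274126177);
        ((h & 0xffff) as f64 / 32767.5) - 1.0
    };
    let (xi, yi) = (x.floor() as i64, y.floor() as i64);
    let (fx, fy) = (x - x.floor(), y - y.floor());
    // Smoothstep-interpolate between the four surrounding lattice corners.
    let (sx, sy) = (fx * fx * (3.0 - 2.0 * fx), fy * fy * (3.0 - 2.0 * fy));
    let top = hash(xi, yi) * (1.0 - sx) + hash(xi + 1, yi) * sx;
    let bottom = hash(xi, yi + 1) * (1.0 - sx) + hash(xi + 1, yi + 1) * sx;
    top * (1.0 - sy) + bottom * sy
}

/// Sum `octaves` layers, doubling frequency and halving amplitude each time.
/// Because the amplitudes form a geometric series summing to less than 1,
/// the result stays roughly within [-1, 1].
fn fbm(x: f64, y: f64, octaves: u32) -> f64 {
    let mut total = 0.0;
    let mut amplitude = 0.5;
    let mut frequency = 1.0;
    for _ in 0..octaves {
        total += value_noise(x * frequency, y * frequency) * amplitude;
        frequency *= 2.0;
        amplitude *= 0.5;
    }
    total
}
```

You could swap `fbm` into the pixel loop above in place of the single `perlin.get` call to bake multi-scale detail directly into the texture.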
Don't forget to add it to src/materials/mod.rs:
// ... other materials
pub mod d02_05_flag_simulation;
The Demo Module (src/demos/d02_05_flag_simulation.rs)
The Bevy application is responsible for setting up the scene and controlling the simulation. The setup system creates the camera, light, a simple cylinder for the flagpole, and our flag entity. Crucially, it generates a highly subdivided plane mesh to ensure we have enough vertices for smooth deformation. The handle_input and update_ui systems allow us to interactively tweak the material's uniform values using the keyboard, providing real-time feedback on how each parameter affects the simulation.
use crate::materials::d02_05_flag_simulation::{FlagMaterial, FlagUniforms, generate_noise_texture};
use bevy::prelude::*;
use std::f32::consts::PI;
pub fn run() {
App::new()
.add_plugins(DefaultPlugins)
.add_plugins(MaterialPlugin::<FlagMaterial>::default())
.add_systems(Startup, setup)
.add_systems(Update, (update_time, handle_input, update_ui))
.run();
}
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<FlagMaterial>>,
mut standard_materials: ResMut<Assets<StandardMaterial>>,
mut images: ResMut<Assets<Image>>,
) {
// Generate noise texture
let noise_texture = images.add(generate_noise_texture(256));
// Create flag mesh (plane subdivided for smooth deformation)
let flag_mesh = create_flag_mesh(40, 20, 4.0, 2.0);
// Spawn flag
commands.spawn((
Mesh3d(meshes.add(flag_mesh)),
MeshMaterial3d(materials.add(FlagMaterial {
uniforms: FlagUniforms::default(),
noise_texture: noise_texture.clone(),
})),
Transform::from_xyz(0.0, 0.0, 0.0),
));
// Add a flag pole (simple cylinder)
commands.spawn((
Mesh3d(meshes.add(Cylinder::new(0.05, 3.0))),
MeshMaterial3d(standard_materials.add(StandardMaterial {
base_color: Color::srgb(0.3, 0.2, 0.1),
..default()
})),
Transform::from_xyz(-2.0, 0.0, 0.0),
));
// Lighting
commands.spawn((
DirectionalLight {
illuminance: 10000.0,
shadows_enabled: false,
..default()
},
Transform::from_rotation(Quat::from_euler(EulerRot::XYZ, -PI / 4.0, PI / 4.0, 0.0)),
));
// Camera
commands.spawn((
Camera3d::default(),
Transform::from_xyz(4.0, 2.0, 6.0).looking_at(Vec3::new(0.0, 0.0, 0.0), Vec3::Y),
));
// UI
commands.spawn((
Text::new(
"[W/S] Wind Strength | [A/D] Wind Direction | [Q/E] Wave Frequency\n\
[Z/X] Gravity | [C/V] Turbulence | [R] Reset\n\
\n\
Wind: 1.0 | Direction: → | Frequency: 2.0 | Gravity: 0.2 | Turbulence: 0.15",
),
Node {
position_type: PositionType::Absolute,
top: Val::Px(10.0),
left: Val::Px(10.0),
..default()
},
TextFont {
font_size: 16.0,
..default()
},
));
}
// Create a subdivided plane mesh for the flag
fn create_flag_mesh(width_segments: u32, height_segments: u32, width: f32, height: f32) -> Mesh {
use bevy::render::mesh::{Indices, PrimitiveTopology};
use bevy::render::render_asset::RenderAssetUsages;
let mut positions = Vec::new();
let mut normals = Vec::new();
let mut uvs = Vec::new();
let mut indices = Vec::new();
// Generate vertices
for y in 0..=height_segments {
for x in 0..=width_segments {
let u = x as f32 / width_segments as f32;
let v = y as f32 / height_segments as f32;
// Position centered at origin, extending in +X direction
let pos_x = (u - 0.5) * width;
let pos_y = (v - 0.5) * height;
let pos_z = 0.0;
positions.push([pos_x, pos_y, pos_z]);
normals.push([0.0, 0.0, 1.0]); // Face forward
uvs.push([u, v]);
}
}
// Generate indices
for y in 0..height_segments {
for x in 0..width_segments {
let quad_start = y * (width_segments + 1) + x;
// First triangle
indices.push(quad_start);
indices.push(quad_start + width_segments + 1);
indices.push(quad_start + 1);
// Second triangle
indices.push(quad_start + 1);
indices.push(quad_start + width_segments + 1);
indices.push(quad_start + width_segments + 2);
}
}
let mut mesh = Mesh::new(
PrimitiveTopology::TriangleList,
RenderAssetUsages::default(),
);
mesh.insert_attribute(Mesh::ATTRIBUTE_POSITION, positions);
mesh.insert_attribute(Mesh::ATTRIBUTE_NORMAL, normals);
mesh.insert_attribute(Mesh::ATTRIBUTE_UV_0, uvs);
mesh.insert_indices(Indices::U32(indices));
mesh
}
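A quick sanity check on the mesh topology above: a grid with W×H segments has (W+1)×(H+1) vertices and W×H quads, each emitting two triangles (six indices). The small helper below (hypothetical, not part of the demo) makes the arithmetic explicit.

```rust
/// Vertex and index counts for a plane of `w` x `h` segments,
/// matching the two loops in `create_flag_mesh`.
fn grid_counts(w: u32, h: u32) -> (u32, u32) {
    let vertices = (w + 1) * (h + 1); // one vertex per lattice point
    let indices = w * h * 6; // two triangles (6 indices) per quad
    (vertices, indices)
}
```

For the demo's 40×20 flag this gives 861 vertices and 4800 indices, which is why the `Indices::U32` variant is a safe default even though `U16` would fit here.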
fn update_time(time: Res<Time>, mut materials: ResMut<Assets<FlagMaterial>>) {
for (_, material) in materials.iter_mut() {
material.uniforms.time = time.elapsed_secs();
}
}
fn handle_input(
keyboard: Res<ButtonInput<KeyCode>>,
time: Res<Time>,
mut materials: ResMut<Assets<FlagMaterial>>,
) {
let delta = time.delta_secs();
for (_, material) in materials.iter_mut() {
// Wind strength
if keyboard.pressed(KeyCode::KeyW) {
material.uniforms.wind_strength = (material.uniforms.wind_strength + delta).min(3.0);
}
if keyboard.pressed(KeyCode::KeyS) {
material.uniforms.wind_strength = (material.uniforms.wind_strength - delta).max(0.0);
}
// Wind direction (rotate)
if keyboard.pressed(KeyCode::KeyA) {
let angle = material
.uniforms
.wind_direction
.y
.atan2(material.uniforms.wind_direction.x);
let new_angle = angle + delta;
material.uniforms.wind_direction = Vec2::new(new_angle.cos(), new_angle.sin());
}
if keyboard.pressed(KeyCode::KeyD) {
let angle = material
.uniforms
.wind_direction
.y
.atan2(material.uniforms.wind_direction.x);
let new_angle = angle - delta;
material.uniforms.wind_direction = Vec2::new(new_angle.cos(), new_angle.sin());
}
// Wave frequency
if keyboard.pressed(KeyCode::KeyQ) {
material.uniforms.wave_frequency =
(material.uniforms.wave_frequency - delta * 2.0).max(0.5);
}
if keyboard.pressed(KeyCode::KeyE) {
material.uniforms.wave_frequency =
(material.uniforms.wave_frequency + delta * 2.0).min(10.0);
}
// Gravity
if keyboard.pressed(KeyCode::KeyZ) {
material.uniforms.gravity_strength =
(material.uniforms.gravity_strength - delta * 0.5).max(0.0);
}
if keyboard.pressed(KeyCode::KeyX) {
material.uniforms.gravity_strength =
(material.uniforms.gravity_strength + delta * 0.5).min(1.0);
}
// Turbulence
if keyboard.pressed(KeyCode::KeyC) {
material.uniforms.turbulence_strength =
(material.uniforms.turbulence_strength - delta * 0.3).max(0.0);
}
if keyboard.pressed(KeyCode::KeyV) {
material.uniforms.turbulence_strength =
(material.uniforms.turbulence_strength + delta * 0.3).min(0.5);
}
// Reset
if keyboard.just_pressed(KeyCode::KeyR) {
*material = FlagMaterial {
uniforms: FlagUniforms::default(),
noise_texture: material.noise_texture.clone(),
};
}
}
}
fn update_ui(materials: Res<Assets<FlagMaterial>>, mut text_query: Query<&mut Text>) {
if !materials.is_changed() {
return;
}
if let Some((_, material)) = materials.iter().next() {
// Calculate wind direction as compass direction
let angle = material
.uniforms
.wind_direction
.y
.atan2(material.uniforms.wind_direction.x);
let angle_degrees = angle.to_degrees();
let direction_arrow = if angle_degrees.abs() <= 45.0 {
">"
} else if angle_degrees > 45.0 && angle_degrees < 135.0 {
"^"
} else if angle_degrees.abs() >= 135.0 {
"<"
} else {
"v"
};
for mut text in text_query.iter_mut() {
**text = format!(
"[W/S] Wind Strength | [A/D] Wind Direction | [Q/E] Wave Frequency\n\
[Z/X] Gravity | [C/V] Turbulence | [R] Reset\n\
\n\
Wind: {:.1} | Direction: {} ({:.0} deg) | Frequency: {:.1}\n\
Gravity: {:.2} | Turbulence: {:.2}",
material.uniforms.wind_strength,
direction_arrow,
angle_degrees,
material.uniforms.wave_frequency,
material.uniforms.gravity_strength,
material.uniforms.turbulence_strength,
);
}
}
}
Don't forget to add it to src/demos/mod.rs:
// ... other demos
pub mod d02_05_flag_simulation;
And register it in src/main.rs:
Demo {
number: "2.5",
title: "Advanced Vertex Displacement",
run: demos::d02_05_flag_simulation::run,
},
Important: You'll need to add the noise crate to your Cargo.toml:
[dependencies]
noise = "0.9"
Running the Demo
When you run the project, you'll see a flag attached to a pole, waving with a complex and natural-looking motion. Use the controls to experiment with the different forces acting upon it. This interactivity is key to developing an intuition for how these layered effects combine.
Controls
| Key(s) | Action | Effect |
| --- | --- | --- |
| W / S | Wind Strength | Increases or decreases the overall power of the wind. |
| A / D | Wind Direction | Rotates the direction the wind is blowing from. |
| Q / E | Wave Frequency | Changes the size of the main waves in the flag. |
| Z / X | Gravity | Increases or decreases the downward pull on the flag. |
| C / V | Turbulence | Adjusts the amount of chaotic, noisy flutter. |
| R | Reset | Returns all simulation parameters to their default values. |
What You're Seeing

Observe how the different layers of motion interact:
| Condition | What to Observe | Concept Illustrated |
| --- | --- | --- |
| Low Wind, High Gravity | The flag droops realistically, with only minor flutters. | calculate_gravity_effect dominates the final displacement. |
| High Wind, Low Turbulence | The flag makes large, smooth, rolling waves. | The multi-octave sine waves from calculate_wind_displacement are the primary movers. |
| High Wind, High Turbulence | The motion becomes much more chaotic and violent, with small ripples appearing all over the surface. | calculate_turbulence adds high-frequency noise, breaking up the smooth sine waves. |
| Any Mode | The left edge of the flag remains perfectly still, while the right edge moves the most dramatically. | UV-Based Logic. The is_pinned check and edge_influence scaling control the effect's strength across the surface. |
| Rotate the Camera | As the flag folds and curves, the lighting on its surface changes dynamically and correctly. | Normal Recalculation. calculate_flag_normal is successfully updating normals to match the new geometry. |
Key Takeaways
Complexity from Layers: The core principle of advanced displacement is creating complex results by adding simple layers together (e.g., waves + noise + gravity).
Noise is Essential for Realism: Procedural noise, sampled from a texture, is the key to breaking up repetitive patterns and adding organic, unpredictable motion.
Use UVs for Spatial Logic: UV coordinates are not just for textures. They are a powerful tool for controlling shader effects based on a vertex's position on the mesh surface (e.g., pinning the flag's edge).
Normals Must Be Updated: If you significantly displace vertices, you must also recalculate their normals to ensure the object's lighting remains correct.
Address Culling Issues: Be mindful of frustum culling for large displacements (NoFrustumCulling) and always disable backface culling (cull_mode = None) for thin, two-sided surfaces.
textureSampleLevel is Mandatory: Remember that you must use textureSampleLevel (not textureSample) when sampling textures in a vertex shader.
Optimize Intelligently: Keep expensive calculations on the CPU, use LODs for distant objects, and understand the trade-offs of branching in your shader code.
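The normal-recalculation takeaway can be sketched on the CPU: displace nearby points, build tangent vectors by finite differences, and take their cross product. The Rust function below is an illustrative analogue of what calculate_flag_normal does in WGSL; the height function passed in is a hypothetical stand-in for the flag's full displacement.

```rust
/// Recompute a normal for a displaced surface by finite differences.
/// `height` is any displacement function z = f(x, y); the caller supplies
/// a stand-in for the shader's wave displacement.
fn displaced_normal(height: impl Fn(f32, f32) -> f32, x: f32, y: f32) -> [f32; 3] {
    let eps = 0.01;
    // Central differences give the surface tangents along +X and +Y.
    let tangent = [2.0 * eps, 0.0, height(x + eps, y) - height(x - eps, y)];
    let bitangent = [0.0, 2.0 * eps, height(x, y + eps) - height(x, y - eps)];
    // Cross product tangent x bitangent points along the displaced +Z normal.
    let n = [
        tangent[1] * bitangent[2] - tangent[2] * bitangent[1],
        tangent[2] * bitangent[0] - tangent[0] * bitangent[2],
        tangent[0] * bitangent[1] - tangent[1] * bitangent[0],
    ];
    let len = (n[0] * n[0] + n[1] * n[1] + n[2] * n[2]).sqrt();
    [n[0] / len, n[1] / len, n[2] / len]
}
```

For a flat surface (zero displacement) this returns the original [0, 0, 1] normal; as the displacement grows, the normal tilts accordingly and lighting follows.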
What's Next?
You are now equipped with a powerful arsenal of techniques for manipulating geometry directly on the GPU. You can make surfaces breathe, ripple, twist, and wave in complex and believable ways.
We have spent this chapter shaping the canvas; now, it's time to learn how to paint on it. In the next phase of the curriculum, we will shift our focus from the vertex shader to the fragment shader, exploring the rich world of colors, procedural patterns, and lighting models.
Next up: 2.6 - Normal Vector Transformation
Quick Reference
Core Concepts for Effect Design
Layering Creates Complexity: The most important principle. Create rich, believable motion by adding simple, independent effects together (e.g., broad waves + fine noise + physical forces like gravity).
Use Frequencies for Detail: Stack waves or noise at different scales (octaves). Low frequencies create the large, main motion, while high frequencies add the small, realistic surface details.
Noise Breaks Repetition: Sine waves are predictable and repetitive. Use noise textures to introduce organic, chaotic, and non-repeating variations that make motion feel natural.
UVs for Spatial Control: Use a vertex's UV coordinates to control where on a mesh an effect is applied. This is the key to pinning one edge of a flag or making an effect fade in across a surface.
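The UV-pinning idea above can be sketched as a tiny Rust function mirroring what a vertex shader would do with uv.x: zero displacement at the pinned edge, ramping smoothly to full displacement at the free edge. The smoothstep shape is an illustrative choice, not necessarily the exact curve used in the demo shader.

```rust
/// Scale displacement by distance from the pinned edge (uv.x == 0.0).
/// Mirrors the edge_influence idea: 0 at the pole, 1 at the free edge.
fn edge_influence(u: f32) -> f32 {
    // smoothstep(0.0, 1.0, u): zero slope at both ends for a soft ramp.
    let t = u.clamp(0.0, 1.0);
    t * t * (3.0 - 2.0 * t)
}
```

Multiplying every displacement layer by this factor is what keeps the flag's left edge glued to the pole regardless of wind or turbulence settings.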
Essential Technical Rules
Recalculate Normals for Correct Lighting: If your shader displaces vertices, it must also calculate a new normal vector. If you don't, the lighting on your deformed object will be flat and incorrect.
Handle Culling for Correct Visibility:
For thin, two-sided objects (like flags or leaves), you must disable backface culling in your Rust Material (cull_mode = None), or the back will be invisible.
For very large displacements, you may need to add the NoFrustumCulling component in Rust to prevent the object from disappearing when it moves outside its original bounds.
Vertex Shaders Require textureSampleLevel: When sampling a texture in a vertex shader, you must use this specific function. The more common textureSample will not compile.