4.2 - Texture Filtering and Mipmapping

What We're Learning
You've learned how to sample textures in shaders using UV coordinates. Now comes a critical question: how exactly does the GPU calculate the color between pixels?
When you render a 3D wall, it's rare that one pixel on your screen aligns perfectly with one "pixel" (texel) on the texture image. The texture might be stretched across a huge wall, or shrunk onto a tiny object in the distance.
This is where texture filtering comes in. The choices you make here determine whether your game looks like a crisp retro arcade game, a blurry N64 title, or a modern high-end render.
In this article, you'll understand:
Texels vs. Pixels: The fundamental disconnect between image data and screen space.
Filtering Modes: The difference between Nearest, Linear, and Anisotropic filtering.
Mipmapping: How pre-calculating smaller versions of textures solves shimmering and aliasing.
The Sampler Resource: How to configure these settings in Bevy and WGSL.
Performance Trade-offs: Why "better" filtering isn't always free.
The Texture Sampling Problem
To understand filtering, we must distinguish between two types of "pixels":
Pixels: The physical dots of light on your monitor.
Texels (Texture Elements): The color data points stored in your texture image.
When the GPU runs your fragment shader, it has to assign a color to a screen pixel. It calculates a UV coordinate (e.g., 0.501, 0.501) and asks the texture for data.
There is almost never a 1:1 relationship.
Magnification: The texture is small, but the object is close. One texel covers many screen pixels. The GPU needs to fill the space between texels.
Minification: The texture is large, but the object is far away. One screen pixel covers many texels. The GPU needs to summarize multiple data points into one color.
Anisotropy: The surface is angled (like a floor). One screen pixel covers a trapezoid shape of texels - long in one direction, short in the other.
The Sampler is the GPU component that answers these questions based on the rules you provide.
Point Filtering (Nearest Neighbor)
The simplest rule is: "Just give me the single texel closest to my UV coordinate."
How It Works
Imagine a grid of colors. If your UV coordinate falls at texel position (1.1, 1.1), the GPU snaps to the nearest texel center (1, 1) and returns that exact color, discarding the fractional offset entirely.
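Conceptually, nearest sampling is just a rounding step. Here is a minimal CPU-side sketch in plain Rust (the `Texture` struct here is hypothetical, not a Bevy type):

```rust
/// Hypothetical CPU-side texture: a flat RGBA8 buffer plus dimensions.
struct Texture {
    width: u32,
    height: u32,
    data: Vec<[u8; 4]>, // one RGBA texel per entry, row-major
}

impl Texture {
    /// Nearest-neighbor sample: snap the UV to the containing texel.
    fn sample_nearest(&self, u: f32, v: f32) -> [u8; 4] {
        // Map UV [0,1] to texel space, truncate, and clamp to the edge.
        let x = ((u * self.width as f32) as u32).min(self.width - 1);
        let y = ((v * self.height as f32) as u32).min(self.height - 1);
        self.data[(y * self.width + x) as usize]
    }
}

fn main() {
    // 2x2 texture: red, green / blue, white
    let tex = Texture {
        width: 2,
        height: 2,
        data: vec![
            [255, 0, 0, 255],
            [0, 255, 0, 255],
            [0, 0, 255, 255],
            [255, 255, 255, 255],
        ],
    };
    // UV (0.3, 0.3) lands inside the top-left texel -> red
    assert_eq!(tex.sample_nearest(0.3, 0.3), [255, 0, 0, 255]);
    // UV (0.9, 0.2) lands inside the top-right texel -> green
    assert_eq!(tex.sample_nearest(0.9, 0.2), [0, 255, 0, 255]);
    println!("nearest sampling ok");
}
```

Note that no blending ever happens: whatever texel the coordinate lands in is returned verbatim, which is exactly why this mode preserves pixel art and data textures.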
When To Use It
In Bevy, this is called ImageFilterMode::Nearest.
✓ Pixel Art: Essential for keeping sprites crisp.
✓ Minecraft-style Voxel Games: Preserves the blocky aesthetic.
✓ Data Textures: If your texture stores non-color data (like object IDs), you never want to blend values (e.g., blending ID 1 and ID 5 to get ID 3 is wrong).
Visual Result
Up close, the texture looks blocky. Each texel appears as a distinct square.
Linear Filtering (Bilinear)
The standard rule for 3D graphics is: "Look at the four closest texels and blend them together."
How It Works
If your UV coordinate is (1.5, 1.5), the GPU grabs the texels at (1,1), (2,1), (1,2), and (2,2). It then performs a weighted average (Linear Interpolation, or "Lerp") based on how close the coordinate is to each center.
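The same idea can be sketched on the CPU. This std-only Rust snippet works directly in texel space on a hypothetical grayscale grid: two horizontal lerps, then one vertical lerp:

```rust
/// Linear interpolation between two values.
fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

/// Bilinear filtering in texel space: blend the four texels surrounding
/// the sample point, weighted by its fractional position.
/// `texels` is a hypothetical row-major grayscale grid.
fn sample_bilinear(texels: &[f32], width: usize, x: f32, y: f32) -> f32 {
    let x0 = x.floor() as usize;
    let y0 = y.floor() as usize;
    let tx = x - x0 as f32; // horizontal blend weight
    let ty = y - y0 as f32; // vertical blend weight
    let get = |px: usize, py: usize| texels[py * width + px];
    // Blend horizontally on the top and bottom rows, then vertically.
    let top = lerp(get(x0, y0), get(x0 + 1, y0), tx);
    let bottom = lerp(get(x0, y0 + 1), get(x0 + 1, y0 + 1), tx);
    lerp(top, bottom, ty)
}

fn main() {
    // 2x2 grid: left column 0.0, right column 1.0
    let texels = [0.0, 1.0, 0.0, 1.0];
    // Exactly between all four texels: average of (0, 1, 0, 1) = 0.5
    assert_eq!(sample_bilinear(&texels, 2, 0.5, 0.5), 0.5);
    // Closer to the left column: 25% of the way toward 1.0
    assert_eq!(sample_bilinear(&texels, 2, 0.25, 0.5), 0.25);
    println!("bilinear ok");
}
```

The GPU's texture units do this same math in dedicated hardware, which is why bilinear filtering is effectively free on modern chips.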
When To Use It
In Bevy, this is ImageFilterMode::Linear.
✓ Realistic 3D: Almost everything in a standard game (skin, rocks, metal) uses this.
✓ UI Elements: Ensures smooth edges on text and icons.
Visual Result
Up close, the texture looks smooth and blurry. You can't distinguish individual texels.
Mipmapping: Solving the Distance Problem
Linear filtering works great when the texture is close (magnification). It fails miserably when the texture is far away (minification).
Imagine a high-resolution brick wall texture (1024x1024) rendered on an object that is only 10 pixels wide on screen. Each screen pixel effectively covers a block of roughly 100x100 texels. The GPU can't average thousands of texels for every pixel - that would be incredibly slow. Instead, linear filtering just samples the 4 texels nearest the sample point and ignores the rest.
As the camera moves, which 4 texels get picked changes drastically from frame to frame, causing the surface to sparkle, shimmer, and produce jagged interference patterns called Moiré patterns.
The Solution: Mipmaps
Mipmaps (from the Latin multum in parvo, "much in little") are a sequence of progressively smaller versions of the main image.
Level 0: Original (1024x1024)
Level 1: 512x512
Level 2: 256x256
...
Level N: 1x1
When rendering, the GPU calculates how "dense" the UVs are.
If the object is close, it reads from Level 0.
If the object is far, it reads from Level 3 or 4.
This effectively pre-blends the distant pixels, eliminating shimmering and improving performance (because the GPU reads smaller chunks of memory).
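The chain sizes and the memory cost are easy to verify: the extra levels form the geometric series 1/4 + 1/16 + ..., which converges to 1/3 of the base level. A quick std-only Rust check:

```rust
/// Dimensions of each level in the mip chain of a square power-of-two texture.
fn mip_chain(base: u32) -> Vec<u32> {
    let mut sizes = vec![base];
    let mut s = base;
    while s > 1 {
        s /= 2;
        sizes.push(s);
    }
    sizes
}

/// Memory overhead of the full chain relative to level 0 alone.
fn mip_overhead(base: u32) -> f64 {
    let total: u64 = mip_chain(base).iter().map(|&s| (s as u64) * (s as u64)).sum();
    let base_texels = (base as u64) * (base as u64);
    (total - base_texels) as f64 / base_texels as f64
}

fn main() {
    let chain = mip_chain(1024);
    assert_eq!(chain.len(), 11); // levels 0..=10, down to 1x1
    assert_eq!(chain[0], 1024);
    assert_eq!(chain[10], 1);
    // The series 1/4 + 1/16 + ... converges to 1/3: ~33% extra memory.
    let pct = mip_overhead(1024) * 100.0;
    assert!((pct - 33.33).abs() < 0.1);
    println!("1024x1024 chain: {} levels, +{:.1}% memory", chain.len(), pct);
}
```

This ~33% figure is why enabling mipmaps is almost always worth it: a third more memory buys stable, shimmer-free minification and faster reads.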
Trilinear Filtering
When an object moves from a distance requiring Mip Level 1 to a distance requiring Mip Level 2, you might see a visible "pop" or line where the sharpness changes.
Trilinear Filtering solves this by blending between the mip levels. It samples Level 1 (bilinear) and Level 2 (bilinear), then blends those two results together.
In Bevy, you control this with the mipmap_filter field in the sampler descriptor.
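Stripped of the texture reads themselves, the trilinear blend is a single lerp between the two bilinear results, weighted by the fractional part of the computed level. A std-only sketch:

```rust
/// Linear interpolation between two values.
fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

/// Trilinear filtering, sketched: given the (bilinear) results from the two
/// nearest mip levels and a fractional level-of-detail, blend between them.
/// An `lod` of 1.25 means "25% of the way from mip 1 to mip 2".
fn trilinear(sample_floor_mip: f32, sample_ceil_mip: f32, lod: f32) -> f32 {
    lerp(sample_floor_mip, sample_ceil_mip, lod.fract())
}

fn main() {
    // Mip 1 returned 0.8, mip 2 returned 0.4, computed LOD is 1.25:
    // result is 75% of mip 1 plus 25% of mip 2 = 0.7.
    let c = trilinear(0.8, 0.4, 1.25);
    assert!((c - 0.7).abs() < 1e-6);
    println!("trilinear ok");
}
```

Because the blend weight moves continuously with distance, the visible "pop" between mip levels disappears.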
Anisotropic Filtering
Standard Linear/Trilinear filtering assumes the "footprint" of a pixel is a square. But for a floor stretching into the distance, a screen pixel corresponds to a long, thin trapezoid of texels.
If you use standard filtering on a floor, it will look excessively blurry in the distance because the GPU is sampling a square region that includes data "from the sides" that shouldn't be there, while missing data "from the back" that should be included.
Anisotropic Filtering takes multiple samples along the angle of the surface to recover detail on oblique angles. Bevy exposes this via anisotropy_clamp (typically set to 16 for high quality).
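Hardware implementations vary and the exact algorithms are proprietary, but a common mental model is: take extra samples along the footprint's long axis, and pick the mip level from its short axis so detail is preserved. A rough std-only Rust sketch of that model (the function and its return shape are illustrative, not a real API):

```rust
/// A rough model of anisotropic sampling (real hardware varies): measure the
/// pixel footprint along both screen axes, take extra samples along the longer
/// axis, and pick the mip level from the *shorter* axis so detail is kept.
/// `dx`/`dy` are the UV derivatives scaled to texel space.
fn anisotropic_plan(dx: (f32, f32), dy: (f32, f32), max_aniso: f32) -> (u32, f32) {
    let len_x = (dx.0 * dx.0 + dx.1 * dx.1).sqrt();
    let len_y = (dy.0 * dy.0 + dy.1 * dy.1).sqrt();
    let major = len_x.max(len_y);
    let minor = len_x.min(len_y).max(1e-6);
    // Ratio of the footprint's long axis to its short axis, clamped
    // by the anisotropy limit (e.g. anisotropy_clamp in Bevy).
    let aniso = (major / minor).min(max_aniso);
    let samples = aniso.ceil() as u32;
    // Mip level based on the minor axis (plain trilinear would use the major
    // axis, which is what makes oblique surfaces look blurry).
    let mip_level = minor.max(1.0).log2();
    (samples, mip_level)
}

fn main() {
    // A floor at a grazing angle: footprint 16 texels long, 2 texels wide.
    let (samples, mip) = anisotropic_plan((16.0, 0.0), (0.0, 2.0), 16.0);
    assert_eq!(samples, 8); // 16/2 = 8 samples along the long axis
    assert!((mip - 1.0).abs() < 1e-6); // mip from the 2-texel minor axis
    println!("anisotropic plan ok");
}
```

This also explains the cost: each of those samples is itself a (bi/tri)linear fetch, which is where the "up to 128 fetches" figure below comes from.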
Configuring Samplers in Bevy
In Bevy, the texture data and the "rules for reading it" (the Sampler) are bundled together in the Image asset. By default, Bevy sets up images with Linear filtering and Repeat address mode, which is good for general 3D use.
To change this, you modify the sampler field of the Image asset using an ImageSamplerDescriptor.
The ImageSamplerDescriptor Struct
This struct is your control panel for texture quality.
use bevy::image::{ImageSampler, ImageSamplerDescriptor, ImageFilterMode};
// 1. Pixel Art / Retro Style
// Sharp pixels, no blending
let pixel_art_sampler = ImageSampler::Descriptor(ImageSamplerDescriptor {
mag_filter: ImageFilterMode::Nearest, // When close (magnified)
min_filter: ImageFilterMode::Nearest, // When far (minified)
mipmap_filter: ImageFilterMode::Nearest, // Sharp transitions between mips
..default()
});
// 2. Standard 3D (Trilinear)
// Smooth blending everywhere
let standard_sampler = ImageSampler::Descriptor(ImageSamplerDescriptor {
mag_filter: ImageFilterMode::Linear,
min_filter: ImageFilterMode::Linear,
mipmap_filter: ImageFilterMode::Linear, // Smooth transitions between mips
..default()
});
// 3. High Quality Ground (Anisotropic)
// Crisp details at oblique angles
let ground_sampler = ImageSampler::Descriptor(ImageSamplerDescriptor {
// Anisotropy is an integer (usually 1, 2, 4, 8, or 16)
anisotropy_clamp: 16,
// We still use Linear filtering for the base colors
mag_filter: ImageFilterMode::Linear,
min_filter: ImageFilterMode::Linear,
mipmap_filter: ImageFilterMode::Linear,
..default()
});
Applying Samplers in Systems
Since textures are assets, you typically modify them in a system after they have loaded, or configure them via ImageLoaderSettings if you are loading them manually.
For the purpose of our custom shaders, here is how you might configure a texture in a startup system:
fn configure_textures(
mut images: ResMut<Assets<Image>>,
my_texture_handle: Res<MyTextureHandle>, // Assuming you stored the handle
) {
if let Some(image) = images.get_mut(&my_texture_handle.0) {
image.sampler = ImageSampler::Descriptor(ImageSamplerDescriptor {
mag_filter: ImageFilterMode::Nearest,
min_filter: ImageFilterMode::Nearest,
mipmap_filter: ImageFilterMode::Nearest,
..default()
});
}
}
Note: If you change a sampler on an existing image, Bevy automatically updates the GPU resource for you.
Performance vs. Quality
You might be tempted to just set anisotropy_clamp: 16 and Linear on everything. Why not?
The Cost of Filtering
Every time your shader calls textureSample(), the GPU performs memory lookups.
Nearest (Point): 1 memory fetch. The GPU grabs exactly one texel. This is the fastest possible operation.
Bilinear: 4 memory fetches. The GPU grabs the four surrounding texels and blends them.
Trilinear: 8 memory fetches. The GPU grabs 4 texels from Mip Level X and 4 from Mip Level X+1.
Anisotropic (16x): Up to 128 memory fetches (in theory, though hardware is highly optimized). It takes many samples along the viewing angle.
Hardware Reality
Modern GPUs (even on mobile phones) are incredibly optimized for Bilinear and Trilinear filtering. The hardware has dedicated circuits ("Texture Units") to do this math for free. You will rarely see a frame rate drop going from Bilinear to Trilinear.
However, Anisotropic filtering hits memory bandwidth. Because it fetches so much data per pixel, using 16x Anisotropic on every surface can slow down games on lower-end hardware or mobile devices.
Recommended Settings
| Surface Type | Filtering Mode | Anisotropy | Why? |
|---|---|---|---|
| UI / 2D Sprites | Linear or Nearest | 1 (Off) | Viewed flat; anisotropy does nothing. |
| Walls / Props | Trilinear | 1 or 4 | Usually viewed head-on; moderate anisotropy is fine. |
| Ground / Floors | Trilinear | 16 | Viewed at steep angles; needs max anisotropy. |
| Pixel Art | Nearest | 1 (Off) | Blending destroys the art style. |
Common Pitfall: Filtering at the Edge
Understanding filtering explains one of the most annoying bugs in graphics programming: Texture Bleeding.
Imagine you have a texture atlas (a sprite sheet) where multiple images are packed next to each other. You want to display just one sprite. You calculate your UVs perfectly. But on screen, you see a faint line of color from the neighboring sprite at the edge of your character.
Why does this happen?
It's Linear filtering's fault.
When the GPU renders a pixel at the very edge of your sprite (e.g., UV 0.25), Linear filtering asks for the "neighboring" texels to blend with.
If your UV is 0.25, the filter might look at 0.249 and 0.251.
If 0.251 falls inside the next sprite in the atlas, the GPU happily blends that color in.
The Fixes:
Padding: Add empty (transparent) space between sprites in your atlas.
Point Filtering: Switch to Nearest filtering. Since it doesn't blend, it never looks at the neighbor.
Address Modes: If it's a single texture (not an atlas), setting AddressMode::ClampToEdge ensures that when the filter hits the edge (1.0), it doesn't try to wrap around to 0.0 to find a blending partner.
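A fourth fix, common in atlas tooling, is to inset each sprite's UV rectangle by half a texel, so the bilinear footprint can never cross the sprite boundary. A std-only sketch (the atlas layout here is hypothetical):

```rust
/// Shrink a sprite's UV rectangle by half a texel on each side so that
/// bilinear filtering never reaches into the neighboring sprite.
/// Returns (u_min, v_min, u_max, v_max).
fn inset_half_texel(
    sprite_px: (u32, u32, u32, u32), // x, y, width, height in the atlas
    atlas_size: (u32, u32),
) -> (f32, f32, f32, f32) {
    let (x, y, w, h) = sprite_px;
    let (aw, ah) = (atlas_size.0 as f32, atlas_size.1 as f32);
    let half_u = 0.5 / aw; // half a texel, in UV units
    let half_v = 0.5 / ah;
    (
        x as f32 / aw + half_u,
        y as f32 / ah + half_v,
        (x + w) as f32 / aw - half_u,
        (y + h) as f32 / ah - half_v,
    )
}

fn main() {
    // A 32x32 sprite at (32, 0) in a 128x128 atlas.
    let (u0, v0, u1, v1) = inset_half_texel((32, 0, 32, 32), (128, 128));
    // Naive UVs would be 0.25..0.5; the inset pulls each edge in by 1/256.
    assert_eq!(u0, 0.25 + 1.0 / 256.0);
    assert_eq!(u1, 0.5 - 1.0 / 256.0);
    assert_eq!(v0, 1.0 / 256.0);
    assert_eq!(v1, 0.25 - 1.0 / 256.0);
    println!("half-texel inset ok");
}
```

The trade-off: you lose half a texel of image on each edge, so this works best for sprites with transparent or uniform borders.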
Advanced: Manual Mip Control
Usually, you want the GPU to calculate mip levels automatically. But sometimes, you need to take control.
1. Fine-Tuning Sharpness (LOD Bias)
Sometimes, the GPU plays it too safe. To prevent aliasing, it might switch to a lower-resolution mipmap too early, causing a texture to look blurry when viewed at an angle.
You can force the GPU to "upscale" slightly - using a higher-resolution mip level than it thinks it needs - by applying a LOD Bias.
// WGSL
// textureSampleBias(texture, sampler, uv, bias)
// A negative bias forces a sharper (higher res) mip level.
// -1.0 means "subtract 1 from the computed level" (a lower level number = higher resolution)
let crisp_color = textureSampleBias(my_texture, my_sampler, in.uv, -1.0);
Negative Bias (-0.5 to -1.0): Makes textures sharper/crisper. Great for detailed ground textures, but can introduce shimmering (aliasing) if pushed too far.
Positive Bias (+1.0): Makes textures softer. Useful for reducing noise on very high-frequency patterns.
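The bias itself is trivial arithmetic: it is added to the computed level before the result is clamped to the available mip chain. A std-only sketch:

```rust
/// LOD bias, sketched: the bias is simply added to the GPU's computed
/// mip level, and the sum is clamped to the available mip chain.
fn biased_mip_level(computed_lod: f32, bias: f32, max_level: f32) -> f32 {
    (computed_lod + bias).clamp(0.0, max_level)
}

fn main() {
    // The GPU computed level 2.4; a -1.0 bias selects the sharper level 1.4.
    let sharper = biased_mip_level(2.4, -1.0, 10.0);
    assert!((sharper - 1.4).abs() < 1e-6);
    // Near the camera the bias can't go below level 0 (full resolution).
    assert_eq!(biased_mip_level(0.3, -1.0, 10.0), 0.0);
    println!("lod bias ok");
}
```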
2. Reading Textures in the Vertex Shader
In 4.1, we mentioned that textureSample() is forbidden in Vertex Shaders. Now you know why.
textureSample() relies on derivatives (calculating how UVs change between neighboring pixels) to pick the right mip level.
Fragment Shader: Runs on pixels in groups (quads), so it knows about neighbors.
Vertex Shader: Runs on individual vertices. It has no idea how far away the camera is or how dense the pixels are.
Therefore, the GPU cannot calculate the mip level automatically. If you want to read a texture in a vertex shader (e.g., a heightmap for terrain displacement), you must specify the level explicitly:
@vertex
fn vertex(in: VertexInput) -> VertexOutput {
// ... setup ...
// Explicitly read from Mip Level 0 (Full resolution)
// Note: We typically use a dedicated sampler for this, or a shared one.
let height = textureSampleLevel(height_map, my_sampler, in.uv, 0.0).r;
// Displace vertex
let new_pos = in.position + vec3(0.0, height * 10.0, 0.0);
// ... output ...
}
Complete Example: The Filtering Comparator
We are going to build a tool that lets you toggle between different filtering modes in real-time. To make this comparison valid, we need a special texture: one that has high-frequency details (prone to aliasing) and a complete chain of Mipmaps.
Since we are generating the texture procedurally, we will also write a helper to generate the mipmaps on the CPU. This ensures that Trilinear and Anisotropic filtering have data to work with.
Our Goal
Procedural Texture: Generate a "torture test" pattern (checkerboards and concentric circles) with full mipmaps.
Custom Shader: A shader that can either sample the texture normally or visualize the calculated Mip Level.
Comparator UI: A system to hot-swap the Sampler configuration on the fly.
The Shader (assets/shaders/d04_02_filtering_comparison.wgsl)
This shader handles the sampling. Note the "Mip Level Visualization" mode (Mode 5), which uses the dpdx and dpdy derivatives to calculate exactly which mip level the GPU would select for a given pixel.
#import bevy_pbr::mesh_functions
#import bevy_pbr::view_transformations::position_world_to_clip
struct FilteringMaterial {
display_mode: u32,
base_color: vec4<f32>,
}
@group(2) @binding(0)
var<uniform> material: FilteringMaterial;
@group(2) @binding(1)
var base_texture: texture_2d<f32>;
@group(2) @binding(2)
var base_sampler: sampler;
struct VertexInput {
@builtin(instance_index) instance_index: u32,
@location(0) position: vec3<f32>,
@location(1) normal: vec3<f32>,
@location(2) uv: vec2<f32>,
}
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) world_position: vec3<f32>,
@location(1) world_normal: vec3<f32>,
@location(2) uv: vec2<f32>,
}
@vertex
fn vertex(in: VertexInput) -> VertexOutput {
var out: VertexOutput;
let model = mesh_functions::get_world_from_local(in.instance_index);
let world_position = mesh_functions::mesh_position_local_to_world(
model,
vec4<f32>(in.position, 1.0)
);
// Pass data to fragment shader
out.clip_position = position_world_to_clip(world_position.xyz);
out.world_position = world_position.xyz;
out.world_normal = mesh_functions::mesh_normal_local_to_world(in.normal, in.instance_index);
out.uv = in.uv;
return out;
}
@fragment
fn fragment(in: VertexOutput) -> @location(0) vec4<f32> {
var final_color: vec4<f32>;
// MODE 5: Mip Level Visualization
if (material.display_mode == 5u) {
// Get texture dimensions to calculate accurate derivatives
let dims = vec2<f32>(textureDimensions(base_texture));
// Calculate the rate of change of UVs relative to texture size
let dx = dpdx(in.uv * dims);
let dy = dpdy(in.uv * dims);
// The max length of the derivative vector determines the mip level
let delta_max_sq = max(dot(dx, dx), dot(dy, dy));
let mip_level = 0.5 * log2(delta_max_sq);
// Color code based on level: Red (0) -> Yellow -> Green -> Blue (High)
// `var` (not `let`) so the array can be indexed with a runtime value
var colors = array<vec3<f32>, 6>(
vec3(1.0, 0.0, 0.0), // Level 0: Red
vec3(1.0, 0.5, 0.0), // Level 1: Orange
vec3(1.0, 1.0, 0.0), // Level 2: Yellow
vec3(0.0, 1.0, 0.0), // Level 3: Green
vec3(0.0, 1.0, 1.0), // Level 4: Cyan
vec3(0.0, 0.0, 1.0) // Level 5+: Blue
);
let idx = u32(clamp(mip_level, 0.0, 5.0));
final_color = vec4<f32>(colors[idx], 1.0);
}
// STANDARD MODES (0-4)
else {
final_color = textureSample(base_texture, base_sampler, in.uv) * material.base_color;
}
// Apply simple directional lighting
let N = normalize(in.world_normal);
let L = normalize(vec3<f32>(1.0, 1.0, 0.5));
let diffuse = max(dot(N, L), 0.0);
let ambient = 0.2;
return vec4<f32>(final_color.rgb * (diffuse + ambient), final_color.a);
}
The Rust Material (src/materials/d04_02_filtering_comparison.rs)
This file includes a helper function create_test_texture.
Why is this function so long?
When you load an image in a format that embeds mipmaps (like KTX2 or DDS), Bevy uploads the full chain for you; a plain PNG ships only Level 0. Since we are creating a texture procedurally (pixel by pixel), we must generate the mipmaps ourselves. We do this by repeatedly downscaling the image (averaging 4 pixels into 1) until we reach a 1x1 size.
Without this step, "Trilinear" filtering would look exactly like "Bilinear" because there would be no lower-resolution levels to blend with!
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssetUsages;
use bevy::render::render_resource::{
AsBindGroup, Extent3d, ShaderRef, TextureDimension, TextureFormat,
};
#[derive(Asset, TypePath, AsBindGroup, Debug, Clone)]
pub struct FilteringMaterial {
#[uniform(0)]
pub display_mode: u32,
#[uniform(0)]
pub base_color: LinearRgba,
#[texture(1)]
#[sampler(2)]
pub texture: Handle<Image>,
}
impl Material for FilteringMaterial {
// The WGSL file defines both entry points, so register both:
fn vertex_shader() -> ShaderRef {
"shaders/d04_02_filtering_comparison.wgsl".into()
}
fn fragment_shader() -> ShaderRef {
"shaders/d04_02_filtering_comparison.wgsl".into()
}
}
/// Generates a high-frequency test pattern with full mipmaps.
/// This ensures Trilinear and Anisotropic filtering work correctly.
pub fn create_test_texture(size: u32) -> Image {
// 1. Generate Base Level (Level 0)
let mut pixels = Vec::with_capacity((size * size * 4) as usize);
for y in 0..size {
for x in 0..size {
let (r, g, b) = generate_pattern_pixel(x, y, size);
pixels.extend_from_slice(&[r, g, b, 255]);
}
}
// 2. Setup Image
let mut image = Image::new(
Extent3d {
width: size,
height: size,
depth_or_array_layers: 1,
},
TextureDimension::D2,
pixels,
TextureFormat::Rgba8Unorm,
RenderAssetUsages::default(),
);
// 3. Manually Generate Mipmaps (Box Filter)
let mut current_width = size;
let mut current_height = size;
// FIX: Clone the inner Vec<u8> (data is Option<Vec<u8>> in Bevy 0.15+)
let mut current_data = image.data.clone().expect("Image created without data");
let mip_levels = (size as f32).log2().floor() as u32 + 1;
image.texture_descriptor.mip_level_count = mip_levels;
for _level in 1..mip_levels {
let next_width = current_width / 2;
let next_height = current_height / 2;
let mut next_data = Vec::with_capacity((next_width * next_height * 4) as usize);
for y in 0..next_height {
for x in 0..next_width {
// Average the 4 pixels from the previous level
let src_x = x * 2;
let src_y = y * 2;
let p1 = get_pixel(&current_data, current_width, src_x, src_y);
let p2 = get_pixel(&current_data, current_width, src_x + 1, src_y);
let p3 = get_pixel(&current_data, current_width, src_x, src_y + 1);
let p4 = get_pixel(&current_data, current_width, src_x + 1, src_y + 1);
next_data
.push(((p1[0] as u32 + p2[0] as u32 + p3[0] as u32 + p4[0] as u32) / 4) as u8);
next_data
.push(((p1[1] as u32 + p2[1] as u32 + p3[1] as u32 + p4[1] as u32) / 4) as u8);
next_data
.push(((p1[2] as u32 + p2[2] as u32 + p3[2] as u32 + p4[2] as u32) / 4) as u8);
next_data.push(255);
}
}
// FIX: Unwrap image.data to append the new mip level
if let Some(data) = &mut image.data {
data.extend_from_slice(&next_data);
}
// Prepare for next iteration
current_data = next_data;
current_width = next_width;
current_height = next_height;
}
image
}
fn get_pixel(data: &[u8], width: u32, x: u32, y: u32) -> [u8; 4] {
let idx = ((y * width + x) * 4) as usize;
[data[idx], data[idx + 1], data[idx + 2], data[idx + 3]]
}
fn generate_pattern_pixel(x: u32, y: u32, size: u32) -> (u8, u8, u8) {
let fx = x as f32;
let fy = y as f32;
// High frequency checkerboard (1x1)
let ultra_fine = (x + y) % 2 == 0;
// Larger checkerboard (8x8)
let fine_checker = ((x / 8) + (y / 8)) % 2 == 0;
// Concentric circles
let center = size as f32 / 2.0;
let dist = ((fx - center).powi(2) + (fy - center).powi(2)).sqrt();
let circles = (dist / 4.0) as u32 % 2 == 0;
let mut val = 40; // Base dark gray
if ultra_fine {
val += 40;
}
if fine_checker {
val += 60;
}
if circles {
val += 80;
}
// Color tinting
let r = val;
let g = if circles { val } else { val / 2 };
let b = if fine_checker { val } else { val / 3 };
(r, g, b)
}
Don't forget to register it in src/materials/mod.rs:
pub mod d04_02_filtering_comparison;
The Demo Module (src/demos/d04_02_filtering_comparison.rs)
This system sets up the scene and handles the user input. When you press keys 1-4, we generate a new ImageSamplerDescriptor and assign it to the image. Bevy handles updating the GPU resource automatically.
We also implement a simple Spherical Orbit Camera so you can easily inspect the ground plane from different angles.
use crate::materials::d04_02_filtering_comparison::{FilteringMaterial, create_test_texture};
use bevy::image::{ImageAddressMode, ImageFilterMode, ImageSampler, ImageSamplerDescriptor};
use bevy::pbr::MeshMaterial3d;
use bevy::prelude::*;
use std::f32::consts::PI;
#[derive(Component)]
struct Rotator;
#[derive(Component)]
struct OrbitCamera {
radius: f32,
pitch: f32,
yaw: f32,
focus: Vec3,
}
impl Default for OrbitCamera {
fn default() -> Self {
Self {
radius: 15.0,
pitch: 0.5,
yaw: 0.0,
focus: Vec3::ZERO,
}
}
}
#[derive(Resource)]
struct DemoState {
texture_handle: Handle<Image>,
current_mode: usize,
auto_rotate_obj: bool,
}
pub fn run() {
App::new()
.add_plugins(DefaultPlugins)
.add_plugins(MaterialPlugin::<FilteringMaterial>::default())
.add_systems(Startup, setup)
.add_systems(
Update,
(handle_input, rotate_objects, update_camera, update_ui),
)
.run();
}
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<FilteringMaterial>>,
mut images: ResMut<Assets<Image>>,
) {
// 1. Create and add texture
let mut image = create_test_texture(512);
// Start with Nearest (Point) filtering
image.sampler = ImageSampler::Descriptor(ImageSamplerDescriptor {
mag_filter: ImageFilterMode::Nearest,
min_filter: ImageFilterMode::Nearest,
mipmap_filter: ImageFilterMode::Nearest,
address_mode_u: ImageAddressMode::Repeat,
address_mode_v: ImageAddressMode::Repeat,
..default()
});
let texture_handle = images.add(image);
// 2. Create Material
let material = materials.add(FilteringMaterial {
display_mode: 0,
base_color: LinearRgba::WHITE,
texture: texture_handle.clone(),
});
// 3. Spawn Scene
// Large ground plane (shows Anisotropy best)
commands.spawn((
Mesh3d(meshes.add(Plane3d::default().mesh().size(50.0, 50.0))),
MeshMaterial3d(material.clone()),
Transform::from_xyz(0.0, 0.0, 0.0),
Rotator,
));
// Light
commands.spawn((
DirectionalLight {
illuminance: 10_000.0,
..default()
},
Transform::from_rotation(Quat::from_euler(EulerRot::XYZ, -1.0, 0.5, 0.0)),
));
// Camera
commands.spawn((
Camera3d::default(),
OrbitCamera::default(),
Transform::default(), // Will be set by update_camera system
));
// UI
commands.spawn((
Text::new(""),
Node {
position_type: PositionType::Absolute,
top: Val::Px(12.0),
left: Val::Px(12.0),
..default()
},
));
// Init State
commands.insert_resource(DemoState {
texture_handle,
current_mode: 1, // Start at mode 1
auto_rotate_obj: false, // Default to manual control so you can inspect
});
}
fn handle_input(
keyboard: Res<ButtonInput<KeyCode>>,
mut state: ResMut<DemoState>,
mut images: ResMut<Assets<Image>>,
mut materials: ResMut<Assets<FilteringMaterial>>,
mat_query: Query<&MeshMaterial3d<FilteringMaterial>>,
) {
let mut changed = false;
// Mode Switching
if keyboard.just_pressed(KeyCode::Digit1) {
state.current_mode = 1;
changed = true;
}
if keyboard.just_pressed(KeyCode::Digit2) {
state.current_mode = 2;
changed = true;
}
if keyboard.just_pressed(KeyCode::Digit3) {
state.current_mode = 3;
changed = true;
}
if keyboard.just_pressed(KeyCode::Digit4) {
state.current_mode = 4;
changed = true;
}
if keyboard.just_pressed(KeyCode::Digit5) {
state.current_mode = 5;
changed = true;
}
if keyboard.just_pressed(KeyCode::Space) {
state.auto_rotate_obj = !state.auto_rotate_obj;
}
if changed {
// Update Uniforms (For Mode 5 visualization)
for handle in &mat_query {
if let Some(mat) = materials.get_mut(&handle.0) {
mat.display_mode = if state.current_mode == 5 { 5 } else { 0 };
}
}
// Update Sampler (For Modes 1-4)
if let Some(image) = images.get_mut(&state.texture_handle) {
let desc = match state.current_mode {
1 => ImageSamplerDescriptor {
// Nearest
mag_filter: ImageFilterMode::Nearest,
min_filter: ImageFilterMode::Nearest,
mipmap_filter: ImageFilterMode::Nearest,
..default()
},
2 => ImageSamplerDescriptor {
// Bilinear
mag_filter: ImageFilterMode::Linear,
min_filter: ImageFilterMode::Linear,
mipmap_filter: ImageFilterMode::Nearest,
..default()
},
3 => ImageSamplerDescriptor {
// Trilinear
mag_filter: ImageFilterMode::Linear,
min_filter: ImageFilterMode::Linear,
mipmap_filter: ImageFilterMode::Linear,
..default()
},
4 => ImageSamplerDescriptor {
// Anisotropic
mag_filter: ImageFilterMode::Linear,
min_filter: ImageFilterMode::Linear,
mipmap_filter: ImageFilterMode::Linear,
anisotropy_clamp: 16,
..default()
},
_ => return,
};
let mut full_desc = desc;
full_desc.address_mode_u = ImageAddressMode::Repeat;
full_desc.address_mode_v = ImageAddressMode::Repeat;
image.sampler = ImageSampler::Descriptor(full_desc);
}
}
}
fn update_camera(
time: Res<Time>,
keyboard: Res<ButtonInput<KeyCode>>,
mut query: Query<(&mut Transform, &mut OrbitCamera)>,
) {
let dt = time.delta_secs();
for (mut transform, mut orbit) in &mut query {
// Manual Orbit Controls
if keyboard.pressed(KeyCode::ArrowLeft) {
orbit.yaw += 2.0 * dt;
}
if keyboard.pressed(KeyCode::ArrowRight) {
orbit.yaw -= 2.0 * dt;
}
if keyboard.pressed(KeyCode::ArrowUp) {
orbit.pitch += 1.0 * dt;
}
if keyboard.pressed(KeyCode::ArrowDown) {
orbit.pitch -= 1.0 * dt;
}
// Zoom
if keyboard.pressed(KeyCode::KeyW) {
orbit.radius -= 10.0 * dt;
}
if keyboard.pressed(KeyCode::KeyS) {
orbit.radius += 10.0 * dt;
}
// Clamp
orbit.pitch = orbit.pitch.clamp(0.05, PI / 2.0 - 0.05); // Don't go below ground or flip over
orbit.radius = orbit.radius.clamp(2.0, 50.0);
// Spherical to Cartesian conversion
// x = r * cos(pitch) * sin(yaw)
// y = r * sin(pitch)
// z = r * cos(pitch) * cos(yaw)
// Note: In Bevy Y is up. So we map Pitch to Y-height and Yaw to XZ plane.
let r_xz = orbit.radius * orbit.pitch.cos();
let x = r_xz * orbit.yaw.sin();
let y = orbit.radius * orbit.pitch.sin();
let z = r_xz * orbit.yaw.cos();
transform.translation = orbit.focus + Vec3::new(x, y, z);
transform.look_at(orbit.focus, Vec3::Y);
}
}
fn rotate_objects(
time: Res<Time>,
state: Res<DemoState>,
mut query: Query<&mut Transform, With<Rotator>>,
) {
if state.auto_rotate_obj {
for mut transform in &mut query {
transform.rotate_y(time.delta_secs() * 0.1);
}
}
}
fn update_ui(state: Res<DemoState>, mut query: Query<&mut Text>) {
let mode_text = match state.current_mode {
1 => "1: Point (Nearest) - Pixelated, shimmer in distance",
2 => "2: Bilinear - Smooth close, sharp mip transitions",
3 => "3: Trilinear - Smooth everywhere, blurry at angles",
4 => "4: Anisotropic 16x - Sharp at angles",
5 => "5: Mip Level Viz - Red(0) to Blue(5)",
_ => "",
};
let rotate_status = if state.auto_rotate_obj { "On" } else { "Off" };
for mut text in &mut query {
**text = format!(
"CONTROLS:\n\
[1-4] Set Filtering Mode\n\
[5] Visualize Mip Levels\n\
[Arrows] Orbit Camera\n\
[W/S] Zoom In/Out\n\
[Space] Rotate Floor: {}\n\
\n\
Current: {}",
rotate_status, mode_text
);
}
}
Don't forget to add it to src/demos/mod.rs:
pub mod d04_02_filtering_comparison;
And register it in src/main.rs:
Demo {
number: "4.2",
title: "Texture Filtering and Mipmapping",
run: demos::d04_02_filtering_comparison::run,
},
Running the Demo
When you run the demo, look specifically at the Ground Plane receding into the distance.
Controls
| Key | Mode | What to Look For |
|---|---|---|
| 1 | Point | Shimmering/Sparkling in the distance. The checkerboard looks like a chaotic noise field far away. |
| 2 | Bilinear | The shimmering stops, but you might see horizontal bands on the floor where the texture sharpness changes abruptly (Mip 0 vs Mip 1). |
| 3 | Trilinear | The bands disappear. The floor transitions smoothly from sharp to blurry, but becomes very blurry at a distance. |
| 4 | Anisotropic | Magic happens. The distant floor becomes sharp again without shimmering. |
| 5 | Viz | A heatmap showing which mip level is being used. Red is high-res, Blue is low-res. |
What You're Seeing
This demo proves why Mipmaps and Filtering are essential. Without them (Mode 1), the game looks broken and noisy. With basic filtering (Mode 3), it looks stable but blurry. With Anisotropic filtering (Mode 4), you get the quality of high-res textures with the stability of mipmaps.
Key Takeaways
Mipmaps are Mandatory: Always generate mipmaps for 3D textures. Without them, you get aliasing (noise) or have to rely on expensive supersampling.
Trilinear is the Standard: For most objects, Linear min/mag/mipmap filters are the correct choice.
Anisotropy for Ground: Use anisotropy_clamp: 16 for floors and terrain. It costs a bit more performance but massively improves visual clarity.
Sampler Objects: In Bevy/WGSL, the sampler is a distinct resource that tells the GPU how to read the texture data. You can swap samplers without reloading the texture.
What's Next?
We've covered how to sample a single texture. But real materials use multiple textures combined together, and they often need to tile or clamp at the edges.
In the next article, we will master Texture Coordinates (UVs) and Address Modes to create complex, scrolling, and tiling materials.
Next up: 4.3 - Texture Wrapping Modes
Quick Reference
1. Filtering Modes Cheat Sheet
| Mode | How it works | Visual Look | Best Used For |
|---|---|---|---|
| Nearest (Point) | Picks 1 closest pixel | Blocky, sharp edges | Pixel Art, voxel games, debugging |
| Bilinear | Blends 4 pixels | Smooth but blurry | 2D UI, Sprites, Particles |
| Trilinear | Blends 8 pixels (2 mip levels) | Smooth, no popping | Standard 3D objects (Props, Walls) |
| Anisotropic | Blends many pixels along slope | Sharp at oblique angles | Ground planes, Roads, Floors |
2. Understanding Mipmaps
What: A chain of progressively smaller versions of a texture (50%, 25%, 12.5%...).
Why: Prevents "shimmering" and aliasing when textures are far away.
Cost: Increases memory usage by ~33%.
Rule: Always enable for 3D objects. Disable only for 2D pixel art or non-color data textures.
3. Performance Hierarchy
From cheapest to most expensive (in terms of memory bandwidth):
Nearest: 1 Fetch (Fastest)
Bilinear: 4 Fetches
Trilinear: 8 Fetches
Anisotropic (16x): Up to 128 Fetches (Heavy on bandwidth, use selectively)
4. Bevy Configuration
The "Standard 3D" Sampler (Trilinear):
ImageSampler::Descriptor(ImageSamplerDescriptor {
mag_filter: ImageFilterMode::Linear, // Smooth up close
min_filter: ImageFilterMode::Linear, // Smooth far away
mipmap_filter: ImageFilterMode::Linear, // Smooth transitions between distances
..default()
})
The "High Quality Ground" Sampler (Anisotropic):
ImageSampler::Descriptor(ImageSamplerDescriptor {
mag_filter: ImageFilterMode::Linear,
min_filter: ImageFilterMode::Linear,
mipmap_filter: ImageFilterMode::Linear,
anisotropy_clamp: 16, // Range: 1 (Off) to 16 (Max)
..default()
})
The "Pixel Art" Sampler:
ImageSampler::Descriptor(ImageSamplerDescriptor {
mag_filter: ImageFilterMode::Nearest,
min_filter: ImageFilterMode::Nearest,
mipmap_filter: ImageFilterMode::Nearest,
..default()
})





