1.6 - Shader Attributes and Data Flow

What We're Learning
In the last article, you built a complete, working Material from scratch. In the process, you encountered a lot of special syntax starting with an @ symbol: @vertex, @fragment, @location, @builtin, @group, and @binding. These are called attributes, and they are the key to understanding the flow of data in a modern graphics pipeline.
If your variables are the "what" (the data itself), then attributes are the "where." They are the address labels we attach to our data, telling the GPU where it came from, where it's going, and how it should be treated. Mastering this address system is the final piece of the foundational puzzle. It demystifies the connection between your Bevy components, your Rust Material struct, and the variables inside your WGSL shader.
In this article, we'll dive deep into this data flow. We will systematically explore the attributes that control data - @location, @builtin, @group, and @binding - explaining their exact purpose and showing how they work together to create a seamless pipeline from your CPU all the way to the final colored pixel.
By the end of this article, you will understand:
- The purpose of WGSL's four main data flow attributes: @location, @builtin, @group, and @binding.
- How vertex attributes (@location) deliver mesh data to your vertex shader.
- How the GPU automatically provides special values via built-in attributes (@builtin).
- How uniforms and textures are organized and accessed using bind groups (@group/@binding).
- The complete data flow, visualized from a piece of Rust data to an interpolated value in a fragment shader.
The Attribute System: Address Labels for Your Data
WGSL uses a system of attributes - special keywords that begin with an @ symbol - to give the GPU precise instructions about the data in your shader.
The most effective way to understand these attributes is to think of a shader entry point (like @vertex or @fragment) as a highly constrained function call. The attributes are the strictly enforced calling convention and API contract that define how this function receives its inputs, how it returns its outputs, and how it accesses external resources.
For this system to work, every piece of data needs a clear, unambiguous role defined by an attribute.
The Four Main Attribute Types
Within this "shader as a function" model, the four primary attributes map to familiar programming concepts:
| Attribute | Analogy | Purpose |
| --- | --- | --- |
| @location(N) | Function Parameters & Return Values | For data passed directly from one stage to the next. The vertex shader's input struct is its parameter list. Its output struct is its return value, which becomes the input parameter for the fragment shader. |
| @builtin(NAME) | Runtime-Injected Context Variables | For special values provided by the execution environment (the GPU hardware), not by the calling function. Think of these as magical, read-only global variables that describe the current state of the pipeline. |
| @group(N) | A Global Resource "Module" | Organizes shared, global resources (uniforms, textures) that the function can access. @group(2) is the "module" dedicated to your material's resources. |
| @binding(N) | A Specific Export from that Module | Specifies the exact resource within a module. If @group(2) is the material module, @binding(0) is the specific "export" that gives you access to the uniform data buffer. |
Over the next few sections, we will explore each of these in detail. Understanding their distinct roles is the key to mastering the flow of data into and through your shaders.
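Before we take them one at a time, it helps to see all four attribute types side by side. The following is a minimal, illustrative WGSL sketch, not a working Bevy material: the struct fields and binding slots are assumptions chosen only to show where each attribute appears.

```wgsl
// Illustrative sketch: all four attribute types in one place.
struct Uniforms {
    color: vec4<f32>,
}

// External resource: "module" 2, "export" 0.
@group(2) @binding(0) var<uniform> material: Uniforms;

struct VsOut {
    @builtin(position) clip_position: vec4<f32>, // mandatory builtin output
    @location(0) uv: vec2<f32>,                  // custom inter-stage channel
}

@vertex
fn vertex(
    @location(0) position: vec3<f32>, // parameter: mesh attribute buffer 0
    @location(2) uv: vec2<f32>,       // parameter: mesh attribute buffer 2
) -> VsOut {
    var out: VsOut;
    out.clip_position = vec4<f32>(position, 1.0);
    out.uv = uv;
    return out;
}

@fragment
fn fragment(@location(0) uv: vec2<f32>) -> @location(0) vec4<f32> {
    return material.color; // reads the @group/@binding resource
}
```

Each of the sections below zooms in on one of these roles.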
@location: The Data Channels
In our "shader as a function" model, the @location(N) attribute is the primary mechanism for defining a shader's parameters and return values. Its meaning changes depending on where you use it, but it always defines a direct data channel between different parts of the pipeline.
Context 1: Vertex Shader Input (Data from the Mesh)
When you use @location in the input struct of your vertex shader, it defines the function's parameter list. It specifies which vertex attribute buffer from your Mesh asset should be mapped to each parameter field. This is the direct link between your mesh data in Bevy and your shader code.
// This struct defines the parameter list for our vertex function.
// It tells the GPU how to interpret the data stream for each vertex.
struct VertexInput {
// Read from the mesh's attribute buffer #0
@location(0) position: vec3<f32>,
// Read from the mesh's attribute buffer #1
@location(1) normal: vec3<f32>,
// Read from the mesh's attribute buffer #2
@location(2) uv: vec2<f32>,
}
Where do these numbers come from?
These location numbers are not arbitrary; they are a contract defined by Bevy's Mesh format. When you use a standard Bevy mesh (like one from Sphere::new() or a GLTF file), Bevy guarantees the following layout:
| Location | Bevy Mesh Attribute | Description |
| --- | --- | --- |
| @location(0) | Mesh::ATTRIBUTE_POSITION | The vertex's position in local 3D space. |
| @location(1) | Mesh::ATTRIBUTE_NORMAL | The normal vector, indicating which way the surface faces. |
| @location(2) | Mesh::ATTRIBUTE_UV_0 | The first set of 2D texture coordinates. |
| @location(3) | Mesh::ATTRIBUTE_TANGENT | A vector used for advanced lighting (like normal mapping). |
| @location(4) | Mesh::ATTRIBUTE_COLOR | An optional per-vertex color attribute. |
For your vertex shader to receive the correct data, the @location numbers in your VertexInput struct must match this standard layout.
Context 2: Inter-Stage Data (Vertex to Fragment)
When you use @location in the output struct of your vertex shader, you are defining the function's return value. This struct is the data "package" that the vertex function returns. The GPU pipeline then passes this package as the main input parameter to the fragment function after the rasterizer has interpolated its fields.
// This struct defines the return value of our vertex function.
struct VertexOutput {
// This field uses a different attribute, which we'll cover next.
@builtin(position) clip_position: vec4<f32>,
// These are OUR custom return fields. The numbers just need to be unique.
@location(0) world_position: vec4<f32>, // We've decided channel 0 is for world position.
@location(1) world_normal: vec3<f32>, // We've decided channel 1 is for the normal.
@location(2) uv: vec2<f32>, // We've decided channel 2 is for UVs.
}
Here, you are the one choosing the numbers. The only rule is that they must be unique and start from 0. This VertexOutput struct acts as the bridge. The GPU's rasterizer sees this struct and knows that for every fragment it generates, it needs to interpolate the data found in @location(0), @location(1), and so on, from the three vertices of the triangle.
This two-part system is the core of the data flow:
- Vertex input @locations: define the function's parameters, reading data from pre-defined mesh attribute slots.
- Vertex output @locations: define the function's return values, writing data to custom channels for interpolation.
@builtin: Special Values from the GPU
While @location is for data you pass into or out of a shader stage, the @builtin(NAME) attribute gives you access to special variables provided by the execution environment itself (the GPU hardware). In our function call analogy, these are like runtime-injected context variables that give your function crucial information about its own execution state.
Their meaning can change depending on which shader stage you are in.
Common Vertex Shader Built-ins
In the vertex shader, the built-ins provide information about the specific vertex being processed and define the one mandatory output channel.
| Built-in | Type | Description |
| --- | --- | --- |
| @builtin(vertex_index) | u32 | The index of the current vertex in the mesh's vertex buffer (e.g., 0, 1, 2, ...). Useful for effects that need to treat specific vertices differently. |
| @builtin(instance_index) | u32 | The index of the current object being drawn when using instanced rendering. This is essential for getting the correct model matrix for each object in a batch. |
| @builtin(position) | vec4<f32> | (Output Only) This is the one mandatory return value of a vertex shader. You must write the final Clip Space position of the vertex to this special variable. The rasterizer physically cannot function without it. |
Common Fragment Shader Built-ins
In the fragment shader, the built-ins provide information about the specific pixel being processed.
| Built-in | Type | Description |
| --- | --- | --- |
| @builtin(position) | vec4<f32> | (Input Only) The screen-space coordinates of the current fragment. The .xy components are the pixel coordinates, and the .z component is its depth value. This is the key to creating screen-space effects. |
| @builtin(front_facing) | bool | A boolean that is true if the triangle this fragment belongs to is facing the camera. Essential for rendering two-sided materials. |
| @builtin(sample_index) | u32 | An advanced value used for multisample anti-aliasing (MSAA) that tells you which sub-pixel sample is being processed. |
The Dual Meaning of @builtin(position)
It is critical to understand that @builtin(position) means two completely different things in the two main shader stages:
In the Vertex Shader: It is an OUTPUT. You are writing to it. It represents the final, transformed position of a vertex in 3D Clip Space.
In the Fragment Shader: It is an INPUT. You are reading from it. It represents the 2D pixel coordinate of the fragment on your screen.
Because of this dual meaning, you have to be careful when reusing the vertex output struct as the fragment input. WGSL does allow it, but if the struct contains a @builtin(position) field, the value the fragment shader reads from that field is the screen-space fragment coordinate, not the clip-space value you wrote. To keep this distinction explicit, you will often see a pattern where the fragment shader receives its @location data via an input struct, but gets its @builtin(position) as a separate, distinct parameter to the function.
// Vertex shader returns this struct.
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) uv: vec2<f32>,
}
// Fragment shader receives the `@location` data in this struct...
struct FragmentInput {
@location(0) uv: vec2<f32>,
}
// ...and gets its own position as a separate parameter to the main function.
@fragment
fn fragment(in: FragmentInput, @builtin(position) frag_coord: vec4<f32>) -> ... {
// ...
}
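To see why the fragment-side frag_coord is useful, here is an illustrative fragment body that uses the pixel coordinate for a simple screen-space scanline effect. This is a sketch, not part of the article's material; the stripe height and colors are arbitrary choices.

```wgsl
@fragment
fn fragment(
    in: FragmentInput,
    @builtin(position) frag_coord: vec4<f32>,
) -> @location(0) vec4<f32> {
    // frag_coord.xy is in pixels: darken every other 4-pixel-tall row,
    // regardless of the 3D geometry under the pixel.
    let stripe = select(1.0, 0.5, (u32(frag_coord.y) / 4u) % 2u == 0u);
    return vec4<f32>(vec3<f32>(in.uv, 1.0) * stripe, 1.0);
}
```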
@group and @binding: Accessing Your Global Resources
While @location and @builtin handle data for individual vertices and fragments, @group and @binding are used to access global resources that are shared across all the vertices and fragments in a draw call. This is how you get your uniforms (material properties), textures, and samplers into your shader.
In our "shader as a function" analogy, this is the mechanism for accessing data from an external module or global namespace.
The API Contract: Groups and Bindings
- @group(N): specifies a top-level "Module" of resources. Each module is dedicated to a specific category of data.
- @binding(N): specifies a numbered "Export" from that module. Each export provides exactly one resource (one uniform data block, one texture, one sampler).
Bevy uses a standard, conventional organization for these groups that you must follow to access engine-provided data and to ensure your own data doesn't cause conflicts:
| Group | Name | Purpose |
| --- | --- | --- |
| @group(0) | View Group | Reserved by Bevy for global, view-level data that is the same for the entire frame. This includes the camera's view/projection matrices and global uniforms like time. |
| @group(1) | Mesh Group | Reserved by Bevy for mesh-level data that is specific to the object being drawn. This primarily includes the object's model matrix (world_from_local). |
| @group(2) | Material Group | This group is yours. You use it to provide all the custom resources that define your unique material: your uniform struct, your textures, and your samplers. |
| @group(3) | Light Group | Reserved by Bevy for lighting data. |
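You normally do not declare the engine-owned groups yourself. Bevy ships WGSL modules that already contain those declarations, and you pull them in with #import. The following is a sketch; the exact import paths shown (bevy_pbr::mesh_view_bindings and its globals/view exports) exist in recent Bevy versions but can shift between releases.

```wgsl
// Bring in Bevy's pre-declared bindings for the engine groups,
// instead of writing @group(0)/@group(1) declarations by hand.
#import bevy_pbr::mesh_view_bindings::{view, globals}
#import bevy_pbr::mesh_functions

fn elapsed_seconds() -> f32 {
    // `globals.time` is the engine-provided elapsed-time uniform
    // living in the view group (@group(0)).
    return globals.time;
}
```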
Accessing Resources in WGSL
In your shader, you declare a global var for each resource, using @group and @binding to give it the correct "address" or import path.
// This is the struct for our custom uniform data.
struct MyMaterialUniforms {
color: vec4<f32>,
roughness: f32,
}
// Import the uniform buffer for our material.
// Address: From Module #2, Export #0.
@group(2) @binding(0)
var<uniform> material: MyMaterialUniforms;
// Import the first texture for our material.
// Address: From Module #2, Export #1.
@group(2) @binding(1)
var base_texture: texture_2d<f32>;
// Import the sampler for that texture.
// Address: From Module #2, Export #2.
@group(2) @binding(2)
var base_sampler: sampler;
Matching Rust to WGSL: The AsBindGroup Pattern
On the Rust side, your Material struct is responsible for telling Bevy what to put into each "export" slot of the @group(2) module. This is where the AsBindGroup derive macro and the two-struct pattern become essential.
Here is the correct, robust pattern that matches the WGSL code above:
// In src/materials/my_material.rs
mod uniforms {
#![allow(dead_code)]
use bevy::prelude::*;
use bevy::render::render_resource::ShaderType;
// This struct MUST match the WGSL uniform struct exactly.
// It derives `ShaderType` to handle GPU memory layout.
#[derive(ShaderType, Debug, Clone)]
pub struct MyMaterialUniforms {
pub color: LinearRgba,
pub roughness: f32,
}
}
pub use uniforms::MyMaterialUniforms;
// This is the main Material struct. It derives `AsBindGroup`
// to organize all the resources.
#[derive(Asset, TypePath, AsBindGroup, Debug, Clone)]
pub struct MyMaterial {
// This tells Bevy to put the `uniforms` data into a buffer
// and provide it as "Export #0" (`@binding(0)`).
#[uniform(0)]
pub uniforms: MyMaterialUniforms,
// This tells Bevy to provide the `base_texture` asset
// as "Export #1" (`@binding(1)`) as a texture...
#[texture(1)]
// ...and to create and provide a sampler for it as "Export #2" (`@binding(2)`).
#[sampler(2)]
pub base_texture: Handle<Image>,
}
This pattern correctly separates the concerns:
- The MyMaterial struct is the high-level Module Definition, telling Bevy which resource goes into which binding slot.
- The MyMaterialUniforms struct is the low-level Data Formatter, ensuring the raw data is laid out in memory exactly as the GPU expects for a uniform buffer.
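Why the ShaderType derive matters becomes visible if you compare a plain #[repr(C)] layout with what the GPU expects. WGSL's uniform layout rules give a vec4 16-byte alignment and round the struct's size up accordingly, padding that Rust will not add on its own. Here is a std-only sketch (no Bevy required); NaiveUniforms is a hypothetical stand-in for the article's uniform struct.

```rust
// A naive #[repr(C)] mirror of the WGSL uniform struct
// { color: vec4<f32>, roughness: f32 }.
#[repr(C)]
struct NaiveUniforms {
    color: [f32; 4], // vec4<f32>
    roughness: f32,  // f32
}

fn main() {
    // Rust packs this into 20 bytes (alignment 4)...
    assert_eq!(std::mem::size_of::<NaiveUniforms>(), 20);

    // ...but WGSL's uniform rules give the struct 16-byte alignment
    // (from the vec4 member), so the GPU-side size rounds up to 32.
    // Uploading the naive 20-byte layout would mis-read anything that
    // follows the struct in the buffer.
    let gpu_size = 20usize.div_ceil(16) * 16;
    assert_eq!(gpu_size, 32);
    println!("cpu: 20 bytes, gpu: {gpu_size} bytes");
}
```

The ShaderType derive computes this padding for you, which is exactly why the two-struct pattern delegates layout to it.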
Data Flow: The Complete Picture
You've now learned about all the different attributes that act as the API contract for your shader functions. Let's trace the journey of data from your Rust code, through the GPU pipeline, to the final pixel, seeing how each attribute plays its part in this highly structured process.
1. The Setup (CPU Side - Your Bevy App)
It all begins in your Rust code. You define the data you want the GPU to process.
You create a Mesh: This is a collection of vertices, each with its own set of raw data (position, normal, uv). This data will become the arguments for the vertex shader.
You create a Material: You instantiate your custom Material struct, filling its fields with values like a tint color or animation speed. These become the global resources.
// Rust: Create material instance (global resources)
let material_handle = materials.add(CustomMaterial {
uniforms: CustomMaterialUniforms {
tint_color: LinearRgba::RED,
intensity: 0.8,
}
});
// Rust: Spawn entity with a mesh (per-vertex data)
commands.spawn((
Mesh3d(meshes.add(Sphere::new(1.0))),
MeshMaterial3d(material_handle),
));
2. Arrival at the Vertex Shader (GPU Side)
The GPU begins processing a single vertex. The VertexInput struct uses attributes to define the function's parameters, receiving data from multiple sources simultaneously:
@vertex
fn vertex(
// GPU provides this from its internal state.
@builtin(instance_index) instance_index: u32,
// GPU reads these from the Mesh's vertex buffers,
// mapping them to the function's parameters.
@location(0) position: vec3<f32>,
@location(1) normal: vec3<f32>,
) -> VertexOutput {
At the same time, the shader function has global access to the "external modules":
// Access the 'export' from the Material Module (@group(2)).
let tint = material.tint_color;
// Access the 'export' from the View Module (`@group(0)`).
let time = view.time;
3. Processing and Return in the Vertex Shader
The vertex shader's job is to process this incoming data and return a new package of data for the next stage.
var out: VertexOutput;
// --- Calculations ---
let world_pos = transform_position(position, instance_index);
let world_norm = transform_normal(normal, instance_index);
let custom_val = sin(world_pos.y + time); // A value from multiple sources
// --- Prepare the Return Struct ---
// Write to the MANDATORY special builtin return value for the rasterizer.
out.clip_position = to_clip_space(world_pos);
// Write our calculated values to our custom return fields.
out.world_normal = world_norm;
out.custom_value = custom_val;
return out;
}
4. The Magic of Interpolation (GPU Hardware)
The Rasterizer, a fixed-function hardware stage, now takes over. For each triangle:
- It receives the VertexOutput return value from all three of the triangle's vertices.
- It determines which pixels on the screen the triangle covers.
- For every single one of those pixels, it creates a fragment and smoothly blends (interpolates) all the data that was in the @location return fields.
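The blend the rasterizer performs is barycentric interpolation: each fragment receives a weighted average of the three vertex values, with non-negative weights that sum to 1 and depend on where the fragment sits inside the triangle. A std-only Rust sketch of the idea (the vertex values are illustrative):

```rust
// Interpolate a per-vertex value (e.g. one color channel) at a fragment,
// given the fragment's barycentric weights inside the triangle.
fn interpolate(vertex_values: [f32; 3], weights: [f32; 3]) -> f32 {
    vertex_values[0] * weights[0]
        + vertex_values[1] * weights[1]
        + vertex_values[2] * weights[2]
}

fn main() {
    let values = [0.0, 1.0, 0.5];

    // Exactly at a vertex, one weight is 1.0 and the others are 0.0:
    // the fragment gets that vertex's value unchanged.
    assert_eq!(interpolate(values, [1.0, 0.0, 0.0]), 0.0);

    // At the triangle's centroid all weights are 1/3: a smooth blend.
    let center = interpolate(values, [1.0 / 3.0; 3]);
    assert!((center - 0.5).abs() < 1e-6);
    println!("centroid value: {center}");
}
```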
5. Arrival at the Fragment Shader
The fragment shader is called for a single fragment. Its input struct (its main parameter) receives the beautifully interpolated data.
@fragment
fn fragment(
// This fragment's unique data, a smooth blend from the triangle's corners.
in: FragmentInput,
// GPU provides this pixel's unique screen coordinate as another parameter.
@builtin(position) frag_coord: vec4<f32>,
) -> @location(0) vec4<f32> {
The fragment shader also has global access to the same external modules as the vertex shader:
// Access the 'export' from the Material Module (`@group(2)`).
let tint = material.tint_color;
6. Final Coloring in the Fragment Shader
The fragment shader uses all this readily available data - its direct parameters and the global resources - to compute and return a final color.
// Use the interpolated input parameters.
let normal = normalize(in.world_normal);
let custom_effect = in.custom_value;
// Use the global uniform data.
let base_color = calculate_lighting(normal) * tint.rgb;
let final_color = base_color + custom_effect;
// Return the final color to the screen's output channel.
return vec4<f32>(final_color, 1.0);
}
This completes the journey. Every attribute has played its part in defining the strict API contract that directs the flow of data from its source, through the stages of the pipeline, to its final use in coloring a pixel.
Interpolation: The Automatic Blending Between Stages
Understanding interpolation is the key to grasping the relationship between the vertex and fragment shaders. It's the automatic, hardware-accelerated process that turns a few discrete points of data (at the vertices) into a smooth, continuous surface of data (for the fragments).
When your vertex shader returns a VertexOutput struct, the GPU's rasterizer takes the output from all three vertices of a triangle. For every pixel that triangle covers, it generates a fragment by blending the values from those three vertices.
For example, if you pass color from the vertex to the fragment shader, the rasterizer will automatically create a smooth gradient across the triangle's face:

This happens for every @location field in your VertexOutput struct. But how does the GPU blend them? WGSL gives you explicit control over this process using the @interpolate attribute. You can choose from three distinct behaviors, each with specific use cases.
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
// We can add an @interpolate attribute to any @location.
@location(0) @interpolate(flat) object_id: u32,
@location(1) @interpolate(linear) screen_coord: vec2<f32>,
@location(2) @interpolate(perspective) uv: vec2<f32>, // This is the default
}
flat Interpolation
What it is: This qualifier disables interpolation entirely.
Instead of blending, the GPU picks the value from a single "provoking" vertex (usually the first vertex of the triangle) and applies that exact, unmodified value to every single fragment of that triangle.
When to use it:
- Low-Poly or Faceted Art Style: This is the easiest way to achieve a retro, flat-shaded look. If you pass vertex colors and use flat interpolation, each triangle will be a single, solid color.
- Per-Triangle Data: When you need to pass data that applies to the whole triangle, not the individual vertices. A classic example is a unique object_id. Blending u32 values 101 and 102 would produce meaningless results. With flat, every fragment correctly receives 101 or 102.
- Requirement for Integers: You must mark any integer (i32, u32) or boolean (bool) values passed between stages as flat. The GPU hardware cannot blend these types.
linear Interpolation
What it is: A simple, direct, screen-space blend.
The GPU calculates the fragment's position on the 2D screen relative to the triangle's vertices and performs a simple linear interpolation of the values. It does not account for 3D depth or perspective.
When to use it:
- 2D Graphics and UI: For user interfaces or 2D games where there is no 3D perspective, linear is efficient and correct.
- Screen-Space Effects: When you are working with data that is already in screen-space, this is the appropriate and cheapest mode.
You should avoid using linear for properties on 3D surfaces, like texture coordinates (uv). Doing so will cause a classic graphical artifact where the texture appears to warp, swim, or slide across the surface as the camera moves, because the interpolation isn't being corrected for perspective.
perspective Interpolation (The Default)
What it is: A sophisticated interpolation that correctly handles 3D perspective.
This is the "smart" mode. The GPU performs a division by the w coordinate of the vertex's clip-space position during interpolation. This mathematical correction ensures that the interpolated values change correctly with distance. A texture will appear properly "stuck" to a surface, and gradients will look natural in a 3D scene.
When to use it:
- Almost Everything in 3D: For any data that represents a property of a 3D surface - texture coordinates (UVs), vertex colors, world positions, normals, etc. - this is the mode you want.
Because this is the correct behavior for the vast majority of 3D rendering scenarios, perspective is the default interpolation mode. If you do not add an @interpolate attribute to a field, the GPU will automatically use perspective-correct interpolation. This is why in most shaders, you don't see this attribute written out explicitly.
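The difference between linear and perspective modes comes down to that division by w. Perspective-correct interpolation blends attribute/w and 1/w linearly in screen space, then divides at the end to recover the attribute. Here is a std-only sketch comparing the two at the screen-space midpoint of an edge whose endpoints sit at different depths; the uv values and w values are illustrative numbers, not from a real scene.

```rust
// Screen-space linear blend at the midpoint of an edge.
fn linear_mid(a0: f32, a1: f32) -> f32 {
    0.5 * (a0 + a1)
}

// Perspective-correct blend at the same screen-space midpoint:
// interpolate a/w and 1/w linearly, then divide.
fn perspective_mid(a0: f32, w0: f32, a1: f32, w1: f32) -> f32 {
    let num = 0.5 * (a0 / w0 + a1 / w1);
    let den = 0.5 * (1.0 / w0 + 1.0 / w1);
    num / den
}

fn main() {
    // A uv coordinate running 0.0 -> 1.0 along an edge; the far end
    // (w = 4.0) is four times as deep as the near end (w = 1.0).
    let linear = linear_mid(0.0, 1.0);
    let correct = perspective_mid(0.0, 1.0, 1.0, 4.0);

    assert_eq!(linear, 0.5);
    // The correct value is biased toward the near endpoint: the
    // screen-space midpoint is NOT the geometric midpoint of the 3D edge.
    assert!((correct - 0.2).abs() < 1e-6);
    println!("linear: {linear}, perspective-correct: {correct}");
}
```

The gap between 0.5 and 0.2 is exactly the "texture swimming" artifact you see when linear interpolation is used on a 3D surface.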
Complete Example: Multi-Channel Data Flow Visualizer
Let's build a complete material that demonstrates every data flow concept we've discussed. This shader will have multiple modes, each designed to visualize a specific attribute type, making the abstract flow of data tangible and visible on screen.
Our Goal
We will create an interactive material that visualizes data from every source:
- @location (from mesh): We'll show the interpolated normals and UVs.
- @location (inter-stage): We'll show the world_position, a value calculated in the vertex shader and interpolated for the fragment shader.
- @builtin: We'll use @builtin(front_facing) to color the front and back sides of our mesh differently.
- @group/@binding: We'll use time and demo_mode uniforms from our material to drive animation and switch between visualizations.
What This Project Demonstrates
- The complete data pipeline from mesh attributes and GPU built-ins to the vertex shader.
- The flow of custom data from the vertex shader to the fragment shader via @location channels.
- The use of both @location and @builtin data within the same fragment shader.
- How to use the specialize function in a Bevy Material to change render pipeline state, such as disabling backface culling.
- A practical use of discard to create holes in a mesh to see front- and back-facing polygons simultaneously.
The Shader (assets/shaders/d01_06_data_flow.wgsl)
This shader uses a demo_mode uniform to switch its behavior. Each if block isolates and visualizes data from a specific attribute source, turning abstract data like normals or UVs into visible colors.
#import bevy_pbr::mesh_functions
#import bevy_pbr::view_transformations::position_world_to_clip
// Material uniforms (@group 2, @binding 0)
struct DataFlowMaterial {
tint_color: vec4<f32>,
time: f32,
demo_mode: u32,
}
@group(2) @binding(0)
var<uniform> material: DataFlowMaterial;
// Vertex input from mesh
struct VertexInput {
@builtin(instance_index) instance_index: u32, // GPU provides this
@location(0) position: vec3<f32>, // From mesh
@location(1) normal: vec3<f32>, // From mesh
@location(2) uv: vec2<f32>, // From mesh
}
// Data passed from vertex to fragment shader
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>, // Required output
@location(0) world_position: vec4<f32>, // Will be interpolated
@location(1) world_normal: vec3<f32>, // Will be interpolated
@location(2) uv: vec2<f32>, // Will be interpolated
@location(3) distance_from_center: f32, // Custom data!
}
@vertex
fn vertex(in: VertexInput) -> VertexOutput {
var out: VertexOutput;
// Use @builtin to get transform
let world_from_local = mesh_functions::get_world_from_local(in.instance_index);
// Transform position (using @location 0 input)
let world_position = mesh_functions::mesh_position_local_to_world(
world_from_local,
vec4<f32>(in.position, 1.0)
);
// Output to @builtin(position) - REQUIRED
out.clip_position = position_world_to_clip(world_position.xyz);
// Transform normal (using @location 1 input)
out.world_normal = mesh_functions::mesh_normal_local_to_world(
in.normal,
in.instance_index
);
// Pass through UV (from @location 2 input to @location 2 output)
out.uv = in.uv;
// Calculate custom data to pass to fragment shader
out.world_position = world_position;
out.distance_from_center = length(in.position);
return out;
}
@fragment
fn fragment(
in: VertexOutput,
@builtin(front_facing) is_front: bool, // GPU provides this
) -> @location(0) vec4<f32> {
var color = vec3<f32>(0.0);
// Mode 0: Show interpolated normals
if material.demo_mode == 0u {
color = (in.world_normal + 1.0) * 0.5;
}
// Mode 1: Show interpolated UVs
else if material.demo_mode == 1u {
color = vec3<f32>(in.uv, 0.0);
}
// Mode 2: Show custom interpolated data with horizontal rings
else if material.demo_mode == 2u {
// Use world position Y to create horizontal rings
let ring_frequency = 5.0;
let rings = sin(in.world_position.y * ring_frequency - material.time * 3.0) * 0.5 + 0.5;
let color_a = vec3<f32>(0.2, 0.6, 1.0); // Blue
let color_b = vec3<f32>(1.0, 0.4, 0.2); // Orange
color = mix(color_a, color_b, rings);
}
// Mode 3: Show front/back facing with holes to see both sides (using @builtin)
else if material.demo_mode == 3u {
// Create a grid pattern and discard some pixels to create holes
let scale = 12.0;
let grid_x = floor(in.uv.x * scale);
let grid_y = floor(in.uv.y * scale);
let cell = (i32(grid_x) + i32(grid_y)) % 3;
// Discard every third cell to create holes
if cell == 0 {
discard;
}
if is_front {
// Front faces: Blue
color = vec3<f32>(0.2, 0.5, 1.0);
} else {
// Back faces: Orange (visible through holes!)
color = vec3<f32>(1.0, 0.5, 0.2);
}
}
// Apply uniform tint
color = color * material.tint_color.rgb;
// Add pulsing effect using uniform time
let pulse = (sin(material.time * 2.0) + 1.0) * 0.5;
color = color * (0.7 + pulse * 0.3);
return vec4<f32>(color, 1.0);
}
The Rust Material (src/materials/d01_06_data_flow.rs)
A key feature here is the specialize function. By default, Bevy culls (discards) the back faces of meshes to save performance. For our front_facing demo, we need to see both sides. We override specialize to tell the renderer to set cull_mode = None, effectively disabling this optimization and making our effect possible.
use bevy::pbr::{Material, MaterialPipelineKey};
use bevy::prelude::*;
use bevy::render::mesh::MeshVertexBufferLayoutRef;
use bevy::render::render_resource::{AsBindGroup, ShaderRef};
use bevy::render::render_resource::{RenderPipelineDescriptor, SpecializedMeshPipelineError};
// Uniform types in a separate module to isolate the dead_code warnings
mod uniforms {
#![allow(dead_code)] // Suppresses ShaderType's generated check functions
use bevy::prelude::*;
use bevy::render::render_resource::ShaderType;
// Properly aligned uniform struct matching WGSL layout
#[derive(ShaderType, Debug, Clone)]
pub struct DataFlowUniforms {
pub tint_color: LinearRgba,
pub time: f32,
pub demo_mode: u32,
}
}
// Re-export the uniform type
pub use uniforms::DataFlowUniforms;
#[allow(dead_code)]
#[derive(Asset, TypePath, AsBindGroup, Debug, Clone)]
pub struct DataFlowMaterial {
#[uniform(0)]
pub uniforms: DataFlowUniforms,
}
impl Material for DataFlowMaterial {
fn vertex_shader() -> ShaderRef {
"shaders/d01_06_data_flow.wgsl".into()
}
fn fragment_shader() -> ShaderRef {
"shaders/d01_06_data_flow.wgsl".into()
}
// Disable backface culling so we can see both front and back faces
fn specialize(
_pipeline: &bevy::pbr::MaterialPipeline<Self>,
descriptor: &mut RenderPipelineDescriptor,
_layout: &MeshVertexBufferLayoutRef,
_key: MaterialPipelineKey<Self>,
) -> Result<(), SpecializedMeshPipelineError> {
descriptor.primitive.cull_mode = None;
Ok(())
}
}
Don't forget to add it to src/materials/mod.rs:
// ... other materials
pub mod d01_06_data_flow;
The Demo Module (src/demos/d01_06_data_flow.rs)
This module sets up our scene, registers the MaterialPlugin, spawns a sphere with our DataFlowMaterial, and includes a system that listens for keyboard input to cycle the demo_mode uniform.
use crate::materials::d01_06_data_flow::{DataFlowMaterial, DataFlowUniforms};
use bevy::prelude::*;
pub fn run() {
App::new()
.add_plugins(DefaultPlugins)
.add_plugins(MaterialPlugin::<DataFlowMaterial>::default())
.add_systems(Startup, setup)
.add_systems(Update, (rotate_camera, update_time, cycle_mode))
.run();
}
fn setup(
mut commands: Commands,
mut meshes: ResMut<Assets<Mesh>>,
mut materials: ResMut<Assets<DataFlowMaterial>>,
) {
commands.spawn((
Mesh3d(meshes.add(Sphere::new(1.0).mesh().uv(32, 18))),
MeshMaterial3d(materials.add(DataFlowMaterial {
uniforms: DataFlowUniforms {
tint_color: LinearRgba::WHITE,
time: 0.0,
demo_mode: 0,
},
})),
));
commands.spawn((
PointLight {
shadows_enabled: true,
..default()
},
Transform::from_xyz(4.0, 8.0, 4.0),
));
commands.spawn((
Camera3d::default(),
Transform::from_xyz(-2.5, 4.5, 9.0).looking_at(Vec3::ZERO, Vec3::Y),
));
commands.spawn((
Text::new("Press SPACE to cycle modes\nMode 0: Interpolated Normals (@location)"),
Node {
position_type: PositionType::Absolute,
top: Val::Px(10.0),
left: Val::Px(10.0),
..default()
},
));
}
fn rotate_camera(time: Res<Time>, mut camera_query: Query<&mut Transform, With<Camera3d>>) {
for mut transform in camera_query.iter_mut() {
let radius = 9.0;
let angle = time.elapsed_secs() * 0.3;
transform.translation.x = angle.cos() * radius;
transform.translation.z = angle.sin() * radius;
transform.look_at(Vec3::ZERO, Vec3::Y);
}
}
fn update_time(time: Res<Time>, mut materials: ResMut<Assets<DataFlowMaterial>>) {
for (_, material) in materials.iter_mut() {
material.uniforms.time = time.elapsed_secs();
}
}
fn cycle_mode(
keyboard: Res<ButtonInput<KeyCode>>,
mut materials: ResMut<Assets<DataFlowMaterial>>,
mut text_query: Query<&mut Text>,
) {
if keyboard.just_pressed(KeyCode::Space) {
for (_, material) in materials.iter_mut() {
material.uniforms.demo_mode = (material.uniforms.demo_mode + 1) % 4;
for mut text in text_query.iter_mut() {
let mode_text = match material.uniforms.demo_mode {
0 => "Mode 0: Interpolated Normals (@location data)",
1 => "Mode 1: Interpolated UVs (@location data)",
2 => "Mode 2: Horizontal Animated Rings (@location world pos)",
3 => "Mode 3: Front (Blue) / Back (Orange) with Holes (@builtin)",
_ => "Unknown",
};
**text = format!("Press SPACE to cycle modes\n{}", mode_text);
}
}
}
}
Don't forget to add it to src/demos/mod.rs:
// ... other demos
pub mod d01_06_data_flow;
And register it in src/main.rs:
Demo {
number: "1.6",
title: "Shader Attributes and Data Flow",
run: demos::d01_06_data_flow::run,
},
Running the Visualization
Run the project and press the spacebar to cycle through the modes.
Controls
| Control | Action |
| --- | --- |
| SPACE | Cycle through the four modes. |
What You're Seeing




This demo makes the abstract flow of data visible.
| Mode | Data Source Visualized | What You're Seeing |
| --- | --- | --- |
| 0: Normals | @location (from mesh -> inter-stage) | The smooth, multi-colored gradient shows the world_normal vector being correctly calculated in the vertex shader and then interpolated across the surface for the fragment shader. |
| 1: UVs | @location (from mesh -> inter-stage) | The red/green/yellow gradient visualizes the uv coordinates being passed through the vertex shader and interpolated, confirming the texture coordinate space is intact. |
| 2: World Position | @location (inter-stage) | Animated horizontal rings move up the sphere. This effect is driven by the interpolated world_position.y, demonstrating that data calculated in the vertex shader is smoothly available to the fragment shader. |
| 3: Front/Back | @builtin(front_facing) | The sphere is rendered with a grid cut out of it (using discard). Front-facing polygons are blue, while the orange back-facing polygons are visible through the holes. This directly visualizes the boolean value provided by @builtin(front_facing). |
| (All Modes) | @group(2) (Uniforms) | The gentle pulsing brightness seen in all modes is driven by the time uniform, showing that this global data is always available to the fragment shader. |
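Mode 3 is the most surprising effect, so here is a minimal WGSL sketch of how it can be achieved. This is an illustrative fragment, not the demo's exact shader; the input names, hole pattern, and colors are assumptions:

```wgsl
@fragment
fn fragment(
    @location(0) uv: vec2<f32>,                // interpolated from the vertex shader
    @builtin(front_facing) is_front: bool,     // injected per fragment by the hardware
) -> @location(0) vec4<f32> {
    // Cut a grid of holes by discarding fragments inside evenly spaced bands.
    let grid = fract(uv * vec2<f32>(16.0, 8.0));
    if (grid.x < 0.15 && grid.y < 0.15) {
        discard;
    }
    // The hardware-provided boolean selects the color for each fragment.
    if (is_front) {
        return vec4<f32>(0.2, 0.4, 1.0, 1.0); // blue front faces
    }
    return vec4<f32>(1.0, 0.5, 0.1, 1.0); // orange back faces, visible through the holes
}
```

Because `discard` removes front-facing fragments, the rasterizer's back-facing fragments behind them become visible, which is exactly what `@builtin(front_facing)` lets you detect.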
Key Takeaways
This article demystified the "address system" of the GPU pipeline. Understanding these attributes is the key to controlling the flow of data from your application all the way to the final pixel.
Attributes Define the Shader's API Contract. Think of a shader stage as a function call. The four core attributes - @location, @builtin, @group, and @binding - are not just syntax; they are the explicit instructions that define the function's parameters, its return values, and its access to external resources.
@location Has a Dual Role. In a vertex shader's input, @location(N) corresponds to a specific, numbered attribute on a Mesh asset (e.g., location 0 is always POSITION). In a vertex shader's output, @location(N) defines your own custom, numbered channel for passing interpolated data to the fragment shader.
@builtin is Data Provided by the GPU. These are special, runtime-injected variables that the hardware generates for you. instance_index and vertex_index give you context in the vertex shader, while front_facing and position (as a screen coordinate) give you context in the fragment shader.
Data is Organized into Standard "Modules" (@group). Bevy uses a strict but powerful organization for its resources. For your custom materials, all your data (uniforms, textures) will almost always live in @group(2). This separation prevents conflicts and keeps the data pipeline clean and predictable.
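Putting these takeaways together, a minimal WGSL skeleton can declare all four attributes at once. This is a sketch; the struct names, binding slots, and field layout are assumptions for illustration, not Bevy's exact definitions:

```wgsl
// External resources: group 2 is reserved for custom material data in Bevy.
@group(2) @binding(0) var<uniform> tint_color: vec4<f32>;

struct VertexInput {
    @builtin(vertex_index) index: u32,   // runtime-injected by the hardware
    @location(0) position: vec3<f32>,    // mesh attribute: POSITION
    @location(1) normal: vec3<f32>,      // mesh attribute: NORMAL
};

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>, // mandatory clip-space output
    @location(0) world_normal: vec3<f32>,        // custom interpolated channel
};
```

Note how @location means "mesh attribute slot" on the input struct but "custom inter-stage channel" on the output struct: the same attribute, playing its two different roles.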
What's Next?
You now have a complete, high-level understanding of how data flows through the entire shader pipeline, from a Mesh component in Rust to an interpolated vec3 in your fragment shader.
However, we've glossed over one very important and often tricky detail: memory layout. When you define a struct in Rust and an equivalent struct in WGSL, how do you guarantee they are arranged in memory in the exact same way? What are the GPU's strict alignment rules, and how does Bevy's ShaderType macro help you solve them?
In the next article, we will dive deep into the world of GPU memory layout, padding, and alignment. This is the final piece of the puzzle for creating complex, robust, and error-free custom materials.
Next up: 1.7 - Uniforms and GPU Memory Layout
Quick Reference
A cheat sheet for the four main WGSL attributes and their roles.
The Four Core Attributes
| Attribute | Function Call Analogy | Primary Use |
| --- | --- | --- |
| @location(N) | Function Parameters & Return Values | For mesh vertex data (input) and for passing interpolated data between shader stages (output). |
| @builtin(NAME) | Runtime-Injected Context Variables | For accessing values automatically generated by the hardware, like instance_index. |
| @group(N) | A Global Resource "Module" | Organizes resources. Use @group(2) for your custom material data. |
| @binding(N) | A Specific Export from a Module | Specifies the exact slot for a single uniform buffer, texture, or sampler within a group. |
Common Built-ins
| Stage | Built-in | Description |
| --- | --- | --- |
| Vertex (Input) | @builtin(instance_index) | The ID of the current mesh instance being drawn. |
| Vertex (Output) | @builtin(position) | Required. The final Clip Space position of the vertex. |
| Fragment (Input) | @builtin(position) | The screen-space pixel coordinate (.xy) and depth (.z). |
| Fragment (Input) | @builtin(front_facing) | A bool that is true if the current pixel is on a front-facing triangle. |
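The two fragment-stage built-ins from the table can be consumed like this (an illustrative sketch; the stripe width and shading values are arbitrary choices for the example):

```wgsl
@fragment
fn fragment(
    @builtin(position) frag_coord: vec4<f32>,  // screen-space pixel coordinate
    @builtin(front_facing) is_front: bool,
) -> @location(0) vec4<f32> {
    // 16-pixel vertical stripes derived from the pixel's x coordinate;
    // back-facing fragments are rendered darker.
    let stripe = f32((u32(frag_coord.x) / 16u) % 2u);
    let shade = select(0.3, 1.0, is_front);
    return vec4<f32>(vec3<f32>(stripe * shade), 1.0);
}
```

Because `frag_coord` is a screen-space coordinate, the stripes stay fixed to the window rather than to the mesh, a quick way to confirm you are reading the fragment-stage `@builtin(position)` and not a model-space value.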
Data Flow Summary
The following diagram illustrates the complete journey of data, showing how different sources feed into the main GPU pipeline stages.

How to Read the Diagram:
Data Sources (Top): Your Bevy App provides the mesh and material data. The GPU Hardware itself provides special built-in values.
Inputs to the Vertex Shader: It receives three distinct inputs: (1) mesh data is fed in via @location attributes, (2) uniforms and textures are made available via @group and @binding, and (3) built-in values like instance_index are provided by the hardware via @builtin.
Vertex Shader to Rasterizer: The vertex shader's return value (its output struct) is passed to the rasterizer. This contains the mandatory @builtin(position) and your custom @location data.
Rasterizer to Fragment Shader: The rasterizer calculates which pixels to draw and provides the fragment shader with the interpolated values from the vertex shader's @location outputs.
Inputs to the Fragment Shader: The fragment shader receives its own set of inputs: the interpolated data from the rasterizer arrives via @location attributes, and the same uniforms (2) and its own set of built-ins (3) are also available.
Final Output: The fragment shader returns the final RGBA color for the pixel.