render iteration

Jonas H
2026-02-08 14:06:35 +01:00
parent 2422106725
commit 82c3e1e3b0
67 changed files with 6381 additions and 1564 deletions

.gitignore vendored Normal file (1 line changed)

@@ -0,0 +1 @@
/target

BLENDER_SNOW_WORKFLOW.md Normal file (1027 lines changed)

File diff suppressed because it is too large

@@ -81,14 +81,13 @@ As of December 2025, SDL3 Rust bindings are usable but still maturing:
**Using wgpu instead of OpenGL:**
- Modern GPU API abstraction (Vulkan/Metal/DX12/OpenGL backends)
- Better cross-platform support
- WGSL shader language (WebGPU Shading Language)
- WESL shader language (WebGPU Shading Language)
- Type-safe API with explicit resource management
- Low-res framebuffer rendering with 3-bit RGB dithering (retro aesthetic)
**Rendering Architecture:**
- wgpu for 3D mesh rendering with custom shaders
- Low-resolution framebuffer (160×120) upscaled to window size
- Bayer 8×8 dithering for 3-bit RGB color (8 colors total)
- Multiple rendering pipelines: standard meshes and terrain
- Separate bind groups for different material types
@@ -141,7 +140,6 @@ SDL Events → InputState → player_input_system() → InputComponent → movem
**wgpu Rendering System:**
- Low-res framebuffer (160×120) renders to texture
- Bayer 8×8 dithering reduces colors to 3-bit RGB (8 colors)
- Final blit pass upscales framebuffer to window using nearest-neighbor sampling
- Depth buffer for 3D rendering with proper occlusion
@@ -199,10 +197,10 @@ cargo fmt
## Shader Files
WGSL shaders are stored in the `shaders/` directory:
- `shaders/standard.wgsl` - Standard mesh rendering with directional lighting
- `shaders/terrain.wgsl` - Terrain rendering with shadow mapping (no displacement)
- `shaders/blit.wgsl` - Fullscreen blit for upscaling low-res framebuffer
WESL shaders are stored in the `src/shaders/` directory:
- `src/shaders/standard.wesl` - Standard mesh rendering with directional lighting
- `src/shaders/terrain.wesl` - Terrain rendering with shadow mapping (no displacement)
- `src/shaders/blit.wgsl` - Fullscreen blit for upscaling low-res framebuffer
Shaders are loaded at runtime via `std::fs::read_to_string()`, allowing hot-reloading by restarting the application.
@@ -230,7 +228,7 @@ Shaders are loaded at runtime via `std::fs::read_to_string()`, allowing hot-relo
**Rendering:**
- `render.rs` - wgpu renderer, pipelines, bind groups, DrawCall execution
- `shader.rs` - Standard mesh shader (WGSL) with diffuse+ambient lighting
- `shader.rs` - Standard mesh shader (WESL) with diffuse+ambient lighting
- `terrain.rs` - Terrain entity spawning, glTF loading, EXR heightmap → physics collider
- `postprocess.rs` - Low-res framebuffer and blit shader for upscaling
- `mesh.rs` - Vertex/Mesh structs, plane/cube mesh generation, glTF loading
@@ -578,86 +576,3 @@ Time::init(); // In main() before game loop
let time = Time::get_time_elapsed(); // Anywhere in code
```
## Current Implementation Status
### Implemented Features
**ECS Architecture:**
- ✅ Full ECS conversion completed
- ✅ Entity system with EntityManager (spawn/despawn/query)
- ✅ Component storages (Transform, Mesh, Physics, Movement, Input, PlayerTag, StateMachine)
- ✅ Systems pipeline (input → state machine → physics → physics sync → render)
- ✅ No `Rc<RefCell<>>` - clean component ownership
- ✅ Event bus integrated as complementary to systems
**Core Rendering:**
- ✅ wgpu renderer with Vulkan backend
- ✅ Low-res framebuffer (160×120) with Bayer dithering
- ✅ Multiple render pipelines (standard mesh + terrain)
- ✅ Directional lighting with diffuse + ambient
- ✅ Terrain rendering (glTF with baked heights, no shader displacement)
- ✅ EXR heightmap loading for physics colliders
- ✅ glTF mesh loading
- ✅ render_system (ECS-based DrawCall generation)
**Input System:**
- ✅ Two-layer input pipeline (InputState → InputComponent)
- ✅ player_input_system converts raw input to gameplay commands
- ✅ SDL event handling in InputState
- ✅ Per-entity InputComponent for controllable entities
**Camera & Debug:**
- ✅ 3D camera with rotation (yaw/pitch)
- ✅ Noclip mode for development (in debug/noclip.rs)
- ✅ Mouse look with relative mouse mode
- ✅ Toggle with 'I' key, 'N' for noclip mode
**Physics:**
- ✅ rapier3d integration with PhysicsManager singleton
- ✅ PhysicsComponent storage (rigidbody/collider handles)
- ✅ physics_sync_system (syncs physics → transforms)
- ✅ Physics step integrated into game loop
- ⚠️ Ground detection not yet implemented
- ⚠️ Movement physics not yet connected
**State Machines:**
- ✅ Generic StateMachine implementation
- ✅ StateMachineStorage (ECS component)
- ✅ state_machine_system updates all state machines
- ✅ Transitions can query ECS components
- ⚠️ Player state transitions not yet configured
**Player:**
- ✅ Player entity spawning function
- ✅ Components: Transform, Mesh, Physics, Movement, Input, PlayerTag
- ⚠️ Movement system not yet implemented
- ⚠️ State machine not yet attached to player
- ⚠️ Currently inactive (noclip camera used instead)
**Movement Configuration:**
- ✅ Horizontal movement config (Bezier acceleration curves)
- ✅ Vertical movement config (jump mechanics)
- ✅ MovementComponent storage
- ⚠️ Movement system not yet implemented
- ⚠️ Not yet integrated with physics
### Not Yet Implemented
- ❌ Movement system (apply InputComponent → physics velocities)
- ❌ Ground detection and collision response
- ❌ Player state machine configuration
- ❌ Camera follow behavior (tracks player entity)
- ❌ Snow deformation compute shaders
- ❌ Debug UI system
### Current Focus
**ECS migration is complete!** The architecture is now fully entity-component-system based with clean separation of data and logic. The next steps are:
1. Implement movement_system to apply InputComponent to physics
2. Configure player state machine transitions
3. Implement ground detection
4. Add camera follow system
5. Integrate snow deformation
The noclip camera mode serves as the primary navigation method for testing. Press 'N' to toggle noclip mode.

Cargo.lock generated (868 lines changed)

File diff suppressed because it is too large

@@ -11,10 +11,13 @@ glam = "0.30"
anyhow = "1.0"
rapier3d = "0.31"
bytemuck = { version = "1.14", features = ["derive"] }
gltf = "1.4"
image = { version = "0.25", features = ["exr"] }
gltf = { version = "1.4", features = ["KHR_lights_punctual", "extras"] }
image = { version = "0.25", default-features = false, features = ["png"] }
exr = "1.72"
half = "2.4"
kurbo = "0.11"
nalgebra = { version = "0.34.1", features = ["convert-glam030"] }
serde_json = "1.0"
wesl = "0.2"
[build-dependencies]
wesl = "0.2"

QWEN.md Normal file (351 lines changed)

@@ -0,0 +1,351 @@
# Snow Trail SDL Project
## Model Usage Guide
**When interacting with this codebase, I follow a step-by-step, concise approach:**
1. **Start with exploration**: Read files to understand context before making changes
2. **Build incrementally**: Make small, targeted changes and verify them
3. **Test after changes**: Run `cargo check` and relevant tests
4. **Keep explanations brief**: Code should speak for itself; comments only for complex logic
5. **Follow existing patterns**: Mimic the style, structure, and conventions in the codebase
**For task management**, I use the todo list to track multi-step work:
- Mark tasks as `in_progress` when starting
- Mark tasks as `completed` immediately after finishing
- Add new tasks when scope expands
- Never batch multiple completions
## Project Overview
This is a pure Rust game engine (not yet a game) migrated from a Godot-based project. It targets a 3D game and uses SDL3 for windowing/input, wgpu for rendering, and rapier3d for physics, with a low-res retro aesthetic achieved through dithering.
The project implements an ECS (Entity Component System) architecture without engine dependencies, providing core systems for rendering, physics, input handling, and entity management.
### Key Technologies
- **SDL3**: Windowing and input handling (latest stable bindings)
- **wgpu**: Modern GPU API abstraction for rendering (Vulkan/Metal/DX12/OpenGL backends)
- **rapier3d**: Fast 3D physics engine
- **glam**: Fast vector/matrix math library
- **gltf**: Loading 3D models in glTF format
- **exr**: Loading EXR heightmap files for physics colliders
- **kurbo**: Bezier curve evaluation for movement acceleration curves
### Architecture: Pure ECS
- **Entities**: Just IDs (`EntityHandle = u64`), managed by `EntityManager`
- **Components**: Pure data structures stored in component storages (HashMap)
- **Systems**: Functions that query entities with specific component combinations
- **No Rc<RefCell<>>** - Clean ownership model with components as data in hashmaps
- **Component Storages** owned by single `World` struct
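The layout above can be sketched as a minimal, self-contained skeleton (types simplified and illustrative; the project's real `World` owns many more storages):

```rust
use std::collections::HashMap;

type EntityHandle = u64;

// A component is pure data, stored by value and keyed by entity.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Transform {
    position: [f32; 3],
}

// Hands out monotonically increasing entity IDs.
#[derive(Default)]
struct EntityManager {
    next: EntityHandle,
}

impl EntityManager {
    fn spawn(&mut self) -> EntityHandle {
        let id = self.next;
        self.next += 1;
        id
    }
}

// The World owns every storage; systems borrow it and query by entity.
#[derive(Default)]
struct World {
    entities: EntityManager,
    transforms: HashMap<EntityHandle, Transform>,
}
```

Because components live in plain `HashMap`s owned by one struct, systems never need `Rc<RefCell<>>`: they just take `&mut World`.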
## Building and Running
### Build Commands
```bash
cargo build
cargo build --release
cargo check
cargo test
cargo run
cargo fmt
```
### Shader Compilation
The project uses a custom shader compilation system via the `wesl` crate:
- WGSL/WESL shaders are compiled at build time via `build.rs`
- Shaders are loaded at runtime via `std::fs::read_to_string()`, allowing hot-reloading by restarting the application
- Build artifact: `standard` shader package
### Runtime Behavior
- Window resolution: 800×600 (resizable)
- Rendering resolution: Low-res framebuffer (160×120) upscaled to window
- Target frame rate: 60 FPS with fixed-timestep physics (1/60 s)
- Default mode: Noclip camera active
- Toggle modes: Press 'N' to toggle noclip/follow modes
## Development Conventions
### Code Style (from CLAUDE.md)
- **NO inline comments unless ABSOLUTELY necessary** - Code must be self-documenting
- **Doc comments (`///`)** only for public APIs and complex algorithms
- **All `use` statements must be at the file level** (module top), not inside function bodies
- **NO inline paths** - always add `use` statements at the top of files
- **Formatting**: `brace_style = "AlwaysNextLine"`, `control_brace_style = "AlwaysNextLine"`
### File Structure
```
src/
├── main.rs - SDL3 event loop, game loop orchestration, system execution order
├── entity.rs - EntityManager for entity lifecycle (spawn/despawn/query)
├── world.rs - World struct owning all component storages
├── camera.rs - 3D camera with rotation and follow behavior
├── physics.rs - PhysicsManager singleton (rapier3d world)
├── player.rs - Player entity spawning function
├── terrain.rs - Terrain entity spawning, glTF loading, EXR heightmap loading
├── render.rs - wgpu renderer, pipelines, bind groups, DrawCall execution
├── postprocess.rs - Low-res framebuffer and blit shader for upscaling
├── mesh.rs - Vertex/Mesh structs, plane/cube mesh generation, glTF loading
├── shader.rs - Standard mesh shader (WGSL) with diffuse+ambient lighting
├── state.rs - Generic StateMachine implementation
├── event.rs - Type-safe event bus (complementary to ECS)
├── picking.rs - Ray casting for mouse picking (unused currently)
├── heightmap.rs - EXR heightmap loading utilities
├── draw.rs - DrawManager (legacy, kept for compatibility)
├── texture_loader.rs - Texture loading utilities
├── systems/ - ECS systems (input, state_machine, physics_sync, render, camera)
│   ├── input.rs
│   ├── state_machine.rs
│   ├── physics_sync.rs
│   ├── render.rs
│   ├── camera_follow.rs
│   ├── camera_input.rs
│   └── camera_noclip.rs
├── components/ - ECS component definitions
│   ├── input.rs
│   ├── mesh.rs
│   ├── movement.rs
│   ├── physics.rs
│   ├── player_tag.rs
│   ├── jump.rs
│   ├── camera.rs
│   └── camera_follow.rs
├── utility/ - Utility modules
│   ├── input.rs - InputState (raw SDL input handling)
│   ├── time.rs - Time singleton (game time tracking)
│   └── transform.rs - Transform struct (position/rotation/scale data type)
├── debug/ - Debug utilities
│   ├── noclip.rs - Noclip camera controller
│   └── render_collider_debug.rs
└── shaders/ - WGSL/WESL shader files
    ├── shared.wesl - Shared shader utilities
    ├── standard.wesl - Standard mesh rendering with directional lighting
    ├── terrain.wesl - Terrain rendering with shadow mapping
    └── blit.wgsl - Fullscreen blit for upscaling low-res framebuffer
```
### System Execution Order (main.rs game loop)
1. **SDL Events → InputState**: Poll events, handle raw input
2. **InputState → InputComponent**: `player_input_system()` converts raw input to gameplay commands
3. **State Machine Update**: `state_machine_physics_system()` and `state_machine_system()`
4. **Physics Simulation**: Fixed timestep physics step
5. **Physics → Transforms**: `physics_sync_system()` syncs physics bodies to transforms
6. **Rendering**: `render_system()` generates DrawCalls, renderer executes pipeline
7. **Cleanup**: Clear just-pressed states
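The seven steps above boil down to a fixed sequence of system calls per frame, which can be sketched like this (system bodies stubbed here; the real systems do the work described above):

```rust
#[derive(Default)]
struct World {
    frames: u32,
}

// Stub systems; in the project each takes &mut World and queries storages.
fn player_input_system(_w: &mut World) {}
fn state_machine_system(_w: &mut World) {}
fn physics_step(_w: &mut World) {}
fn physics_sync_system(_w: &mut World) {}
fn render_system(w: &mut World) {
    w.frames += 1;
}

// One frame: input → state machines → physics → sync → render.
fn run_frame(world: &mut World) {
    player_input_system(world);
    state_machine_system(world);
    physics_step(world);
    physics_sync_system(world);
    render_system(world);
}
```

Keeping the order explicit in one function makes data dependencies (e.g. physics must run before `physics_sync_system`) easy to audit.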
### ECS Component Storages
All storages are owned by the `World` struct:
- `TransformStorage` - Position, rotation, scale
- `MeshStorage` - Mesh data + render pipeline
- `PhysicsStorage` - Rapier3d rigidbody/collider handles
- `MovementStorage` - Movement config + state
- `JumpStorage` - Jump mechanics state
- `InputStorage` - Gameplay input commands
- `PlayerTagStorage` - Marker for player entities
- `StateMachineStorage` - Behavior state machines
- `CameraStorage` - Camera components
- `CameraFollowStorage` - Camera follow behavior
### Input Handling (Two-Layer Pipeline)
**Layer 1: Raw Input** (`utility/input.rs` - `InputState`):
- Global singleton for SDL event handling
- Tracks raw hardware state (W/A/S/D pressed, mouse delta, etc.)
- Handles SDL events via `handle_event()` method
**Layer 2: Gameplay Commands** (`components/input.rs` - `InputComponent`):
- Per-entity ECS component
- Stores processed gameplay commands (move_direction, jump_pressed)
- Filled by `player_input_system()` which reads `InputState`
**Input Flow**: SDL Events → InputState → InputComponent → Movement Systems
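That flow can be sketched as follows (field names are illustrative, not the project's exact definitions):

```rust
// Layer 1: raw hardware state, filled from SDL events (simplified).
#[derive(Default)]
struct InputState {
    w: bool,
    a: bool,
    s: bool,
    d: bool,
    jump: bool,
}

// Layer 2: per-entity gameplay commands consumed by movement systems.
#[derive(Default, Debug, PartialEq)]
struct InputComponent {
    move_direction: [f32; 2],
    jump_pressed: bool,
}

// Converts raw key state into a normalized move direction plus commands.
fn player_input_system(raw: &InputState, out: &mut InputComponent) {
    let x = (raw.d as i8 - raw.a as i8) as f32;
    let z = (raw.s as i8 - raw.w as i8) as f32;
    let len = (x * x + z * z).sqrt();
    out.move_direction = if len > 0.0 { [x / len, z / len] } else { [0.0, 0.0] };
    out.jump_pressed = raw.jump;
}
```

The split keeps SDL types out of gameplay code: only Layer 1 ever sees an SDL event.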
**Current Controls**:
- `W/A/S/D`: Movement
- `Space`: Jump
- `Shift`: Speed boost (noclip mode)
- `I`: Toggle mouse capture
- `Escape`: Quit game
- `N`: Toggle noclip/follow mode
- Mouse motion: Camera look (yaw/pitch)
### Rendering Pipeline
- **Low-res framebuffer** (160×120) renders to texture
- **Bayer 8×8 dithering** reduces colors to 3-bit RGB (8 colors)
- **Final blit pass** upscales framebuffer to window using nearest-neighbor sampling
- **Depth buffer** for 3D rendering with proper occlusion
- **Multiple render pipelines**: Standard mesh and terrain pipelines
- **Directional lighting**: Diffuse + ambient (basic Phong model)
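The dithering step can be illustrated with a stdlib-only sketch (not the project's shader, which does this in WGSL on the GPU): an 8×8 Bayer matrix assigns each pixel a threshold, and quantizing each RGB channel to 1 bit against that threshold yields the 8-color palette.

```rust
// Recursively build an n×n Bayer index matrix (n must be a power of two).
fn bayer(n: usize) -> Vec<Vec<u32>> {
    if n == 1 {
        return vec![vec![0]];
    }
    let half = bayer(n / 2);
    let mut m = vec![vec![0u32; n]; n];
    for y in 0..n / 2 {
        for x in 0..n / 2 {
            let v = 4 * half[y][x];
            m[y][x] = v;
            m[y][x + n / 2] = v + 2;
            m[y + n / 2][x] = v + 3;
            m[y + n / 2][x + n / 2] = v + 1;
        }
    }
    m
}

// Quantize one channel (0.0..=1.0) to 1 bit using the ordered-dither threshold
// at pixel (x, y); the matrix tiles across the framebuffer.
fn dither_channel(value: f32, x: usize, y: usize, m: &[Vec<u32>]) -> u8 {
    let n = m.len();
    let threshold = (m[y % n][x % n] as f32 + 0.5) / (n * n) as f32;
    if value > threshold { 1 } else { 0 }
}
```

Applying `dither_channel` to R, G, and B independently produces one of the 2³ = 8 output colors per pixel.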
### Terrain System
- **glTF mesh** exported from Blender 5.0 with baked height values in vertices
- **EXR heightmap** loaded for physics colliders (single-channel R32Float format)
- **Heightfield collider** created directly from EXR data
- **No runtime displacement** - vertices rendered directly
- **Separate terrain pipeline** for terrain-specific rendering
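The heightfield idea can be illustrated with a small bilinear lookup (a hypothetical helper; the actual collider is built by rapier3d directly from the EXR grid):

```rust
// Sample a row-major height grid (e.g. decoded from the single-channel EXR)
// at normalized coordinates (u, v) in 0.0..=1.0, with bilinear interpolation.
fn sample_height(heights: &[f32], width: usize, height: usize, u: f32, v: f32) -> f32 {
    let fx = u.clamp(0.0, 1.0) * (width - 1) as f32;
    let fy = v.clamp(0.0, 1.0) * (height - 1) as f32;
    let (x0, y0) = (fx as usize, fy as usize);
    let (x1, y1) = ((x0 + 1).min(width - 1), (y0 + 1).min(height - 1));
    let (tx, ty) = (fx - x0 as f32, fy - y0 as f32);
    let h = |x: usize, y: usize| heights[y * width + x];
    // Interpolate along x on both rows, then along y between them.
    let top = h(x0, y0) * (1.0 - tx) + h(x1, y0) * tx;
    let bottom = h(x0, y1) * (1.0 - tx) + h(x1, y1) * tx;
    top * (1.0 - ty) + bottom * ty
}
```

Because the glTF vertices were baked from the same source grid, this physics-side lookup agrees with what the renderer draws.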
### Build System
- **`build.rs`**: Custom build script using `wesl` crate
- **Shader compilation**: the `package::standard` package is compiled into the `standard` artifact
- **No external dependencies** needed for shader compilation at build time
### Dependencies Rationale
- **sdl3**: Modern SDL3 bindings for future-proofing
- **wgpu**: Modern GPU API with cross-platform support
- **rapier3d**: Fast physics engine with good Rust integration
- **gltf**: Standard 3D model format for asset pipeline
- **exr**: High-dynamic-range heightmap loading for physics
- **kurbo**: Bezier curves for smooth movement acceleration
- **bytemuck**: Safe byte casting for GPU buffer uploads
## Current Implementation Status
### Implemented Features
✅ Full ECS architecture (entities, components, systems)
✅ SDL3 windowing and input handling
✅ wgpu rendering with low-res framebuffer
✅ Multiple render pipelines (standard mesh + terrain)
✅ Bayer dithering for retro aesthetic
✅ glTF mesh loading
✅ EXR heightmap loading for physics
✅ rapier3d physics integration
✅ State machine system (generic implementation)
✅ Event bus (complementary to ECS)
✅ Camera system (free look + follow modes)
✅ Noclip mode for development
✅ Two-layer input pipeline
### In Progress
⚠️ Movement system (apply InputComponent → physics velocities)
⚠️ Ground detection and collision response
⚠️ Player state machine configuration (transitions not yet set up)
⚠️ Camera follow behavior (partial implementation)
⚠️ Snow deformation compute shaders
⚠️ Debug UI system
### Known Limitations
- Player entity spawns but is inactive (noclip camera used for testing)
- Movement physics not yet connected to input
- Ground detection not implemented
- State machine transitions not configured
- Camera follow needs refinement
## Content Creation Workflow
### Blender 5.0 (blender/)
- **Terrain modeling**: `terrain.blend`
- **Player character**: `player_mesh.blend`
- **Export formats**:
- glTF: `meshes/` for rendering (baked heights in vertices)
- EXR: `textures/` single-channel float heightmap for physics
### GIMP (gimp/)
- **Dither patterns**: `dither_patterns.xcf` (Bayer matrix patterns)
### Export Process
1. Model terrain in Blender 5.0
2. Export as glTF with baked height values
3. Export same terrain as EXR heightmap
4. Both files represent same data (visual/physics sync guaranteed)
## Future Development
### Next Steps (from CLAUDE.md)
1. Implement `movement_system` to apply `InputComponent` to physics velocities
2. Configure player state machine transitions (idle → walking → jumping → falling)
3. Implement ground detection (raycasting with QueryPipeline)
4. Add camera follow system (tracks player entity)
5. Integrate snow deformation compute shaders
6. Implement debug UI system for parameter tweaking
### Testing Strategy
- Systems are pure functions (easy to test)
- Create multiple `World` instances for isolation
- Query patterns are predictable
- State machine transitions are testable
## Technical Notes
### EXR Heightmap Loading (Physics Only)
```rust
use exr::prelude::*;

// Read every channel of every layer at the largest resolution level.
let image = read()
    .no_deep_data()
    .largest_resolution_level()
    .all_channels()
    .all_layers()
    .all_attributes()
    .from_file("heightmap.exr")?;
```
### Component Storage Pattern
```rust
pub struct TransformStorage {
    pub components: HashMap<EntityHandle, Transform>,
}

impl TransformStorage {
    pub fn with_mut<F, R>(&mut self, entity: EntityHandle, f: F) -> Option<R>
    where
        F: FnOnce(&mut Transform) -> R,
    {
        self.components.get_mut(&entity).map(f)
    }
}
```
### State Machine Integration
- TypeId-based state identification
- Transitions as closures (can capture entity ID)
- State callbacks receive `&mut World` for component access
- Safe pattern: Remove → Update → Insert to avoid borrow conflicts
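The Remove → Update → Insert pattern can be sketched as follows (illustrative types; the real implementation is TypeId-based with closure transitions):

```rust
use std::collections::HashMap;

type EntityHandle = u64;

#[derive(Default)]
struct World {
    // Other component storages live here; state updates need &mut World.
    health: HashMap<EntityHandle, i32>,
}

struct StateMachine {
    state: &'static str,
}

impl StateMachine {
    fn update(&mut self, entity: EntityHandle, world: &mut World) {
        // Transitions may freely query/mutate components, because the
        // machine itself is no longer borrowed from its storage here.
        if world.health.get(&entity).copied().unwrap_or(0) <= 0 {
            self.state = "dead";
        }
    }
}

fn state_machine_system(
    machines: &mut HashMap<EntityHandle, StateMachine>,
    world: &mut World,
) {
    let entities: Vec<EntityHandle> = machines.keys().copied().collect();
    for entity in entities {
        // Remove → Update → Insert: the machine is taken out of its storage
        // before its update borrows &mut World, avoiding aliasing conflicts.
        if let Some(mut machine) = machines.remove(&entity) {
            machine.update(entity, world);
            machines.insert(entity, machine);
        }
    }
}
```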
### Event System (Complementary to ECS)
- Handles irregular, one-time occurrences
- Cross-system messaging without tight coupling
- Global `add_listener()` and `emit()` functions
- Example: FootstepEvent for snow deformation
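A minimal sketch of such a bus (written as a local struct here for brevity; the project wraps this behind global `add_listener()`/`emit()` functions):

```rust
// A tiny typed event bus: listeners are boxed closures over one event type.
struct EventBus<E> {
    listeners: Vec<Box<dyn FnMut(&E)>>,
}

impl<E> EventBus<E> {
    fn new() -> Self {
        EventBus { listeners: Vec::new() }
    }

    fn add_listener(&mut self, f: impl FnMut(&E) + 'static) {
        self.listeners.push(Box::new(f));
    }

    fn emit(&mut self, event: &E) {
        for listener in &mut self.listeners {
            listener(event);
        }
    }
}

// Example event carrying the data a snow-deformation listener would need.
struct FootstepEvent {
    position: [f32; 3],
}
```

The emitter never knows who listens, which is exactly the loose coupling the bullet points above describe.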
### Shader Hot-Reloading
- Shaders loaded at runtime via `std::fs::read_to_string()`
- Restart application to reload shaders
- No recompilation needed
## Quick Reference
### Running the Project
```bash
cd /home/jonas/projects/snow_trail_sdl
cargo run
```
### Building for Release
```bash
cargo build --release
```
### Formatting Code
```bash
cargo fmt
```
### Toggling Modes at Runtime
- Press **N** to toggle between noclip and follow modes
- Press **I** to toggle mouse capture
- Press **Escape** to quit
### Working with Shaders
- Edit `.wesl` files in `src/shaders/`
- Changes take effect on application restart
- Build script compiles the `package::standard` package into the `standard` artifact
### Adding New Components
1. Define component struct (pure data, no Rc<RefCell>)
2. Add storage to `world.rs` (HashMap<EntityHandle, Component>)
3. Add storage to `World` struct
4. Update `World::despawn()` to clean up component
5. Create systems that query and modify the component
### Adding New Systems
1. Add function in `systems/` directory
2. Import at top of `main.rs`
3. Add to system execution order in game loop
4. Systems receive `&mut World` (or `&World` for read-only)

blender/scripts/README.md Normal file (139 lines changed)

@@ -0,0 +1,139 @@
# Blender Export Scripts
Python scripts for generating textures and heightmaps from Blender terrain meshes.
## Prerequisites
- Blender 5.0+
- Terrain mesh object (default name: "TerrainPlane")
## Scripts
### generate_heightmap.py
Bakes EXR heightmap from terrain mesh using Blender's render system.
**Output:** `textures/terrain_heightmap.exr` (R32Float single-channel)
**Usage:**
```python
# In Blender's Scripting workspace - just run the script!
# It will automatically find TerrainPlane and bake to textures/terrain_heightmap.exr
```
**Or run from command line:**
```bash
blender terrain.blend --background --python scripts/generate_heightmap.py
```
**Custom parameters:**
```python
from generate_heightmap import bake_heightmap

bake_heightmap(
    terrain_obj=bpy.data.objects["TerrainPlane"],
    resolution=1000,
    output_path="path/to/output.exr"
)
```
### generate_normal_map.py
Generates normal map from terrain mesh for neighbor sampling in shaders.
**Output:** `textures/terrain_normals.png` (RGB encoded normals)
**Usage:**
```python
from generate_normal_map import save_normal_map

save_normal_map(
    output_path=project_root / "textures" / "terrain_normals.png",
    resolution=1024,
    blur_iterations=2,
    terrain_name="TerrainPlane"
)
```
### generate_flowmap.py
Generates flowmap for water/snow flow effects.
**Output:** `textures/terrain_flowmap.png`
## Terrain Export Workflow
1. **Model terrain in Blender 5.0**
- Create/sculpt terrain mesh
- Add modifiers (Subdivision, Displacement, etc.)
- Ensure terrain has UV mapping
2. **Bake heightmap**
- Run `generate_heightmap.py` script
- Uses Blender's baking system (like baking a texture)
- Creates `textures/terrain_heightmap.exr`
- Automatically applies all modifiers
3. **Export glTF with baked heights**
- Select terrain mesh
- File → Export → glTF 2.0
- Save as `meshes/terrain.gltf`
- Heights are baked in vertex positions
4. **Both files in sync**
- glTF: rendering (vertices with baked heights)
- EXR: physics (rapier3d heightfield collider)
- Both from same source = guaranteed match
## Resolution Guidelines
- **Heightmap (EXR):** 512×512, 1000×1000, or 1024×1024
- Higher = more accurate collision
- Lower = faster loading
- Default: 1000×1000
- Uses Blender's render sampling (no gaps!)
- **Normal Map:** 1024×1024 or 2048×2048
- For shader neighbor sampling
- Higher quality for detailed terrain
## Customization
Change parameters by editing the script or calling directly:
```python
from generate_heightmap import bake_heightmap

bake_heightmap(
    terrain_obj=bpy.data.objects["MyTerrain"],
    resolution=1024,
    output_path="custom/path.exr"
)
```
## Output Files
```
project_root/
├── meshes/
│   └── terrain.gltf              # Mesh with baked heights (manual export)
└── textures/
    ├── terrain_heightmap.exr     # Heightmap for physics (generated)
    ├── terrain_normals.png       # Normal map (generated)
    └── terrain_flowmap.png       # Flow map (generated)
```
## Troubleshooting
**"Object not found":**
- Ensure terrain object exists
- Check object name matches parameter
- Script will auto-detect objects with "terrain" or "plane" in name
**"Mesh has no vertices":**
- Apply all modifiers before running script
- Check mesh is not empty
**EXR export fails:**
- Ensure Blender has EXR support enabled
- Check output directory exists and is writable


@@ -0,0 +1,135 @@
import bpy
from pathlib import Path


def bake_heightmap(terrain_obj, resolution=1024, output_path=None):
    """
    Bake terrain heightmap using Blender's render/bake system.

    Args:
        terrain_obj: Terrain mesh object
        resolution: Texture resolution (square)
        output_path: Path to save EXR file
    """
    print(f"Baking heightmap for: {terrain_obj.name}")
    print(f"Resolution: {resolution}×{resolution}")

    # Ensure object has UV map
    if not terrain_obj.data.uv_layers:
        print("Adding UV map...")
        terrain_obj.data.uv_layers.new(name="UVMap")

    # Create new image for baking
    bake_image = bpy.data.images.new(
        name="Heightmap_Bake",
        width=resolution,
        height=resolution,
        alpha=False,
        float_buffer=True,
        is_data=True
    )

    # Setup material for baking
    if not terrain_obj.data.materials:
        mat = bpy.data.materials.new(name="Heightmap_Material")
        terrain_obj.data.materials.append(mat)
    else:
        mat = terrain_obj.data.materials[0]
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    nodes.clear()

    # Create nodes for height baking.
    # Geometry node to get position
    geo_node = nodes.new(type='ShaderNodeNewGeometry')

    # Separate XYZ to get Z (height)
    separate_node = nodes.new(type='ShaderNodeSeparateXYZ')
    mat.node_tree.links.new(geo_node.outputs['Position'], separate_node.inputs['Vector'])

    # Emission shader to output height value
    emission_node = nodes.new(type='ShaderNodeEmission')
    mat.node_tree.links.new(separate_node.outputs['Z'], emission_node.inputs['Color'])

    # Material output
    output_node = nodes.new(type='ShaderNodeOutputMaterial')
    mat.node_tree.links.new(emission_node.outputs['Emission'], output_node.inputs['Surface'])

    # Add image texture node (required for baking target)
    image_node = nodes.new(type='ShaderNodeTexImage')
    image_node.image = bake_image
    image_node.select = True
    nodes.active = image_node

    # Select object and set mode
    bpy.context.view_layer.objects.active = terrain_obj
    terrain_obj.select_set(True)

    # Setup render settings for baking
    bpy.context.scene.render.engine = 'CYCLES'
    bpy.context.scene.cycles.samples = 1
    bpy.context.scene.cycles.bake_type = 'EMIT'

    print("Baking...")
    bpy.ops.object.bake(type='EMIT', use_clear=True)
    print("Bake complete!")

    # Save as EXR
    if output_path:
        bake_image.filepath_raw = str(output_path)
        bake_image.file_format = 'OPEN_EXR'
        bake_image.use_half_precision = False
        scene = bpy.context.scene
        original_color_mode = scene.render.image_settings.color_mode
        original_color_depth = scene.render.image_settings.color_depth
        scene.render.image_settings.color_mode = 'BW'
        scene.render.image_settings.color_depth = '32'
        bake_image.save_render(str(output_path), scene=scene)
        scene.render.image_settings.color_mode = original_color_mode
        scene.render.image_settings.color_depth = original_color_depth
        print(f"Saved to: {output_path}")

    # Cleanup
    bpy.data.images.remove(bake_image)
    return True


if __name__ == "__main__":
    project_root = Path(bpy.data.filepath).parent.parent
    output_path = project_root / "textures" / "terrain_heightmap.exr"
    output_path.parent.mkdir(parents=True, exist_ok=True)

    # Find terrain object
    terrain_obj = bpy.data.objects.get("TerrainPlane")
    if not terrain_obj:
        print("'TerrainPlane' not found. Searching for terrain mesh...")
        for obj in bpy.data.objects:
            if obj.type == 'MESH' and ('terrain' in obj.name.lower() or 'plane' in obj.name.lower()):
                terrain_obj = obj
                print(f"Using: {obj.name}")
                break
    if not terrain_obj:
        raise ValueError("No terrain object found!")

    bake_heightmap(
        terrain_obj=terrain_obj,
        resolution=1000,
        output_path=output_path
    )

    print("\n" + "=" * 60)
    print("Heightmap baking complete!")
    print(f"Output: {output_path}")
    print("=" * 60)


@@ -0,0 +1,268 @@
import bpy
from pathlib import Path
def find_snow_modifier(terrain_obj):
"""
Find the Geometry Nodes modifier that contains snow_depth attribute.
Returns the modifier or None if not found.
"""
for mod in terrain_obj.modifiers:
if mod.type == 'NODES' and mod.node_group:
# Check if this modifier's node tree has Store Named Attribute with "snow_depth"
for node in mod.node_group.nodes:
if node.type == 'STORE_NAMED_ATTRIBUTE':
if hasattr(node, 'data_type') and node.name and 'snow' in node.name.lower():
return mod
# Check inputs for the name
for input in node.inputs:
if input.name == 'Name' and hasattr(input, 'default_value'):
if input.default_value == 'snow_depth':
return mod
# Fallback: check modifier name
if 'snow' in mod.name.lower():
return mod
return None
def bake_snow_depth(terrain_obj, resolution=512, output_path=None, modifier_name=None):
"""
Bake snow depth attribute to texture using shader-based Cycles baking.
Uses the same approach as generate_heightmap.py.
Requires:
- Terrain object with Geometry Nodes modifier that stores 'snow_depth' attribute
- UV map on terrain mesh
Args:
terrain_obj: Terrain mesh object with snow_depth attribute
resolution: Texture resolution (square)
output_path: Path to save EXR file
modifier_name: Optional specific modifier name to use (e.g., "Snow")
"""
print(f"Baking snow depth for: {terrain_obj.name}")
print(f"Resolution: {resolution}×{resolution}")
# Find the snow geometry nodes modifier
if modifier_name:
geo_nodes_modifier = terrain_obj.modifiers.get(modifier_name)
if not geo_nodes_modifier:
raise ValueError(f"Modifier '{modifier_name}' not found on {terrain_obj.name}")
print(f"Using specified modifier: {modifier_name}")
else:
geo_nodes_modifier = find_snow_modifier(terrain_obj)
if not geo_nodes_modifier:
print("\nAvailable Geometry Nodes modifiers:")
for mod in terrain_obj.modifiers:
if mod.type == 'NODES':
print(f" - {mod.name}")
raise ValueError(
f"No Geometry Nodes modifier with 'snow_depth' attribute found on {terrain_obj.name}!\n"
f"Either add snow accumulation modifier, or specify modifier_name parameter."
)
print(f"Found snow modifier: {geo_nodes_modifier.name}")
modifier_states = {}
print(f"\nDisabling modifiers after '{geo_nodes_modifier.name}' for baking...")
target_mod_index = list(terrain_obj.modifiers).index(geo_nodes_modifier)
for i, mod in enumerate(terrain_obj.modifiers):
modifier_states[mod.name] = {
'show_viewport': mod.show_viewport,
'show_render': mod.show_render
}
if i > target_mod_index:
print(f" Temporarily disabling: {mod.name}")
mod.show_viewport = False
mod.show_render = False
bpy.context.view_layer.update()
depsgraph = bpy.context.evaluated_depsgraph_get()
evaluated_obj = terrain_obj.evaluated_get(depsgraph)
eval_mesh = evaluated_obj.to_mesh()
if 'snow_depth' not in eval_mesh.attributes:
evaluated_obj.to_mesh_clear()
for mod_name, state in modifier_states.items():
mod = terrain_obj.modifiers.get(mod_name)
if mod:
mod.show_viewport = state['show_viewport']
mod.show_render = state['show_render']
raise ValueError("snow_depth attribute missing from evaluated geometry")
print(f"✓ Verified 'snow_depth' attribute exists")
evaluated_obj.to_mesh_clear()
# Ensure object has UV map
if not terrain_obj.data.uv_layers:
print("Adding UV map...")
terrain_obj.data.uv_layers.new(name="UVMap")
# Create new image for baking
bake_image = bpy.data.images.new(
name="SnowDepth_Bake",
width=resolution,
height=resolution,
alpha=False,
float_buffer=True,
is_data=True
)
print(f"Created bake image: {bake_image.name}")
original_materials = list(terrain_obj.data.materials)
print(f"Object has {len(original_materials)} material slot(s): {[mat.name if mat else 'None' for mat in original_materials]}")
mat = bpy.data.materials.new(name="SnowDepth_BakeMaterial")
mat.use_nodes = True
nodes = mat.node_tree.nodes
nodes.clear()
attr_node = nodes.new(type='ShaderNodeAttribute')
attr_node.attribute_name = 'snow_depth'
emission_node = nodes.new(type='ShaderNodeEmission')
mat.node_tree.links.new(attr_node.outputs['Fac'], emission_node.inputs['Color'])
output_node = nodes.new(type='ShaderNodeOutputMaterial')
mat.node_tree.links.new(emission_node.outputs['Emission'], output_node.inputs['Surface'])
image_node = nodes.new(type='ShaderNodeTexImage')
image_node.image = bake_image
image_node.select = True
nodes.active = image_node
terrain_obj.data.materials.clear()
terrain_obj.data.materials.append(mat)
print(f"Temporarily replaced all materials with bake material")
# Select object and set mode
bpy.context.view_layer.objects.active = terrain_obj
terrain_obj.select_set(True)
# Ensure we're in object mode
if bpy.context.object and bpy.context.object.mode != 'OBJECT':
bpy.ops.object.mode_set(mode='OBJECT')
# Setup render settings for baking
bpy.context.scene.render.engine = 'CYCLES'
bpy.context.scene.cycles.samples = 1
bpy.context.scene.cycles.bake_type = 'EMIT'
print("Baking with Cycles (EMIT)...")
bpy.ops.object.bake(type='EMIT', use_clear=True)
print("Bake complete!")
# Verify bake has data (not all black/zero)
pixels = list(bake_image.pixels)
r_values = pixels[0::4]  # Image.pixels is flat RGBA; skip the constant 1.0 alpha channel
max_value = max(r_values) if r_values else 0.0
avg_value = sum(r_values) / len(r_values) if r_values else 0.0
non_zero_count = sum(1 for p in r_values if p > 0.0001)
print(f"Baked image stats: max={max_value:.4f}, avg={avg_value:.4f}")
print(f"Non-zero pixels: {non_zero_count} ({non_zero_count / len(r_values) * 100:.1f}%)")
if max_value < 0.0001:
print("\n⚠️ WARNING: Baked image appears to be all black!")
print(" Possible causes:")
print(" - 'snow_depth' attribute doesn't exist in the geometry")
print(" - Geometry Nodes modifier is disabled")
print(" - Store Named Attribute node is not connected")
print(" - Wrong modifier selected (try specifying modifier_name)")
print("\n Continuing anyway, but check your setup...")
else:
print(f"✓ Bake contains data (values up to {max_value:.4f}m)")
# Save as EXR
if output_path:
bake_image.filepath_raw = str(output_path)
bake_image.file_format = 'OPEN_EXR'
bake_image.use_half_precision = False
scene = bpy.context.scene
original_color_mode = scene.render.image_settings.color_mode
original_color_depth = scene.render.image_settings.color_depth
original_exr_codec = scene.render.image_settings.exr_codec
# Use BW mode for single channel (same as heightmap)
scene.render.image_settings.color_mode = 'BW'
scene.render.image_settings.color_depth = '32'
scene.render.image_settings.exr_codec = 'ZIP'
print(f"Saving EXR with settings: color_mode=BW, depth=32, codec=ZIP")
bake_image.save_render(str(output_path), scene=scene)
scene.render.image_settings.color_mode = original_color_mode
scene.render.image_settings.color_depth = original_color_depth
scene.render.image_settings.exr_codec = original_exr_codec
print(f"Saved to: {output_path}")
print(f"Format: OpenEXR, 32-bit float, ZIP compression")
print(f"File size: {output_path.stat().st_size / 1024:.1f} KB")
bpy.data.images.remove(bake_image)
bpy.data.materials.remove(mat)
terrain_obj.data.materials.clear()
for original_mat in original_materials:
terrain_obj.data.materials.append(original_mat)
print(f"Restored {len(original_materials)} original material(s)")
print("\nRestoring modifier states...")
for mod_name, state in modifier_states.items():
mod = terrain_obj.modifiers.get(mod_name)
if mod:
mod.show_viewport = state['show_viewport']
mod.show_render = state['show_render']
print("✓ Modifiers restored")
return True
if __name__ == "__main__":
project_root = Path(bpy.data.filepath).parent.parent
output_path = project_root / "textures" / "snow_depth.exr"
output_path.parent.mkdir(parents=True, exist_ok=True)
# Find terrain object
terrain_obj = bpy.data.objects.get("TerrainPlane")
if not terrain_obj:
print("'TerrainPlane' not found. Searching for terrain mesh...")
for obj in bpy.data.objects:
if obj.type == 'MESH' and ('terrain' in obj.name.lower() or 'plane' in obj.name.lower()):
terrain_obj = obj
print(f"Using: {obj.name}")
break
if not terrain_obj:
raise ValueError("No terrain object found!")
# CONFIGURATION: Specify modifier name if you have multiple Geometry Nodes modifiers
# Leave as None to auto-detect the snow modifier
# Example: modifier_name = "Snow" or "Snow Accumulation"
modifier_name = "Snow Accumulation"  # Set to None to auto-detect by looking for the 'snow_depth' attribute
bake_snow_depth(
terrain_obj=terrain_obj,
resolution=1000,
output_path=output_path,
modifier_name=modifier_name # Specify "Snow" if auto-detect fails
)
print("\n" + "="*60)
print("Snow depth baking complete!")
print(f"Output: {output_path}")
print("="*60)
print("\nNext steps:")
print("1. Verify snow_depth.exr in textures/ directory")
print("2. Open in image viewer to check it's not black")
print("3. Load in game with SnowLayer::load()")
print("4. Test deformation with player movement")
print("\nIf bake is black:")
print("- Check that 'snow_depth' attribute exists in Spreadsheet Editor")
print("- Verify Geometry Nodes modifier has Store Named Attribute node")
print("- Try specifying modifier_name='Snow' explicitly in script")
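The stats check in the script above can be reproduced outside Blender. A minimal sketch (the `bake_stats` helper is hypothetical; Blender's `Image.pixels` is a flat RGBA float list, so only every fourth value carries snow depth):

```python
def bake_stats(pixels, channels=4, threshold=1e-4):
    # Sample only the R channel so the constant alpha channel
    # does not skew max/avg or defeat the all-black warning.
    r = pixels[0::channels]
    if not r:
        return 0.0, 0.0, 0.0
    max_v = max(r)
    avg_v = sum(r) / len(r)
    non_zero_fraction = sum(1 for p in r if p > threshold) / len(r)
    return max_v, avg_v, non_zero_fraction

# Two RGBA texels: one black, one with 0.5 m of snow depth
stats = bake_stats([0.0, 0.0, 0.0, 1.0, 0.5, 0.5, 0.5, 1.0])
```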


build.rs (new file)

@@ -0,0 +1,7 @@
fn main()
{
let wesl = wesl::Wesl::new("src/shaders");
wesl.build_artifact(&"package::standard".parse().unwrap(), "standard");
wesl.build_artifact(&"package::shadow".parse().unwrap(), "shadow");
wesl.build_artifact(&"package::terrain".parse().unwrap(), "terrain");
}


meshes/terrain.gltf

@@ -4,8 +4,36 @@
"version":"2.0"
},
"extensionsUsed":[
"KHR_lights_punctual",
"EXT_mesh_gpu_instancing"
],
"extensionsRequired":[
"KHR_lights_punctual"
],
"extensions":{
"KHR_lights_punctual":{
"lights":[
{
"color":[
1,
1,
1
],
"intensity":543.5141306588226,
"spot":{
"innerConeAngle":0.18840259313583374,
"outerConeAngle":0.18840259313583374
},
"type":"spot",
"range":1000,
"name":"Spot",
"extras":{
"light_tag":"lighthouse"
}
}
]
}
},
"scene":0,
"scenes":[
{
@@ -13,25 +41,33 @@
"nodes":[
0,
1,
2
2,
3,
4
]
}
],
"nodes":[
{
"children":[
3
],
"mesh":1,
"name":"TerrainPlane"
},
{
"mesh":2,
"mesh":0,
"name":"TreePrime",
"translation":[
0,
16.22920036315918,
29.08228302001953,
39.89393615722656
]
},
{
"children":[
5,
6
],
"mesh":3,
"name":"TerrainPlane",
"scale":[
1.000100016593933,
1,
0
1
]
},
{
@@ -42,36 +78,77 @@
-66.48489379882812
]
},
{
"name":"PlayerSpawn",
"translation":[
-351.4849853515625,
119.54279327392578,
202.97006225585938
]
},
{
"extensions":{
"KHR_lights_punctual":{
"light":0
}
},
"name":"Spot",
"rotation":[
-0.16434744000434875,
-0.37808698415756226,
0.006467622704803944,
0.9110424518585205
],
"translation":[
-392.0350036621094,
238.72787475585938,
244.30006408691406
]
},
{
"extensions":{
"EXT_mesh_gpu_instancing":{
"attributes":{
"TRANSLATION":11,
"ROTATION":12,
"SCALE":13
"TRANSLATION":17,
"ROTATION":18,
"SCALE":19
}
}
},
"mesh":0,
"mesh":1,
"name":"TerrainPlane.0"
},
{
"extensions":{
"EXT_mesh_gpu_instancing":{
"attributes":{
"TRANSLATION":17,
"ROTATION":18,
"SCALE":19
}
}
},
"mesh":2,
"name":"TerrainPlane.1"
}
],
"materials":[
{
"doubleSided":true,
"emissiveFactor":[
1,
1,
1
],
"name":"heightmap",
"name":"terrain"
},
{
"doubleSided":true,
"name":"snow",
"pbrMetallicRoughness":{
"baseColorFactor":[
0,
0,
0,
0.800000011920929,
0.800000011920929,
0.800000011920929,
1
]
],
"metallicFactor":0,
"roughnessFactor":0.5
}
}
],
@@ -90,7 +167,7 @@
]
},
{
"name":"Plane.001",
"name":"Cylinder",
"primitives":[
{
"attributes":{
@@ -98,7 +175,7 @@
"NORMAL":5,
"TEXCOORD_0":6
},
"indices":7,
"indices":3,
"material":0
}
]
@@ -108,11 +185,35 @@
"primitives":[
{
"attributes":{
"POSITION":8,
"NORMAL":9,
"TEXCOORD_0":10
"POSITION":7,
"NORMAL":8,
"TEXCOORD_0":9
},
"indices":3
"indices":3,
"material":1
}
]
},
{
"name":"Plane.001",
"primitives":[
{
"attributes":{
"POSITION":10,
"NORMAL":11,
"TEXCOORD_0":12
},
"indices":13,
"material":0
},
{
"attributes":{
"POSITION":14,
"NORMAL":15,
"TEXCOORD_0":16
},
"indices":13,
"material":1
}
]
}
@@ -121,204 +222,296 @@
{
"bufferView":0,
"componentType":5126,
"count":704,
"count":1280,
"max":[
1,
11.999963760375977,
1
5.561562538146973,
16.066009521484375,
5.561562538146973
],
"min":[
-1,
-5.561562538146973,
0,
-1
-5.561562538146973
],
"type":"VEC3"
},
{
"bufferView":1,
"componentType":5126,
"count":704,
"count":1280,
"type":"VEC3"
},
{
"bufferView":2,
"componentType":5126,
"count":704,
"count":1280,
"type":"VEC2"
},
{
"bufferView":3,
"componentType":5123,
"count":1908,
"count":2100,
"type":"SCALAR"
},
{
"bufferView":4,
"componentType":5126,
"count":18196,
"count":1280,
"max":[
500,
122.76703643798828,
500
5.561562538146973,
16.066009521484375,
5.561562538146973
],
"min":[
-500,
-0.000225067138671875,
-500
-5.561562538146973,
0,
-5.561562538146973
],
"type":"VEC3"
},
{
"bufferView":5,
"componentType":5126,
"count":18196,
"count":1280,
"type":"VEC3"
},
{
"bufferView":6,
"componentType":5126,
"count":18196,
"count":1280,
"type":"VEC2"
},
{
"bufferView":7,
"componentType":5123,
"count":61206,
"type":"SCALAR"
"componentType":5126,
"count":1280,
"max":[
5.561562538146973,
16.066009521484375,
5.561562538146973
],
"min":[
-5.561562538146973,
0,
-5.561562538146973
],
"type":"VEC3"
},
{
"bufferView":8,
"componentType":5126,
"count":704,
"max":[
1,
11.999963760375977,
1
],
"min":[
-1,
0,
-1
],
"count":1280,
"type":"VEC3"
},
{
"bufferView":9,
"componentType":5126,
"count":704,
"type":"VEC3"
"count":1280,
"type":"VEC2"
},
{
"bufferView":10,
"componentType":5126,
"count":704,
"type":"VEC2"
"count":10404,
"max":[
500,
110.45686340332031,
500
],
"min":[
-500,
-0.9473495483398438,
-500
],
"type":"VEC3"
},
{
"bufferView":11,
"componentType":5126,
"count":5588,
"count":10404,
"type":"VEC3"
},
{
"bufferView":12,
"componentType":5126,
"count":5588,
"type":"VEC4"
"count":10404,
"type":"VEC2"
},
{
"bufferView":13,
"componentType":5123,
"count":61206,
"type":"SCALAR"
},
{
"bufferView":14,
"componentType":5126,
"count":5588,
"count":10404,
"max":[
500,
110.7568588256836,
500
],
"min":[
-500,
-0.6476199626922607,
-500
],
"type":"VEC3"
},
{
"bufferView":15,
"componentType":5126,
"count":10404,
"type":"VEC3"
},
{
"bufferView":16,
"componentType":5126,
"count":10404,
"type":"VEC2"
},
{
"bufferView":17,
"componentType":5126,
"count":2380,
"type":"VEC3"
},
{
"bufferView":18,
"componentType":5126,
"count":2380,
"type":"VEC4"
},
{
"bufferView":19,
"componentType":5126,
"count":2380,
"type":"VEC3"
}
],
"bufferViews":[
{
"buffer":0,
"byteLength":8448,
"byteLength":15360,
"byteOffset":0,
"target":34962
},
{
"buffer":0,
"byteLength":8448,
"byteOffset":8448,
"byteLength":15360,
"byteOffset":15360,
"target":34962
},
{
"buffer":0,
"byteLength":5632,
"byteOffset":16896,
"byteLength":10240,
"byteOffset":30720,
"target":34962
},
{
"buffer":0,
"byteLength":3816,
"byteOffset":22528,
"byteLength":4200,
"byteOffset":40960,
"target":34963
},
{
"buffer":0,
"byteLength":218352,
"byteOffset":26344,
"byteLength":15360,
"byteOffset":45160,
"target":34962
},
{
"buffer":0,
"byteLength":218352,
"byteOffset":244696,
"byteLength":15360,
"byteOffset":60520,
"target":34962
},
{
"buffer":0,
"byteLength":145568,
"byteOffset":463048,
"byteLength":10240,
"byteOffset":75880,
"target":34962
},
{
"buffer":0,
"byteLength":15360,
"byteOffset":86120,
"target":34962
},
{
"buffer":0,
"byteLength":15360,
"byteOffset":101480,
"target":34962
},
{
"buffer":0,
"byteLength":10240,
"byteOffset":116840,
"target":34962
},
{
"buffer":0,
"byteLength":124848,
"byteOffset":127080,
"target":34962
},
{
"buffer":0,
"byteLength":124848,
"byteOffset":251928,
"target":34962
},
{
"buffer":0,
"byteLength":83232,
"byteOffset":376776,
"target":34962
},
{
"buffer":0,
"byteLength":122412,
"byteOffset":608616,
"byteOffset":460008,
"target":34963
},
{
"buffer":0,
"byteLength":8448,
"byteOffset":731028,
"byteLength":124848,
"byteOffset":582420,
"target":34962
},
{
"buffer":0,
"byteLength":8448,
"byteOffset":739476,
"byteLength":124848,
"byteOffset":707268,
"target":34962
},
{
"buffer":0,
"byteLength":5632,
"byteOffset":747924,
"byteLength":83232,
"byteOffset":832116,
"target":34962
},
{
"buffer":0,
"byteLength":67056,
"byteOffset":753556
"byteLength":28560,
"byteOffset":915348
},
{
"buffer":0,
"byteLength":89408,
"byteOffset":820612
"byteLength":38080,
"byteOffset":943908
},
{
"buffer":0,
"byteLength":67056,
"byteOffset":910020
"byteLength":28560,
"byteOffset":981988
}
],
"buffers":[
{
"byteLength":977076,
"byteLength":1010548,
"uri":"terrain.bin"
}
]
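The `EXT_mesh_gpu_instancing` nodes above point `TRANSLATION`/`ROTATION`/`SCALE` at per-instance accessors, which the loader composes into one model matrix per instance. A sketch of that composition (pure Python, glTF's `(x, y, z, w)` quaternion order; helper names are illustrative):

```python
def quat_to_mat3(q):
    # Standard rotation matrix from a unit quaternion in glTF (x, y, z, w) order
    x, y, z, w = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def trs_to_mat4(t, r, s):
    # Model = T * R * S, written out column by column
    m = quat_to_mat3(r)
    return [
        [m[0][0]*s[0], m[0][1]*s[1], m[0][2]*s[2], t[0]],
        [m[1][0]*s[0], m[1][1]*s[1], m[1][2]*s[2], t[1]],
        [m[2][0]*s[0], m[2][1]*s[1], m[2][2]*s[2], t[2]],
        [0.0, 0.0, 0.0, 1.0],
    ]
```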


@@ -1,38 +0,0 @@
@vertex
fn vs_main(input: VertexInput) -> VertexOutput {
var output: VertexOutput;
let instance_model = mat4x4<f32>(
input.instance_model_0,
input.instance_model_1,
input.instance_model_2,
input.instance_model_3
);
let world_pos = instance_model * vec4<f32>(input.position, 1.0);
output.world_position = world_pos.xyz;
output.clip_position = uniforms.projection * uniforms.view * world_pos;
let normal_matrix = mat3x3<f32>(
instance_model[0].xyz,
instance_model[1].xyz,
instance_model[2].xyz
);
output.world_normal = normalize(normal_matrix * input.normal);
output.light_space_position = uniforms.light_view_projection * world_pos;
return output;
}
@fragment
fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
let shadow = sample_shadow_map(input.light_space_position);
let tile_scale = 1.0;
let flowmap_strokes = flowmap_path_lighting_with_shadow(input.world_position, input.world_normal, tile_scale, shadow);
let point_strokes = point_lighting_with_shadow(input.world_position, input.world_normal, vec3<f32>(0.0, 100.0, 0.0), tile_scale, shadow);
let brightness = max(flowmap_strokes, point_strokes);
return vec4<f32>(brightness, brightness, brightness, 1.0);
}
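The removed vertex stage builds its normal matrix from the upper-left 3×3 of the instance model, which is only correct for rotation plus uniform scale (non-uniform scale needs the inverse-transpose). A sketch of that transform (illustrative helper, not engine code):

```python
def mat3_mul_vec3(m, v):
    # Apply the upper-left 3x3 of a model matrix to a normal; valid for
    # rotation and uniform scale only (then renormalize, as the shader does)
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# 90-degree rotation about Y carries +Z to +X
rot_y_90 = [[0.0, 0.0, 1.0],
            [0.0, 1.0, 0.0],
            [-1.0, 0.0, 0.0]]
n = mat3_mul_vec3(rot_y_90, (0.0, 0.0, 1.0))
```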

src/camera.rs

@@ -1,6 +1,11 @@
use bytemuck::{Pod, Zeroable};
use glam::{Mat4, Vec3};
use crate::components::CameraComponent;
use crate::entity::EntityHandle;
use crate::render;
use crate::world::{Transform, World};
#[repr(C)]
#[derive(Clone, Copy, Pod, Zeroable)]
pub struct CameraUniforms
@@ -26,147 +31,21 @@ impl CameraUniforms
}
}
pub struct Camera
{
pub position: Vec3,
pub target: Vec3,
pub up: Vec3,
pub fov: f32,
pub aspect: f32,
pub near: f32,
pub far: f32,
pub yaw: f32,
pub pitch: f32,
pub is_following: bool,
pub follow_offset: Vec3,
}
pub struct Camera;
impl Camera
{
pub fn init(aspect: f32) -> Self
pub fn spawn(world: &mut World, position: Vec3) -> EntityHandle
{
Self {
position: Vec3::new(15.0, 15.0, 15.0),
target: Vec3::ZERO,
up: Vec3::Y,
fov: 45.0_f32.to_radians(),
aspect,
near: 0.1,
far: 100.0,
yaw: -135.0_f32.to_radians(),
pitch: -30.0_f32.to_radians(),
is_following: true,
follow_offset: Vec3::ZERO,
}
}
let camera_entity = world.spawn();
pub fn view_matrix(&self) -> Mat4
{
Mat4::look_at_rh(self.position, self.target, self.up)
}
let camera_component = CameraComponent::new(render::aspect_ratio());
pub fn projection_matrix(&self) -> Mat4
{
Mat4::perspective_rh(self.fov, self.aspect, self.near, self.far)
}
let transform = Transform::from_position(position);
pub fn update_rotation(&mut self, mouse_delta: (f32, f32), sensitivity: f32)
{
self.yaw += mouse_delta.0 * sensitivity;
self.pitch -= mouse_delta.1 * sensitivity;
world.cameras.insert(camera_entity, camera_component);
world.transforms.insert(camera_entity, transform);
self.pitch = self
.pitch
.clamp(-89.0_f32.to_radians(), 89.0_f32.to_radians());
}
pub fn get_forward(&self) -> Vec3
{
Vec3::new(
self.yaw.cos() * self.pitch.cos(),
self.pitch.sin(),
self.yaw.sin() * self.pitch.cos(),
)
.normalize()
}
pub fn get_right(&self) -> Vec3
{
self.get_forward().cross(Vec3::Y).normalize()
}
pub fn get_forward_horizontal(&self) -> Vec3
{
Vec3::new(self.yaw.cos(), 0.0, self.yaw.sin()).normalize()
}
pub fn get_right_horizontal(&self) -> Vec3
{
self.get_forward_horizontal().cross(Vec3::Y).normalize()
}
pub fn update_noclip(&mut self, input: Vec3, speed: f32)
{
let forward = self.get_forward();
let right = self.get_right();
self.position += forward * input.z * speed;
self.position += right * input.x * speed;
self.position += Vec3::Y * input.y * speed;
self.target = self.position + forward;
}
pub fn start_following(&mut self, target_position: Vec3)
{
self.is_following = true;
self.follow_offset = self.position - target_position;
let distance = self.follow_offset.length();
if distance > 0.0
{
self.pitch = (self.follow_offset.y / distance).asin();
self.yaw = self.follow_offset.z.atan2(self.follow_offset.x) + std::f32::consts::PI;
}
}
pub fn stop_following(&mut self)
{
self.is_following = false;
let look_direction = (self.target - self.position).normalize();
self.yaw = look_direction.z.atan2(look_direction.x);
self.pitch = look_direction.y.asin();
}
pub fn update_follow(&mut self, target_position: Vec3, mouse_delta: (f32, f32), sensitivity: f32)
{
if !self.is_following
{
return;
}
if mouse_delta.0.abs() > 0.0 || mouse_delta.1.abs() > 0.0
{
self.yaw += mouse_delta.0 * sensitivity;
self.pitch += mouse_delta.1 * sensitivity;
self.pitch = self
.pitch
.clamp(-89.0_f32.to_radians(), 89.0_f32.to_radians());
}
let distance = self.follow_offset.length();
let orbit_yaw = self.yaw + std::f32::consts::PI;
let offset_x = distance * orbit_yaw.cos() * self.pitch.cos();
let offset_y = distance * self.pitch.sin();
let offset_z = distance * orbit_yaw.sin() * self.pitch.cos();
self.follow_offset = Vec3::new(offset_x, offset_y, offset_z);
self.position = target_position + self.follow_offset;
self.target = target_position;
camera_entity
}
}
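The removed `update_follow` derived the camera's orbit offset from yaw, pitch, and the stored follow distance. The same spherical math, as a standalone sketch (hypothetical helper):

```python
import math

def orbit_offset(yaw, pitch, distance):
    # The camera sits opposite the view direction, hence the pi offset on yaw
    orbit_yaw = yaw + math.pi
    return (
        distance * math.cos(orbit_yaw) * math.cos(pitch),
        distance * math.sin(pitch),
        distance * math.sin(orbit_yaw) * math.cos(pitch),
    )
```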

src/components/camera_follow.rs (deleted)

@@ -1,32 +0,0 @@
use glam::Vec3;
use crate::entity::EntityHandle;
#[derive(Clone, Copy)]
pub struct CameraFollowComponent
{
pub target_entity: EntityHandle,
pub offset: Vec3,
pub is_following: bool,
}
impl CameraFollowComponent
{
pub fn new(target_entity: EntityHandle) -> Self
{
Self {
target_entity,
offset: Vec3::ZERO,
is_following: false,
}
}
pub fn with_offset(target_entity: EntityHandle, offset: Vec3) -> Self
{
Self {
target_entity,
offset,
is_following: true,
}
}
}

src/components/dissolve.rs (new file)

@@ -0,0 +1,27 @@
pub struct DissolveComponent
{
pub amount: f32,
pub target_amount: f32,
pub transition_speed: f32,
}
impl DissolveComponent
{
pub fn new() -> Self
{
Self {
amount: 0.0,
target_amount: 0.0,
transition_speed: 3.0,
}
}
pub fn with_speed(transition_speed: f32) -> Self
{
Self {
amount: 0.0,
target_amount: 0.0,
transition_speed,
}
}
}
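`DissolveComponent` carries a current amount, a target, and a transition speed; the system that advances it is not in this chunk, so here is one plausible linear step (an assumption, not the engine's actual update):

```python
def step_dissolve(amount, target, speed, dt):
    # Move amount toward target at a fixed rate, without overshooting
    delta = target - amount
    max_step = speed * dt
    if abs(delta) <= max_step:
        return target
    return amount + max_step if delta > 0 else amount - max_step
```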

src/components/follow.rs (new file)

@@ -0,0 +1,10 @@
use crate::entity::EntityHandle;
use crate::utility::transform::Transform;
pub struct FollowComponent
{
pub target: EntityHandle,
pub offset: Transform,
pub inherit_rotation: bool,
pub inherit_scale: bool,
}

src/components/jump.rs

@@ -41,12 +41,7 @@ impl Default for JumpConfig
max_air_momentum: 8.0,
air_damping_active: 0.4,
air_damping_passive: 0.9,
jump_curve: CubicBez::new(
(0.0, 0.0),
(0.4, 0.75),
(0.7, 0.9),
(1.0, 1.0),
),
jump_curve: CubicBez::new((0.0, 0.0), (0.4, 0.75), (0.7, 0.9), (1.0, 1.0)),
jump_context: JumpContext::default(),
}
}
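`jump_curve` is a cubic Bézier through the four control points shown. Evaluating one component at parameter t uses the Bernstein form (note the engine's `CubicBez` is 2D, so a faithful height-vs-time lookup would first solve the x polynomial for t; this sketch only shows the evaluation):

```python
def cubic_bez_y(p0, p1, p2, p3, t):
    # Bernstein form of a cubic Bezier, one scalar component
    u = 1.0 - t
    return (u*u*u)*p0 + 3*(u*u*t)*p1 + 3*(u*t*t)*p2 + (t*t*t)*p3

# Jump height fraction at the Bezier midpoint for the curve above
h = cubic_bez_y(0.0, 0.75, 0.9, 1.0, 0.5)
```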

src/components/lights/directional.rs (new file)

@@ -0,0 +1,18 @@
use glam::Vec3;
pub struct DirectionallightComponent
{
pub offset: Vec3,
pub direction: Vec3,
}
impl DirectionallightComponent
{
pub fn new(offset: Vec3, direction: Vec3) -> Self
{
Self {
offset,
direction: direction.normalize(),
}
}
}

src/components/lights/mod.rs (new file)

@@ -0,0 +1,3 @@
pub mod directional;
pub mod point;
pub mod spot;

src/components/lights/point.rs (new file)

@@ -0,0 +1,14 @@
use glam::Vec3;
pub struct PointlightComponent
{
pub offset: Vec3,
}
impl PointlightComponent
{
pub fn new(offset: Vec3) -> Self
{
Self { offset }
}
}

src/components/lights/spot.rs (new file)

@@ -0,0 +1,25 @@
use glam::Vec3;
#[derive(Debug, Copy, Clone)]
pub struct SpotlightComponent
{
pub offset: Vec3,
pub direction: Vec3,
pub range: f32,
pub inner_angle: f32,
pub outer_angle: f32,
}
impl SpotlightComponent
{
pub fn new(offset: Vec3, direction: Vec3, range: f32, inner_angle: f32, outer_angle: f32) -> Self
{
Self {
offset,
direction: direction.normalize(),
range,
inner_angle,
outer_angle,
}
}
}
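A common way to use `inner_angle`/`outer_angle` is a clamped falloff between the two cone cosines; the glTF light above even has inner equal to outer, which gives a hard edge. A sketch (assumed falloff; the shader side is not in this chunk):

```python
import math

def spot_falloff(cos_angle, inner, outer):
    # Smooth attenuation between the inner and outer cone angles;
    # cos_angle is dot(spot_direction, direction_to_fragment)
    cos_inner, cos_outer = math.cos(inner), math.cos(outer)
    if cos_inner == cos_outer:
        return 1.0 if cos_angle >= cos_outer else 0.0  # hard-edged cone
    t = (cos_angle - cos_outer) / (cos_inner - cos_outer)
    return min(max(t, 0.0), 1.0)
```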

src/components/mod.rs

@@ -1,16 +1,22 @@
pub mod camera;
pub mod camera_follow;
pub mod dissolve;
pub mod follow;
pub mod input;
pub mod jump;
pub mod lights;
pub mod mesh;
pub mod movement;
pub mod physics;
pub mod player_tag;
pub mod rotate;
pub mod state_machine;
pub mod tree_tag;
pub use camera::CameraComponent;
pub use camera_follow::CameraFollowComponent;
pub use dissolve::DissolveComponent;
pub use follow::FollowComponent;
pub use input::InputComponent;
pub use mesh::MeshComponent;
pub use movement::MovementComponent;
pub use physics::PhysicsComponent;
pub use rotate::RotateComponent;

src/components/movement.rs

@@ -43,7 +43,7 @@ impl MovementConfig
(1.0, 1.0),
),
walking_damping: 0.8,
max_walking_speed: 6.0,
max_walking_speed: 12.0,
idle_damping: 0.1,
movement_context: MovementContext::new(),
}

src/components/rotate.rs (new file)

@@ -0,0 +1,15 @@
use glam::Vec3;
pub struct RotateComponent
{
pub axis: Vec3,
pub speed: f32,
}
impl RotateComponent
{
pub fn new(axis: Vec3, speed: f32) -> Self
{
Self { axis, speed }
}
}
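`RotateComponent` stores an axis and an angular speed; for the lighthouse the axis is Y, so one `rotate_system` step amounts to spinning the spotlight direction about Y by `speed * dt`. A sketch of that rotation (assumed semantics; the system itself is not in this chunk):

```python
import math

def rotate_about_y(v, angle):
    # Right-handed rotation of a direction vector about the Y axis
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    return (c*x + s*z, y, -s*x + c*z)
```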

src/components/tree_tag.rs (new file)

@@ -0,0 +1 @@
pub struct TreeTag;

src/debug/collider_debug.rs

@@ -126,6 +126,8 @@ pub fn render_collider_debug() -> Vec<DrawCall>
let instance_data = InstanceRaw {
model: model.to_cols_array_2d(),
dissolve_amount: 0.0,
_padding: [0.0; 3],
};
let instance_buffer = render::with_device(|device| {
@@ -153,6 +155,8 @@ pub fn render_collider_debug() -> Vec<DrawCall>
{
let instance_data = InstanceRaw {
model: Mat4::IDENTITY.to_cols_array_2d(),
dissolve_amount: 0.0,
_padding: [0.0; 3],
};
let instance_buffer = render::with_device(|device| {

src/debug/mod.rs

@@ -1,5 +1,3 @@
pub mod collider_debug;
pub mod noclip;
pub use collider_debug::{render_collider_debug, set_debug_heightfield};
pub use noclip::{update_follow_camera, update_noclip_camera};
pub use collider_debug::render_collider_debug;

src/empty.rs (new file)

@@ -0,0 +1,101 @@
use glam::Mat4;
use std::path::Path;
pub struct EmptyNode
{
pub name: String,
pub transform: Mat4,
}
impl EmptyNode
{
pub fn new(name: String, transform: Mat4) -> Self
{
Self { name, transform }
}
}
pub struct Empties
{
nodes: Vec<EmptyNode>,
}
impl Empties
{
fn new(nodes: Vec<EmptyNode>) -> Self
{
Self { nodes }
}
pub fn into_nodes(self) -> Vec<EmptyNode>
{
self.nodes
}
pub fn load_gltf_empties(path: impl AsRef<Path>)
-> Result<Empties, Box<dyn std::error::Error>>
{
let (gltf, _buffers, _images) = gltf::import(path)?;
let mut all_empties = Vec::new();
for scene in gltf.scenes()
{
for node in scene.nodes()
{
Self::process_node(&node, Mat4::IDENTITY, &mut all_empties)?;
}
}
Ok(Empties::new(all_empties))
}
fn process_node(
node: &gltf::Node,
parent_transform: Mat4,
all_empties: &mut Vec<EmptyNode>,
) -> Result<(), Box<dyn std::error::Error>>
{
let local_transform = Mat4::from_cols_array_2d(&node.transform().matrix());
let global_transform = parent_transform * local_transform;
let is_empty = node.mesh().is_none() && node.light().is_none() && node.camera().is_none();
if is_empty
{
let name = node.name().unwrap_or("Unnamed").to_string();
all_empties.push(EmptyNode::new(name, global_transform));
}
for child in node.children()
{
Self::process_node(&child, global_transform, all_empties)?;
}
Ok(())
}
pub fn load_empties(path: impl AsRef<Path>) -> Result<Empties, Box<dyn std::error::Error>>
{
Self::load_gltf_empties(path)
}
pub fn get_empty_by_name(
gltf_path: &str,
name: &str,
) -> anyhow::Result<Option<crate::empty::EmptyNode>>
{
let empties = Self::load_empties(gltf_path)
.map_err(|e| anyhow::anyhow!("Failed to load empty nodes: {}", e))?;
for empty_node in empties.into_nodes()
{
if empty_node.name == name
{
return Ok(Some(empty_node));
}
}
Ok(None)
}
}
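`process_node` composes each node's local matrix with its parent's and records leaf nodes that have no mesh, light, or camera. A translation-only sketch of the same traversal over a dict tree (illustrative, not the gltf crate API):

```python
def collect_empties(node, parent_t=(0.0, 0.0, 0.0), out=None):
    # Compose the parent transform with the local one (translations add),
    # record nodes with no mesh/light attached, then recurse into children.
    if out is None:
        out = []
    t = tuple(p + l for p, l in zip(parent_t, node.get("translation", (0, 0, 0))))
    if not node.get("mesh") and not node.get("light"):
        out.append((node["name"], t))
    for child in node.get("children", []):
        collect_empties(child, t, out)
    return out
```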

src/light.rs (new file)

@@ -0,0 +1,153 @@
use glam::{Mat4, Vec3};
use gltf::json::Extras;
use std::{ops::Deref, path::Path};
use crate::{
components::lights::spot::SpotlightComponent,
world::{Transform, World},
};
pub struct LightData
{
pub component: SpotlightComponent,
pub transform: Mat4,
pub tag: Option<String>,
}
pub struct Lights
{
spotlights: Vec<LightData>,
}
impl Lights
{
fn new(spotlights: Vec<LightData>) -> Self
{
Self { spotlights }
}
pub fn into_spotlights(self) -> Vec<LightData>
{
self.spotlights
}
pub fn load_gltf_lights(path: impl AsRef<Path>) -> Result<Lights, Box<dyn std::error::Error>>
{
let (gltf, _buffers, _images) = gltf::import(path)?;
let mut all_directional = Vec::new();
let mut all_point = Vec::new();
let mut all_spot = Vec::new();
for scene in gltf.scenes()
{
for node in scene.nodes()
{
Self::process_node(
&node,
Mat4::IDENTITY,
&mut all_directional,
&mut all_point,
&mut all_spot,
)?;
}
}
Ok(Lights::new(all_spot))
}
fn process_node(
node: &gltf::Node,
parent_transform: Mat4,
all_directional: &mut Vec<LightData>,
all_point: &mut Vec<LightData>,
all_spot: &mut Vec<LightData>,
) -> Result<(), Box<dyn std::error::Error>>
{
let local_transform = Mat4::from_cols_array_2d(&node.transform().matrix());
let global_transform = parent_transform * local_transform;
if let Some(light) = node.light()
{
let (_scale, rotation, _translation) = global_transform.to_scale_rotation_translation();
let tag = serde_json::to_value(light.extras())
.ok()
.and_then(|extras| {
extras
.get("light_tag")
.and_then(|v| v.as_str())
.map(String::from)
});
match light.kind()
{
gltf::khr_lights_punctual::Kind::Directional => todo!(),
gltf::khr_lights_punctual::Kind::Point => todo!(),
gltf::khr_lights_punctual::Kind::Spot {
inner_cone_angle,
outer_cone_angle,
} =>
{
let range = light.range().unwrap_or(100.0);
let spotlight = SpotlightComponent::new(
Vec3::ZERO,
rotation * -Vec3::Z,
range,
inner_cone_angle,
outer_cone_angle,
);
all_spot.push(LightData {
component: spotlight,
transform: global_transform,
tag,
});
},
}
}
for child in node.children()
{
Self::process_node(
&child,
global_transform,
all_directional,
all_point,
all_spot,
)?;
}
Ok(())
}
pub fn load_lights(path: impl AsRef<Path>) -> Result<Lights, Box<dyn std::error::Error>>
{
crate::render::with_device(|_device| Lights::load_gltf_lights(path))
}
pub fn spawn_lights(world: &mut World, spotlights: Vec<LightData>)
{
use crate::components::RotateComponent;
for light_data in spotlights
{
let entity = world.spawn();
let transform = Transform::from_matrix(light_data.transform);
world.transforms.insert(entity, transform);
world.spotlights.insert(entity, light_data.component);
if let Some(tag) = light_data.tag
{
if tag == "lighthouse"
{
world
.rotates
.insert(entity, RotateComponent::new(Vec3::Y, 1.0));
}
}
}
}
}
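The tag lookup round-trips the light's `extras` through serde_json and reads `light_tag`, which is how Blender custom properties reach the loader. The same extraction in a few lines (illustrative helper):

```python
import json

def light_tag(extras_json):
    # Blender custom properties end up in the glTF "extras" object on the light;
    # return the tag only if it is present and a string
    if not extras_json:
        return None
    extras = json.loads(extras_json)
    v = extras.get("light_tag")
    return v if isinstance(v, str) else None
```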

src/main.rs

@@ -2,20 +2,24 @@ mod camera;
mod components;
mod debug;
mod draw;
mod empty;
mod entity;
mod event;
mod heightmap;
mod light;
mod mesh;
mod physics;
mod picking;
mod player;
mod postprocess;
mod render;
mod shader;
mod snow;
mod snow_light;
mod space;
mod state;
mod systems;
mod terrain;
mod texture_loader;
mod texture;
mod utility;
mod world;
@@ -26,16 +30,21 @@ use render::Renderer;
use utility::input::InputState;
use world::{Transform, World};
use crate::components::{CameraComponent, CameraFollowComponent};
use crate::camera::Camera;
use crate::components::CameraComponent;
use crate::debug::render_collider_debug;
use crate::entity::EntityHandle;
use crate::light::Lights;
use crate::physics::PhysicsManager;
use crate::player::Player;
use crate::space::Space;
use crate::systems::{
camera_follow_system, camera_input_system, camera_noclip_system, physics_sync_system,
player_input_system, render_system, start_camera_following, state_machine_physics_system,
state_machine_system, stop_camera_following,
player_input_system, render_system, rotate_system, spotlight_sync_system,
start_camera_following, state_machine_physics_system, state_machine_system,
stop_camera_following,
};
use crate::snow::{SnowConfig, SnowLayer};
use crate::terrain::{Terrain, TerrainConfig};
use crate::utility::time::Time;
@@ -45,26 +54,43 @@ fn main() -> Result<(), Box<dyn std::error::Error>>
let video_subsystem = sdl_context.video()?;
let window = video_subsystem
.window("snow_trail", 800, 600)
.window("snow_trail", 1200, 900)
.position_centered()
.resizable()
.vulkan()
.build()?;
let renderer = pollster::block_on(Renderer::new(&window, 1))?;
let renderer = pollster::block_on(Renderer::new(&window, 2))?;
render::init(renderer);
let space = Space::load_space("meshes/terrain.gltf")?;
let terrain_config = TerrainConfig::default();
let player_spawn = space.player_spawn;
let camera_spawn = space.camera_spawn_position();
let mut world = World::new();
let player_entity = Player::spawn(&mut world);
let _terrain_entity = Terrain::spawn(&mut world, &terrain_config)?;
let _player_entity = Player::spawn(&mut world, player_spawn);
let _terrain_entity = Terrain::spawn(&mut world, space.mesh_data, &terrain_config)?;
Lights::spawn_lights(&mut world, space.spotlights);
render::set_terrain_data();
let terrain_half_size = terrain_config.size / 2.0;
render::init_snow_light_accumulation(
glam::Vec2::new(-terrain_half_size.x, -terrain_half_size.y),
glam::Vec2::new(terrain_half_size.x, terrain_half_size.y),
);
let snow_config = SnowConfig::default();
let snow_layer = SnowLayer::load(&mut world, &snow_config)?;
println!("Snow layer loaded successfully");
render::set_snow_depth(&snow_layer.depth_texture_view);
let mut noclip_mode = true;
let camera_entity = spawn_camera(&mut world, player_entity);
let camera_entity = Camera::spawn(&mut world, camera_spawn);
if noclip_mode == false
{
start_camera_following(&mut world, camera_entity);
@@ -150,6 +176,11 @@ fn main() -> Result<(), Box<dyn std::error::Error>>
state_machine_system(&mut world, delta);
rotate_system(&mut world, delta);
let spotlights = spotlight_sync_system(&world);
render::update_spotlights(spotlights);
let mut draw_calls = render_system(&world);
draw_calls.extend(render_collider_debug());
@@ -161,12 +192,22 @@ fn main() -> Result<(), Box<dyn std::error::Error>>
get_view_matrix(&world, camera_entity, camera_transform, camera_component);
let projection = camera_component.projection_matrix();
render::render_with_matrices(
let player_pos = world
.player_tags
.all()
.first()
.and_then(|e| world.transforms.get(*e))
.map(|t| t.position)
.unwrap_or(Vec3::ZERO);
render::render(
&view,
&projection,
camera_transform.position,
player_pos,
&draw_calls,
time,
delta,
);
}
}
@@ -183,27 +224,6 @@ fn main() -> Result<(), Box<dyn std::error::Error>>
Ok(())
}
fn spawn_camera(world: &mut World, target_entity: EntityHandle) -> EntityHandle
{
let camera_entity = world.spawn();
let camera_component = CameraComponent::new(render::aspect_ratio());
let camera_follow = CameraFollowComponent::new(target_entity);
let initial_position = Vec3::new(15.0, 15.0, 15.0);
let transform = Transform {
position: initial_position,
rotation: glam::Quat::IDENTITY,
scale: Vec3::ONE,
};
world.cameras.insert(camera_entity, camera_component);
world.camera_follows.insert(camera_entity, camera_follow);
world.transforms.insert(camera_entity, transform);
camera_entity
}
fn get_view_matrix(
world: &World,
camera_entity: EntityHandle,
@@ -211,18 +231,15 @@ fn get_view_matrix(
camera_component: &CameraComponent,
) -> glam::Mat4
{
if let Some(follow) = world.camera_follows.get(camera_entity)
if let Some(follow) = world.follows.get(camera_entity)
{
if follow.is_following
if let Some(target_transform) = world.transforms.get(follow.target)
{
if let Some(target_transform) = world.transforms.get(follow.target_entity)
{
return glam::Mat4::look_at_rh(
camera_transform.position,
target_transform.position,
Vec3::Y,
);
}
return glam::Mat4::look_at_rh(
camera_transform.position,
target_transform.position,
Vec3::Y,
);
}
}
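`get_view_matrix` falls back to `Mat4::look_at_rh` aimed at the follow target. The construction behind that call, sketched by hand (a standard right-handed look-at; the helper is illustrative):

```python
import math

def look_at_rh(eye, target, up=(0.0, 1.0, 0.0)):
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def norm(a):
        l = math.sqrt(dot(a, a))
        return tuple(x / l for x in a)
    f = norm(sub(target, eye))   # forward
    s = norm(cross(f, up))       # right
    u = cross(s, f)              # true up
    # Rows are the camera basis; the last column moves the eye to the origin
    return [
        [s[0], s[1], s[2], -dot(s, eye)],
        [u[0], u[1], u[2], -dot(u, eye)],
        [-f[0], -f[1], -f[2], dot(f, eye)],
        [0.0, 0.0, 0.0, 1.0],
    ]
```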

src/mesh.rs

@@ -48,6 +48,7 @@ pub struct InstanceData
pub position: Vec3,
pub rotation: Quat,
pub scale: Vec3,
pub dissolve_amount: f32,
}
impl InstanceData
@@ -57,6 +58,8 @@ impl InstanceData
let model = Mat4::from_scale_rotation_translation(self.scale, self.rotation, self.position);
InstanceRaw {
model: model.to_cols_array_2d(),
dissolve_amount: self.dissolve_amount,
_padding: [0.0; 3],
}
}
}
@@ -66,6 +69,8 @@ impl InstanceData
pub struct InstanceRaw
{
pub model: [[f32; 4]; 4],
pub dissolve_amount: f32,
pub _padding: [f32; 3],
}
impl InstanceRaw
@@ -96,6 +101,11 @@ impl InstanceRaw
shader_location: 6,
format: wgpu::VertexFormat::Float32x4,
},
wgpu::VertexAttribute {
offset: (std::mem::size_of::<[f32; 4]>() * 4) as wgpu::BufferAddress,
shader_location: 7,
format: wgpu::VertexFormat::Float32,
},
],
}
}
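The new `dissolve_amount` plus `_padding: [f32; 3]` keeps `InstanceRaw` at 80 bytes, a multiple of 16, and puts the `shader_location = 7` attribute right after the 64-byte matrix. Checking the arithmetic (assuming tightly packed f32s, which `#[repr(C)]` gives for these field types):

```python
import struct

MAT4_BYTES = 4 * 4 * 4                 # [[f32; 4]; 4]
DISSOLVE_OFFSET = MAT4_BYTES           # offset used for shader_location 7
STRIDE = struct.calcsize("=16f1f3f")   # model + dissolve_amount + _padding
```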
@@ -506,88 +516,92 @@ impl Mesh
{
let extensions = json_node.get("extensions").unwrap();
let instancing_ext = extensions.get("EXT_mesh_gpu_instancing").unwrap();
let mut mesh_vertices = Vec::new();
let mut mesh_indices = Vec::new();
let mut mesh_vertices = Vec::new();
let mut mesh_indices = Vec::new();
for primitive in mesh_data.primitives()
for primitive in mesh_data.primitives()
{
let reader = primitive
.reader(|buffer| buffers.get(buffer.index()).map(|data| &data[..]));
let positions = reader
.read_positions()
.ok_or_else(|| anyhow::anyhow!("Missing position data"))?
.collect::<Vec<[f32; 3]>>();
let normals = reader
.read_normals()
.ok_or_else(|| anyhow::anyhow!("Missing normal data"))?
.collect::<Vec<[f32; 3]>>();
let uvs = reader
.read_tex_coords(0)
.map(|iter| iter.into_f32().collect::<Vec<[f32; 2]>>())
.unwrap_or_else(|| vec![[0.0, 0.0]; positions.len()]);
let base_index = mesh_vertices.len() as u32;
for ((pos, normal), uv) in
positions.iter().zip(normals.iter()).zip(uvs.iter())
{
mesh_vertices.push(Vertex {
position: *pos,
normal: *normal,
uv: *uv,
});
}
if let Some(indices_reader) = reader.read_indices()
{
mesh_indices
.extend(indices_reader.into_u32().map(|i| i + base_index));
}
let attributes = instancing_ext
.get("attributes")
.and_then(|v| v.as_object())
.ok_or_else(|| anyhow::anyhow!("Missing attributes in EXT_mesh_gpu_instancing"))?;
if let Some(indices_reader) = reader.read_indices()
{
mesh_indices.extend(indices_reader.into_u32().map(|i| i + base_index));
}
}
let translation_accessor_index = attributes
.get("TRANSLATION")
.and_then(|v| v.as_u64())
.ok_or_else(|| anyhow::anyhow!("Missing TRANSLATION in instancing extension"))? as usize;
let attributes = instancing_ext
.get("attributes")
.and_then(|v| v.as_object())
.ok_or_else(|| {
anyhow::anyhow!("Missing attributes in EXT_mesh_gpu_instancing")
})?;
let rotation_accessor_index = attributes
.get("ROTATION")
.and_then(|v| v.as_u64())
.ok_or_else(|| anyhow::anyhow!("Missing ROTATION in instancing extension"))? as usize;
let translation_accessor_index = attributes
.get("TRANSLATION")
.and_then(|v| v.as_u64())
.ok_or_else(|| {
anyhow::anyhow!("Missing TRANSLATION in instancing extension")
})? as usize;
let scale_accessor_index = attributes
.get("SCALE")
.and_then(|v| v.as_u64())
.ok_or_else(|| anyhow::anyhow!("Missing SCALE in instancing extension"))? as usize;
let rotation_accessor_index = attributes
.get("ROTATION")
.and_then(|v| v.as_u64())
.ok_or_else(|| {
anyhow::anyhow!("Missing ROTATION in instancing extension")
})? as usize;
let translations = Self::read_vec3_accessor(
&document,
&buffers,
translation_accessor_index,
)?;
let rotations =
Self::read_quat_accessor(&document, &buffers, rotation_accessor_index)?;
let scales =
Self::read_vec3_accessor(&document, &buffers, scale_accessor_index)?;
let scale_accessor_index = attributes
.get("SCALE")
.and_then(|v| v.as_u64())
.ok_or_else(|| anyhow::anyhow!("Missing SCALE in instancing extension"))?
as usize;
let instances: Vec<InstanceData> = translations
.into_iter()
.zip(rotations.into_iter())
.zip(scales.into_iter())
.map(|((position, rotation), scale)| InstanceData {
position,
rotation,
scale,
})
.collect();
let translations =
Self::read_vec3_accessor(&document, &buffers, translation_accessor_index)?;
let rotations =
Self::read_quat_accessor(&document, &buffers, rotation_accessor_index)?;
let scales =
Self::read_vec3_accessor(&document, &buffers, scale_accessor_index)?;
let instances: Vec<InstanceData> = translations
.into_iter()
.zip(rotations.into_iter())
.zip(scales.into_iter())
.map(|((position, rotation), scale)| InstanceData {
position,
rotation,
scale,
dissolve_amount: 0.0,
})
.collect();
let mesh = Mesh::new(device, &mesh_vertices, &mesh_indices);
result.push((mesh, instances));
@@ -655,7 +669,9 @@ impl Mesh
.nth(accessor_index)
.ok_or_else(|| anyhow::anyhow!("Invalid accessor index"))?;
let buffer_view = accessor.view().ok_or_else(|| anyhow::anyhow!("Missing buffer view"))?;
let buffer_view = accessor
.view()
.ok_or_else(|| anyhow::anyhow!("Missing buffer view"))?;
let buffer = &buffers[buffer_view.buffer().index()];
let start = buffer_view.offset() + accessor.offset();
let stride = buffer_view.stride().unwrap_or(12);
@@ -699,7 +715,9 @@ impl Mesh
.nth(accessor_index)
.ok_or_else(|| anyhow::anyhow!("Invalid accessor index"))?;
let buffer_view = accessor.view().ok_or_else(|| anyhow::anyhow!("Missing buffer view"))?;
let buffer_view = accessor
.view()
.ok_or_else(|| anyhow::anyhow!("Missing buffer view"))?;
let buffer = &buffers[buffer_view.buffer().index()];
let start = buffer_view.offset() + accessor.offset();
let stride = buffer_view.stride().unwrap_or(16);
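The `read_vec3_accessor` / `read_quat_accessor` changes above walk a glTF buffer view with an explicit byte stride, falling back to the tightly packed size (12 bytes for `vec3<f32>`, 16 for a quaternion). A minimal sketch of that strided read, assuming little-endian `f32` data and names invented for illustration:

```rust
// Sketch: read `count` vec3<f32> values from a raw glTF buffer,
// honoring an optional byte stride (defaults to tightly packed 12 bytes).
// Hypothetical helper mirroring the accessor walk above, not the project's code.
fn read_vec3s(buffer: &[u8], start: usize, stride: Option<usize>, count: usize) -> Vec<[f32; 3]> {
    let stride = stride.unwrap_or(12); // 3 * size_of::<f32>()
    (0..count)
        .map(|i| {
            let base = start + i * stride;
            // Decode one little-endian f32 at byte offset `o` from the element base.
            let f = |o: usize| f32::from_le_bytes(buffer[base + o..base + o + 4].try_into().unwrap());
            [f(0), f(4), f(8)]
        })
        .collect()
}

fn main() {
    // Two tightly packed vec3s: (1,2,3) and (4,5,6).
    let mut bytes = Vec::new();
    for v in [1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0] {
        bytes.extend_from_slice(&v.to_le_bytes());
    }
    let vs = read_vec3s(&bytes, 0, None, 2);
    assert_eq!(vs, vec![[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]);
    println!("{:?}", vs);
}
```
A non-`None` stride covers interleaved buffers, where consecutive elements are farther apart than their own size.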


@@ -1,4 +1,4 @@
use std::rc::Rc;
use std::{f32::consts::PI, rc::Rc};
use glam::Vec3;
use kurbo::ParamCurve;
@@ -10,7 +10,8 @@ use rapier3d::{
use crate::{
components::{
jump::JumpComponent, InputComponent, MeshComponent, MovementComponent, PhysicsComponent,
jump::JumpComponent, lights::spot::SpotlightComponent, InputComponent, MeshComponent,
MovementComponent, PhysicsComponent,
},
entity::EntityHandle,
mesh::Mesh,
@@ -24,14 +25,14 @@ pub struct Player;
impl Player
{
pub fn spawn(world: &mut World) -> EntityHandle
pub fn spawn(world: &mut World, position: Vec3) -> EntityHandle
{
let entity = world.spawn();
let initial_position = Vec3::new(0.0, 5.0, 0.0);
let spawn_transform = Transform::from_position(position);
let rigidbody = RigidBodyBuilder::kinematic_position_based()
.translation(initial_position.into())
.translation(spawn_transform.position.into())
.build();
let collider = ColliderBuilder::capsule_y(0.5, 0.5).build();
let _controller = KinematicCharacterController {
@@ -163,7 +164,7 @@ impl Player
world
.transforms
.insert(entity, Transform::from_position(initial_position));
.insert(entity, spawn_transform);
world.movements.insert(entity, MovementComponent::new());
world.jumps.insert(entity, JumpComponent::new());
world.inputs.insert(entity, InputComponent::default());
@@ -178,7 +179,7 @@ impl Player
entity,
MeshComponent {
mesh: Rc::new(mesh),
pipeline: Pipeline::Render,
pipeline: Pipeline::Standard,
instance_buffer: None,
num_instances: 1,
},
@@ -186,6 +187,18 @@ impl Player
world.player_tags.insert(entity);
world.state_machines.insert(entity, state_machine);
let outer_angle = PI / 2.0 * 0.9;
world.spotlights.insert(
entity,
SpotlightComponent::new(
Vec3::new(1.0, 2.0, 1.0),
Vec3::new(0.0, -1.0, 0.0),
100.0,
outer_angle * 0.5,
outer_angle,
),
);
entity
}
@@ -233,7 +246,8 @@ impl State for PlayerFallingState
.flatten()
.unwrap();
let terrain_height = PhysicsManager::get_terrain_height_at(current_pos.x, current_pos.z);
let next_pos = current_pos + velocity;
let terrain_height = PhysicsManager::get_terrain_height_at(next_pos.x, next_pos.z);
let is_grounded = if let Some(height) = terrain_height
{
@@ -553,7 +567,8 @@ impl State for PlayerJumpingState
let current_time = Time::get_time_elapsed();
world.jumps.with_mut(self.entity, |jump| {
jump.jump_config.jump_context.duration = current_time - jump.jump_config.jump_context.execution_time;
jump.jump_config.jump_context.duration =
current_time - jump.jump_config.jump_context.execution_time;
});
let jump_config = world


@@ -147,7 +147,7 @@ pub fn create_blit_pipeline(
) -> wgpu::RenderPipeline
{
let shader_source =
std::fs::read_to_string("shaders/blit.wgsl").expect("Failed to read blit shader");
std::fs::read_to_string("src/shaders/blit.wgsl").expect("Failed to read blit shader");
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Blit Shader"),

File diff suppressed because it is too large


@@ -1,4 +1,5 @@
use crate::mesh::{InstanceRaw, Vertex};
use wesl::{include_wesl, Wesl};
pub fn create_render_pipeline(
device: &wgpu::Device,
@@ -6,11 +7,12 @@ pub fn create_render_pipeline(
bind_group_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline
{
let shared_source =
std::fs::read_to_string("shaders/shared.wgsl").expect("Failed to read shared shader");
let standard_source =
std::fs::read_to_string("shaders/standard.wgsl").expect("Failed to read standard shader");
let shader_source = format!("{}\n{}", shared_source, standard_source);
let compiler = Wesl::new("src/shaders");
let shader_source = compiler
.compile(&"package::standard".parse().unwrap())
.inspect_err(|e| eprintln!("WESL error: {e}"))
.unwrap()
.to_string();
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Shader"),
@@ -67,3 +69,72 @@ pub fn create_render_pipeline(
cache: None,
})
}
pub fn create_environment_pipeline(
device: &wgpu::Device,
config: &wgpu::SurfaceConfiguration,
bind_group_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline
{
let compiler = Wesl::new("src/shaders");
let shader_source = compiler
.compile(&"package::environment".parse().unwrap())
.inspect_err(|e| eprintln!("WESL error: {e}"))
.unwrap()
.to_string();
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Environment Shader"),
source: wgpu::ShaderSource::Wgsl(shader_source.into()),
});
let render_pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Environment Pipeline Layout"),
bind_group_layouts: &[bind_group_layout],
push_constant_ranges: &[],
});
device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("Environment Pipeline"),
layout: Some(&render_pipeline_layout),
vertex: wgpu::VertexState {
module: &shader,
entry_point: Some("vs_main"),
buffers: &[Vertex::desc(), InstanceRaw::desc()],
compilation_options: Default::default(),
},
fragment: Some(wgpu::FragmentState {
module: &shader,
entry_point: Some("fs_main"),
targets: &[Some(wgpu::ColorTargetState {
format: config.format,
blend: Some(wgpu::BlendState::REPLACE),
write_mask: wgpu::ColorWrites::ALL,
})],
compilation_options: Default::default(),
}),
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw,
cull_mode: Some(wgpu::Face::Back),
polygon_mode: wgpu::PolygonMode::Fill,
unclipped_depth: false,
conservative: false,
},
depth_stencil: Some(wgpu::DepthStencilState {
format: wgpu::TextureFormat::Depth32Float,
depth_write_enabled: true,
depth_compare: wgpu::CompareFunction::Less,
stencil: wgpu::StencilState::default(),
bias: wgpu::DepthBiasState::default(),
}),
multisample: wgpu::MultisampleState {
count: 1,
mask: !0,
alpha_to_coverage_enabled: false,
},
multiview: None,
cache: None,
})
}


@@ -0,0 +1,95 @@
import package::shared::{VertexInput, VertexOutput, sample_shadow_map, flowmap_path_lighting_with_shadow, all_spotlights_lighting, uniforms, blue_noise_texture, blue_noise_sampler};
@vertex
fn vs_main(input: VertexInput) -> VertexOutput {
var output: VertexOutput;
let instance_model = mat4x4<f32>(
input.instance_model_0,
input.instance_model_1,
input.instance_model_2,
input.instance_model_3
);
let world_pos = instance_model * vec4<f32>(input.position, 1.0);
output.world_position = world_pos.xyz;
output.clip_position = uniforms.projection * uniforms.view * world_pos;
let normal_matrix = mat3x3<f32>(
instance_model[0].xyz,
instance_model[1].xyz,
instance_model[2].xyz
);
output.world_normal = normalize(normal_matrix * input.normal);
output.light_space_position = uniforms.light_view_projection * world_pos;
let instance_position = vec3<f32>(
instance_model[3][0],
instance_model[3][1],
instance_model[3][2]
);
let to_player = uniforms.player_position - uniforms.camera_position;
let distance_to_player = length(to_player);
var dissolve_amount = 0.0;
if distance_to_player > 0.01 {
let ray_dir = to_player / distance_to_player;
let ray_origin = uniforms.camera_position;
let tree_height = 16.0;
let occlusion_radius = 6.5;
let w = instance_position - ray_origin;
let projection_t = dot(w, ray_dir);
if projection_t > 0.0 && projection_t < distance_to_player {
let closest_on_ray = ray_origin + ray_dir * projection_t;
let diff = closest_on_ray - instance_position;
let height_on_trunk = clamp(diff.y, 0.0, tree_height);
let closest_on_trunk = instance_position + vec3<f32>(0.0, height_on_trunk, 0.0);
let perp_distance = length(closest_on_trunk - closest_on_ray);
if perp_distance < occlusion_radius {
let dissolve_t = pow(perp_distance / occlusion_radius, 0.5);
dissolve_amount = 1.0 - clamp(dissolve_t, 0.0, 1.0);
}
}
}
output.dissolve_amount = dissolve_amount;
return output;
}
@fragment
fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
let debug = 0u;
if debug == 1u {
return vec4<f32>(input.dissolve_amount);
}
if input.dissolve_amount > 0.0 {
let screen_pos = input.clip_position.xy;
let noise_uv = fract(screen_pos / 128.0);
let noise_value = textureSampleLevel(blue_noise_texture, blue_noise_sampler, noise_uv, 0.0).r;
if noise_value < input.dissolve_amount {
discard;
}
}
let shadow = sample_shadow_map(input.light_space_position);
let tile_scale = 4.0;
let spotlight_strokes = all_spotlights_lighting(input.world_position, input.clip_position, input.world_normal, tile_scale, shadow);
let brightness = spotlight_strokes;
return vec4<f32>(brightness, brightness, brightness, 1.0);
}

src/shaders/shadow.wesl (new file, 14 lines)

@@ -0,0 +1,14 @@
import package::shared::{ VertexInput, uniforms };
@vertex
fn vs_main(input: VertexInput) -> @builtin(position) vec4<f32> {
let instance_model = mat4x4<f32>(
input.instance_model_0,
input.instance_model_1,
input.instance_model_2,
input.instance_model_3
);
let world_pos = instance_model * vec4<f32>(input.position, 1.0);
return uniforms.light_view_projection * world_pos;
}


@@ -6,6 +6,7 @@ struct VertexInput {
@location(4) instance_model_1: vec4<f32>,
@location(5) instance_model_2: vec4<f32>,
@location(6) instance_model_3: vec4<f32>,
@location(7) instance_dissolve: f32,
}
struct VertexOutput {
@@ -13,6 +14,24 @@ struct VertexOutput {
@location(0) world_position: vec3<f32>,
@location(1) world_normal: vec3<f32>,
@location(2) light_space_position: vec4<f32>,
@location(3) dissolve_amount: f32,
}
const MAX_SPOTLIGHTS: u32 = 4u;
struct Spotlight {
position: vec3<f32>,
inner_angle: f32,
direction: vec3<f32>,
outer_angle: f32,
range: f32,
_padding: f32,
_padding2: f32,
_padding3: f32,
_padding4: f32,
_padding5: f32,
_padding6: f32,
_padding7: f32,
}
struct Uniforms {
@@ -22,9 +41,13 @@ struct Uniforms {
light_view_projection: mat4x4<f32>,
camera_position: vec3<f32>,
height_scale: f32,
player_position: vec3<f32>,
time: f32,
shadow_bias: f32,
light_direction: vec3<f32>,
spotlight_count: u32,
_padding1: u32,
_padding2: u32,
spotlights: array<Spotlight, 4>,
}
@group(0) @binding(0)
@@ -48,6 +71,12 @@ var flowmap_texture: texture_2d<f32>;
@group(0) @binding(6)
var flowmap_sampler: sampler;
@group(0) @binding(7)
var blue_noise_texture: texture_2d<f32>;
@group(0) @binding(8)
var blue_noise_sampler: sampler;
const PI: f32 = 3.14159265359;
const TERRAIN_BOUNDS: vec2<f32> = vec2<f32>(1000.0, 1000.0);
const LINE_THICKNESS: f32 = 0.1;
@@ -91,9 +120,7 @@ fn sample_shadow_map(light_space_pos: vec4<f32>) -> f32 {
let proj_coords = light_space_pos.xyz / light_space_pos.w;
let ndc_coords = proj_coords * vec3<f32>(0.5, -0.5, 1.0) + vec3<f32>(0.5, 0.5, 0.0);
if ndc_coords.x < 0.0 || ndc_coords.x > 1.0 ||
ndc_coords.y < 0.0 || ndc_coords.y > 1.0 ||
ndc_coords.z < 0.0 || ndc_coords.z > 1.0 {
if ndc_coords.x < 0.0 || ndc_coords.x > 1.0 || ndc_coords.y < 0.0 || ndc_coords.y > 1.0 || ndc_coords.z < 0.0 || ndc_coords.z > 1.0 {
return 1.0;
}
@@ -103,7 +130,20 @@ fn sample_shadow_map(light_space_pos: vec4<f32>) -> f32 {
return shadow;
}
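`sample_shadow_map` maps a light-space position into shadow-map texture coordinates: a perspective divide, then remapping clip-space XY from [-1, 1] to [0, 1] UVs with Y flipped (texture space grows downward), while Z is already in [0, 1]. A small sketch of that remap:

```rust
// Sketch of the light-space → shadow-map coordinate mapping used by
// sample_shadow_map above (before the bounds check and depth compare).
fn shadow_map_coords(light_space: [f32; 4]) -> [f32; 3] {
    let [x, y, z, w] = light_space;
    // Perspective divide into NDC.
    let (px, py, pz) = (x / w, y / w, z / w);
    // XY: [-1, 1] → [0, 1], Y flipped; Z passes through.
    [px * 0.5 + 0.5, py * -0.5 + 0.5, pz]
}

fn main() {
    // NDC origin lands in the middle of the shadow map.
    assert_eq!(shadow_map_coords([0.0, 0.0, 0.5, 1.0]), [0.5, 0.5, 0.5]);
    // Top-left of clip space maps to UV (0, 0).
    assert_eq!(shadow_map_coords([-1.0, 1.0, 0.0, 1.0]), [0.0, 0.0, 0.0]);
    println!("ok");
}
```
Coordinates outside [0, 1] on any axis fall outside the shadow frustum, which is why the shader returns 1.0 (fully lit) in that case.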
fn hatching_lighting(world_pos: vec3<f32>, tile_scale: f32, direction: vec2<f32>, distance: f32) -> f32 {
fn hatching_lighting(world_pos: vec3<f32>, clip_pos: vec4<f32>, tile_scale: f32, direction: vec2<f32>, distance: f32) -> f32 {
let octave_index = round((1.0 - pow(distance, 2.0)) * OCTAVE_STEPS);
let octave_normalized = octave_index / OCTAVE_STEPS;
if octave_index > 3.0 {
return 1.0;
} else if octave_index < 1.0 {
let screen_pos = clip_pos.xy / clip_pos.w;
let blue_noise_uv = screen_pos * 0.5 + 0.5;
let blue_noise = textureSample(blue_noise_texture, blue_noise_sampler, blue_noise_uv * 10.0).r;
return step(blue_noise, 0.05);
}
let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
let tile_size = 1.0 / tile_scale;
let base_tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;
@@ -119,8 +159,6 @@ fn hatching_lighting(world_pos: vec3<f32>, tile_scale: f32, direction: vec2<f32>
let parallel = mix(perpendicular_to_light, direction, t / 2.0);
let perpendicular = compute_perpendicular(parallel);
let octave_index = round((1.0 - pow(distance, 2.0)) * OCTAVE_STEPS);
let spacing = LINE_THICKNESS * 1.5;
var max_offset: i32;
@@ -135,7 +173,7 @@ fn hatching_lighting(world_pos: vec3<f32>, tile_scale: f32, direction: vec2<f32>
max_offset = 9;
}
case 1 {
max_offset = 9;
max_offset = 6;
}
}
for (var i: i32 = -max_offset; i <= max_offset; i++) {
@@ -173,7 +211,7 @@ fn hatching_lighting(world_pos: vec3<f32>, tile_scale: f32, direction: vec2<f32>
local_pos,
);
let lighting = line_stroke_lighting(stroke_data);
let lighting = line_stroke_lighting(stroke_data, clip_pos);
min_lighting = min(min_lighting, lighting);
}
}
@@ -182,48 +220,94 @@ fn hatching_lighting(world_pos: vec3<f32>, tile_scale: f32, direction: vec2<f32>
return min_lighting;
}
fn point_lighting(world_pos: vec3<f32>, point_light: vec3<f32>, tile_scale: f32) -> f32 {
let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
let light_pos_2d = vec2<f32>(point_light.x, point_light.z);
let tile_size = 1.0 / tile_scale;
let tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;
let direction_to_point = light_pos_2d - tile_center;
let distance_to_point = min(length(direction_to_point) / 60.0, 1.0);
let direction_normalized = normalize(direction_to_point);
return hatching_lighting(world_pos, tile_scale, direction_normalized, distance_to_point);
struct SpotlightData {
direction_normalized: vec2<f32>,
combined_distance: f32,
is_lit: bool,
}
fn point_lighting_with_shadow(world_pos: vec3<f32>, normal: vec3<f32>, point_light: vec3<f32>, tile_scale: f32, shadow: f32) -> f32 {
let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
let light_pos_2d = vec2<f32>(point_light.x, point_light.z);
let tile_size = 1.0 / tile_scale;
let tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;
fn calculate_spotlight_data(world_pos: vec3<f32>, normal: vec3<f32>, spotlight: Spotlight, tile_scale: f32, shadow: f32) -> SpotlightData {
var data: SpotlightData;
data.is_lit = false;
data.direction_normalized = vec2<f32>(0.0, 0.0);
data.combined_distance = 2.0;
let direction_to_point_3d = normalize(point_light - world_pos);
let diffuse = max(0.0, dot(normalize(normal), direction_to_point_3d));
let to_fragment = normalize(world_pos - spotlight.position);
let angle_to_fragment = acos(dot(to_fragment, spotlight.direction));
let direction_to_point = light_pos_2d - tile_center;
let distance_to_point = min(length(direction_to_point) / 60.0, 1.0);
let direction_normalized = normalize(direction_to_point);
if angle_to_fragment > spotlight.outer_angle {
return data;
}
let lighting_intensity = shadow * diffuse;
let cube_size = 1.0 / tile_scale;
let cube_center = vec3<f32>(
floor(world_pos.x / cube_size) * cube_size + cube_size * 0.5,
floor(world_pos.y / cube_size) * cube_size + cube_size * 0.5,
floor(world_pos.z / cube_size) * cube_size + cube_size * 0.5
);
let tile_center = vec2<f32>(cube_center.x, cube_center.z);
let angular_falloff = smoothstep(spotlight.inner_angle, spotlight.outer_angle, angle_to_fragment);
let direction_to_point_3d = normalize(spotlight.position - cube_center);
let diffuse_raw = max(0.0, dot(normalize(normal), direction_to_point_3d));
let diffuse_res = 4.0;
let diffuse = floor(diffuse_raw * diffuse_res) / diffuse_res;
let t = (cube_center.y - spotlight.position.y) / spotlight.direction.y;
let hit_point = spotlight.position + spotlight.direction * t;
let hit_point_2d = vec2<f32>(hit_point.x, hit_point.z);
let direction_to_hit = hit_point_2d - tile_center;
let distance_to_hit = min(length(direction_to_hit) / spotlight.range, 1.0);
data.direction_normalized = normalize(direction_to_hit);
let lighting_intensity = shadow * pow(diffuse, 0.5) * (1.0 - angular_falloff);
let darkness = 1.0 - lighting_intensity;
let combined_distance = min(distance_to_point + darkness * 0.5, 1.0);
data.combined_distance = distance_to_hit + darkness;
return hatching_lighting(world_pos, tile_scale, direction_normalized, combined_distance);
data.is_lit = data.combined_distance <= 1.0;
return data;
}
fn line_stroke_lighting(data: StrokeData) -> f32 {
let octave_normalized = data.octave_index / OCTAVE_STEPS;
fn is_in_spotlight_light_area(world_pos: vec3<f32>, normal: vec3<f32>, spotlight: Spotlight, tile_scale: f32, shadow: f32) -> bool {
return calculate_spotlight_data(world_pos, normal, spotlight, tile_scale, shadow).is_lit;
}
if data.octave_index > 3.0 {
return 1.0;
} else if data.octave_index < 1.0 {
fn is_in_any_spotlight_light_area(world_pos: vec3<f32>, normal: vec3<f32>, tile_scale: f32, shadow: f32) -> bool {
for (var i = 0u; i < uniforms.spotlight_count; i++) {
if is_in_spotlight_light_area(world_pos, normal, uniforms.spotlights[i], tile_scale, shadow) {
return true;
}
}
return false;
}
fn spotlight_lighting(world_pos: vec3<f32>, clip_pos: vec4<f32>, normal: vec3<f32>, spotlight: Spotlight, tile_scale: f32, shadow: f32) -> f32 {
let data = calculate_spotlight_data(world_pos, normal, spotlight, tile_scale, shadow);
if !data.is_lit {
return 0.0;
}
return hatching_lighting(world_pos, clip_pos, tile_scale, data.direction_normalized, data.combined_distance);
}
fn all_spotlights_lighting(world_pos: vec3<f32>, clip_pos: vec4<f32>, normal: vec3<f32>, tile_scale: f32, shadow: f32) -> f32 {
var max_lighting = 0.0;
for (var i = 0u; i < uniforms.spotlight_count; i++) {
let spotlight = uniforms.spotlights[i];
let lighting = spotlight_lighting(world_pos, clip_pos, normal, spotlight, tile_scale, shadow);
max_lighting = max(max_lighting, lighting);
}
return max_lighting;
}
fn line_stroke_lighting(data: StrokeData, clip_pos: vec4<f32>) -> f32 {
let octave_normalized = data.octave_index / OCTAVE_STEPS;
let noise = hash2(data.tile_center + data.offset) * 2.0 - 1.0;
var noise_at_octave = noise;
@@ -249,7 +333,7 @@ fn line_stroke_lighting(data: StrokeData) -> f32 {
let parallel_coord = dot(data.local_pos, line);
let perpendicular_coord = dot(data.local_pos, perpendicular_to_line);
let line_half_width = LINE_THICKNESS * (1.0 - octave_normalized * 0.5);
let line_half_width = LINE_THICKNESS * (1.0 - octave_normalized * 0.5) * data.tile_size;
let straight_section_half_length = max(0.0, data.tile_size * 0.4 - line_half_width);
let parallel_jitter = (rand(data.tile_center + data.offset * 123.456) * 2.0 - 1.0) * data.tile_size * jitter;
@@ -261,7 +345,7 @@ fn line_stroke_lighting(data: StrokeData) -> f32 {
return step(line_half_width, effective_distance);
}
fn flowmap_path_lighting(world_pos: vec3<f32>, tile_scale: f32) -> f32 {
fn flowmap_path_lighting(world_pos: vec3<f32>, clip_pos: vec4<f32>, tile_scale: f32) -> f32 {
let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
let tile_size = 1.0 / tile_scale;
let tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;
@@ -273,10 +357,10 @@ fn flowmap_path_lighting(world_pos: vec3<f32>, tile_scale: f32) -> f32 {
let direction_to_path = normalize(vec2<f32>(x, y));
let distance_to_path = flowmap_sample.b;
return hatching_lighting(world_pos, tile_scale, direction_to_path, distance_to_path);
return hatching_lighting(world_pos, clip_pos, tile_scale, direction_to_path, distance_to_path);
}
fn flowmap_path_lighting_with_shadow(world_pos: vec3<f32>, normal: vec3<f32>, tile_scale: f32, shadow: f32) -> f32 {
fn flowmap_path_lighting_with_shadow(world_pos: vec3<f32>, clip_pos: vec4<f32>, normal: vec3<f32>, tile_scale: f32, shadow: f32) -> f32 {
let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
let tile_size = 1.0 / tile_scale;
let tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;
@@ -295,5 +379,5 @@ fn flowmap_path_lighting_with_shadow(world_pos: vec3<f32>, normal: vec3<f32>, ti
let darkness = 1.0 - lighting_intensity;
let combined_distance = min(distance_to_path + darkness * 0.5, 1.0);
return hatching_lighting(world_pos, tile_scale, direction_to_path, combined_distance);
return hatching_lighting(world_pos, clip_pos, tile_scale, direction_to_path, combined_distance);
}

src/shaders/snow.wesl (new file, 72 lines)
View File

@@ -0,0 +1,72 @@
import package::shared::{
VertexInput,
VertexOutput,
uniforms,
sample_shadow_map,
all_spotlights_lighting,
is_in_any_spotlight_light_area,
blue_noise_texture,
blue_noise_sampler,
TERRAIN_BOUNDS,
Spotlight
};
@group(1) @binding(0)
var persistent_light_texture: texture_2d<f32>;
@group(1) @binding(1)
var persistent_light_sampler: sampler;
@vertex
fn vs_main(input: VertexInput) -> VertexOutput {
var output: VertexOutput;
let instance_model = mat4x4<f32>(
input.instance_model_0,
input.instance_model_1,
input.instance_model_2,
input.instance_model_3
);
let world_pos = instance_model * vec4<f32>(input.position, 1.0);
output.world_position = world_pos.xyz;
output.clip_position = uniforms.projection * uniforms.view * world_pos;
let normal_matrix = mat3x3<f32>(
instance_model[0].xyz,
instance_model[1].xyz,
instance_model[2].xyz
);
output.world_normal = normalize(normal_matrix * input.normal);
output.light_space_position = uniforms.light_view_projection * world_pos;
return output;
}
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
let shadow = sample_shadow_map(in.light_space_position);
let tile_scale = 2.0;
let in_spotlight_light_area = is_in_any_spotlight_light_area(in.world_position, in.world_normal, tile_scale, shadow);
let spotlight_strokes = all_spotlights_lighting(in.world_position, in.clip_position, in.world_normal, tile_scale, shadow);
var brightness = spotlight_strokes;
if !in_spotlight_light_area {
let terrain_uv = (vec2<f32>(in.world_position.x, in.world_position.z) + TERRAIN_BOUNDS * 0.5) / TERRAIN_BOUNDS;
let persistent_light = textureSample(persistent_light_texture, persistent_light_sampler, terrain_uv).r;
if persistent_light > 0.05 {
let screen_pos = in.clip_position.xy / in.clip_position.w;
let blue_noise_uv = screen_pos * 0.5 + 0.5;
let blue_noise = textureSample(blue_noise_texture, blue_noise_sampler, blue_noise_uv * 10.0).r;
let blue_step = step(blue_noise, persistent_light / 30.0);
brightness = max(brightness, blue_step);
}
}
return vec4<f32>(brightness, brightness, brightness, 1.0);
}


@@ -0,0 +1,44 @@
@group(0) @binding(0)
var snow_depth: texture_storage_2d<r32float, read_write>;
@group(0) @binding(1)
var<uniform> params: DeformParams;
struct DeformParams {
position_x: f32,
position_z: f32,
radius: f32,
depth: f32,
}
@compute @workgroup_size(16, 16, 1)
fn deform(@builtin(global_invocation_id) global_id: vec3<u32>) {
let texture_size = textureDimensions(snow_depth);
if (global_id.x >= texture_size.x || global_id.y >= texture_size.y) {
return;
}
let coords = vec2<i32>(i32(global_id.x), i32(global_id.y));
let terrain_size = vec2<f32>(1000.0, 1000.0);
let half_size = terrain_size / 2.0;
let uv = vec2<f32>(f32(global_id.x) / f32(texture_size.x), f32(global_id.y) / f32(texture_size.y));
let world_pos = uv * terrain_size - half_size;
let deform_center = vec2<f32>(params.position_x, params.position_z);
let distance = length(world_pos - deform_center);
if (distance < params.radius) {
let current_depth = textureLoad(snow_depth, coords).r;
let falloff = 1.0 - (distance / params.radius);
let falloff_smooth = falloff * falloff;
let deform_amount = params.depth * falloff_smooth;
let new_depth = max(0.0, current_depth - deform_amount);
textureStore(snow_depth, coords, vec4<f32>(new_depth, 0.0, 0.0, 0.0));
}
}
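The compute shader above carves into the snow depth texture with a quadratic radial falloff: full deformation at the center, zero at the radius edge, and the stored depth never goes negative. The per-texel update can be sketched as:

```rust
// Sketch of the per-texel depth update from the `deform` compute shader above:
// quadratic falloff inside `radius`, resulting depth clamped at zero.
fn deformed_depth(current: f32, dist: f32, radius: f32, depth: f32) -> f32 {
    if dist >= radius {
        return current; // outside the deformation footprint: unchanged
    }
    let falloff = 1.0 - dist / radius; // 1 at center, 0 at the rim
    (current - depth * falloff * falloff).max(0.0)
}

fn main() {
    assert_eq!(deformed_depth(1.0, 0.0, 2.0, 0.5), 0.5); // full strength at center
    assert_eq!(deformed_depth(1.0, 2.0, 2.0, 0.5), 1.0); // untouched at the rim
    assert_eq!(deformed_depth(0.1, 0.0, 2.0, 0.5), 0.0); // clamped, never negative
    println!("ok");
}
```
Squaring the falloff softens the rim of each footprint, so repeated stamps blend into a smooth trench rather than leaving hard circular edges.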


@@ -0,0 +1,117 @@
import package::shared::{Spotlight, MAX_SPOTLIGHTS, calculate_spotlight_data};
struct AccumulationUniforms {
terrain_min_xz: vec2<f32>,
terrain_max_xz: vec2<f32>,
decay_rate: f32,
delta_time: f32,
spotlight_count: u32,
_padding: u32,
light_view_projection: mat4x4<f32>,
shadow_bias: f32,
terrain_height_scale: f32,
_padding3: f32,
_padding4: f32,
spotlights: array<Spotlight, 4>,
}
@group(0) @binding(0)
var previous_light: texture_2d<f32>;
@group(0) @binding(1)
var light_sampler: sampler;
@group(0) @binding(2)
var<uniform> uniforms: AccumulationUniforms;
@group(0) @binding(3)
var heightmap: texture_2d<f32>;
@group(0) @binding(4)
var heightmap_sampler: sampler;
@group(0) @binding(5)
var shadow_map: texture_depth_2d;
@group(0) @binding(6)
var shadow_sampler: sampler_comparison;
@group(0) @binding(7)
var snow_depth: texture_2d<f32>;
@group(0) @binding(8)
var snow_depth_sampler: sampler;
struct VertexOutput {
@builtin(position) position: vec4<f32>,
@location(0) uv: vec2<f32>,
}
@vertex
fn vs_main(@builtin(vertex_index) vertex_index: u32) -> VertexOutput {
var out: VertexOutput;
let x = f32((vertex_index << 1u) & 2u);
let y = f32(vertex_index & 2u);
out.position = vec4<f32>(x * 2.0 - 1.0, y * 2.0 - 1.0, 0.0, 1.0);
out.uv = vec2<f32>(x, 1.0 - y);
return out;
}
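The `vs_main` above uses the standard bufferless fullscreen-triangle trick: clip-space positions and UVs are derived purely from the vertex index via bit operations, producing one oversized triangle that covers the whole viewport. The index math works out like this:

```rust
// Sketch of the fullscreen-triangle vertex generation from vs_main above:
// returns (clip-space xy, uv) for vertex index 0..3, no vertex buffer needed.
fn fullscreen_triangle_vertex(i: u32) -> ([f32; 2], [f32; 2]) {
    let x = ((i << 1) & 2) as f32; // 0, 2, 0 for i = 0, 1, 2
    let y = (i & 2) as f32;        // 0, 0, 2 for i = 0, 1, 2
    let pos = [x * 2.0 - 1.0, y * 2.0 - 1.0]; // clip-space xy
    let uv = [x, 1.0 - y];                    // flipped-V texture coords
    (pos, uv)
}

fn main() {
    // Vertices (-1,-1), (3,-1), (-1,3) fully cover the [-1,1] NDC square.
    assert_eq!(fullscreen_triangle_vertex(0).0, [-1.0, -1.0]);
    assert_eq!(fullscreen_triangle_vertex(1).0, [3.0, -1.0]);
    assert_eq!(fullscreen_triangle_vertex(2).0, [-1.0, 3.0]);
    println!("ok");
}
```
A single triangle avoids the diagonal seam of a two-triangle quad; the parts hanging outside NDC are simply clipped away.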
fn sample_shadow_map(light_space_pos: vec4<f32>) -> f32 {
let proj_coords = light_space_pos.xyz / light_space_pos.w;
let ndc_coords = proj_coords * vec3<f32>(0.5, -0.5, 1.0) + vec3<f32>(0.5, 0.5, 0.0);
if ndc_coords.x < 0.0 || ndc_coords.x > 1.0 ||
ndc_coords.y < 0.0 || ndc_coords.y > 1.0 ||
ndc_coords.z < 0.0 || ndc_coords.z > 1.0 {
return 1.0;
}
let depth = ndc_coords.z - uniforms.shadow_bias;
let shadow = textureSampleCompare(shadow_map, shadow_sampler, ndc_coords.xy, depth);
return shadow;
}
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
let prev_light = textureSample(previous_light, light_sampler, in.uv).r;
let world_xz = mix(uniforms.terrain_min_xz, uniforms.terrain_max_xz, in.uv);
let terrain_height = textureSampleLevel(heightmap, heightmap_sampler, in.uv, 0.0).r * uniforms.terrain_height_scale;
let depth = textureSampleLevel(snow_depth, snow_depth_sampler, in.uv, 0.0).r;
let snow_surface_height = terrain_height + depth;
let snow_surface_pos = vec3<f32>(world_xz.x, snow_surface_height, world_xz.y);
let light_space_position = uniforms.light_view_projection * vec4<f32>(snow_surface_pos, 1.0);
let shadow = sample_shadow_map(light_space_position);
var current_light = 0.0;
if shadow > 0.0 {
let tile_scale = 2.0;
let surface_normal = vec3<f32>(0.0, 1.0, 0.0);
for (var i = 0u; i < uniforms.spotlight_count; i++) {
let spotlight = uniforms.spotlights[i];
let data = calculate_spotlight_data(snow_surface_pos, surface_normal, spotlight, tile_scale, shadow);
let light = f32(data.is_lit);
current_light = max(current_light, light);
}
}
var accumulated: f32;
if current_light > 0.01 {
accumulated = current_light;
} else {
let decay_factor = exp(-uniforms.decay_rate * uniforms.delta_time * 60.0);
accumulated = prev_light * decay_factor;
if accumulated < 0.01 {
accumulated = 0.0;
}
}
return vec4<f32>(accumulated, 0.0, 0.0, 1.0);
}

src/shaders/standard.wesl (new file, 66 lines)

@@ -0,0 +1,66 @@
import package::shared::{VertexInput, VertexOutput, sample_shadow_map, flowmap_path_lighting_with_shadow, all_spotlights_lighting, uniforms, blue_noise_texture, blue_noise_sampler};
@vertex
fn vs_main(input: VertexInput) -> VertexOutput {
var output: VertexOutput;
let instance_model = mat4x4<f32>(
input.instance_model_0,
input.instance_model_1,
input.instance_model_2,
input.instance_model_3
);
let world_pos = instance_model * vec4<f32>(input.position, 1.0);
output.world_position = world_pos.xyz;
output.clip_position = uniforms.projection * uniforms.view * world_pos;
let normal_matrix = mat3x3<f32>(
instance_model[0].xyz,
instance_model[1].xyz,
instance_model[2].xyz
);
output.world_normal = normalize(normal_matrix * input.normal);
output.light_space_position = uniforms.light_view_projection * world_pos;
output.dissolve_amount = 0.0;
return output;
}
@fragment
fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
let shadow = sample_shadow_map(input.light_space_position);
let debug = 0u;
if debug == 3u {
return vec4<f32>(shadow, shadow, shadow, 1.0);
}
if debug == 2u {
let proj_coords = input.light_space_position.xyz / input.light_space_position.w;
return vec4<f32>(proj_coords.x, proj_coords.y, proj_coords.z, 1.0);
}
if debug == 1u {
let proj_coords = input.light_space_position.xyz / input.light_space_position.w;
let ndc_coords = proj_coords * vec3<f32>(0.5, -0.5, 1.0) + vec3<f32>(0.5, 0.5, 0.0);
let in_bounds = ndc_coords.x >= 0.0 && ndc_coords.x <= 1.0 &&
ndc_coords.y >= 0.0 && ndc_coords.y <= 1.0 &&
ndc_coords.z >= 0.0 && ndc_coords.z <= 1.0;
if in_bounds {
return vec4<f32>(ndc_coords.x, ndc_coords.y, ndc_coords.z, 1.0);
} else {
return vec4<f32>(0.0, 0.0, 0.0, 1.0);
}
}
let tile_scale = 4.0;
let flowmap_strokes = flowmap_path_lighting_with_shadow(input.world_position, input.clip_position, input.world_normal, tile_scale, shadow);
let spotlight_strokes = all_spotlights_lighting(input.world_position, input.clip_position, input.world_normal, tile_scale, shadow);
// let brightness = max(flowmap_strokes, spotlight_strokes);
let brightness = 0.0;
return vec4<f32>(brightness, brightness, brightness, 1.0);
}
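The debug branches above all rely on the same mapping from light-space clip coordinates to shadow-map coordinates: after the perspective divide, NDC x/y in [-1, 1] map to texture UVs in [0, 1] with Y flipped (wgpu's texture origin is top-left), and z passes through. A self-contained Rust sketch of that transform, with illustrative names:

```rust
// Mirrors `proj_coords * vec3(0.5, -0.5, 1.0) + vec3(0.5, 0.5, 0.0)`
// from the shader's debug branches.
fn light_clip_to_shadow_coords(clip: [f32; 4]) -> [f32; 3] {
    // Perspective divide: clip space -> normalized device coordinates.
    let inv_w = 1.0 / clip[3];
    let proj = [clip[0] * inv_w, clip[1] * inv_w, clip[2] * inv_w];
    // NDC [-1, 1] -> UV [0, 1], flipping Y; depth is left unchanged.
    [proj[0] * 0.5 + 0.5, proj[1] * -0.5 + 0.5, proj[2]]
}

fn main() {
    // NDC top-left corner (-1, 1) lands at UV (0, 0).
    let c = light_clip_to_shadow_coords([-1.0, 1.0, 0.25, 1.0]);
    assert!((c[0]).abs() < 1e-6 && (c[1]).abs() < 1e-6 && (c[2] - 0.25).abs() < 1e-6);
}
```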


@@ -1,3 +1,5 @@
import package::shared::{VertexInput, VertexOutput, sample_shadow_map, flowmap_path_lighting_with_shadow, all_spotlights_lighting, uniforms, TERRAIN_BOUNDS, flowmap_texture, flowmap_sampler};
@vertex
fn vs_main(input: VertexInput) -> VertexOutput {
var output: VertexOutput;
@@ -29,6 +31,24 @@ fn vs_main(input: VertexInput) -> VertexOutput {
fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
let debug = 0u;
if debug == 4u {
let proj_coords = input.light_space_position.xyz / input.light_space_position.w;
let ndc_coords = proj_coords * vec3<f32>(0.5, -0.5, 1.0) + vec3<f32>(0.5, 0.5, 0.0);
let in_bounds = ndc_coords.x >= 0.0 && ndc_coords.x <= 1.0 &&
ndc_coords.y >= 0.0 && ndc_coords.y <= 1.0 &&
ndc_coords.z >= 0.0 && ndc_coords.z <= 1.0;
if in_bounds {
return vec4<f32>(ndc_coords.x, ndc_coords.y, ndc_coords.z, 1.0);
} else {
return vec4<f32>(0.0, 0.0, 0.0, 1.0);
}
}
if debug == 3u {
let shadow = sample_shadow_map(input.light_space_position);
return vec4<f32>(shadow, shadow, shadow, 1.0);
}
if debug == 1u {
let flowmap_uv = (vec2<f32>(input.world_position.x, input.world_position.z) + TERRAIN_BOUNDS * 0.5) / TERRAIN_BOUNDS;
let flowmap_sample = textureSampleLevel(flowmap_texture, flowmap_sampler, flowmap_uv, 0.0).rgb;
@@ -61,10 +81,9 @@ fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
let shadow = sample_shadow_map(input.light_space_position);
let tile_scale = 1.0;
let flowmap_strokes = flowmap_path_lighting_with_shadow(input.world_position, input.world_normal, tile_scale, shadow);
let point_strokes = point_lighting_with_shadow(input.world_position, input.world_normal, vec3<f32>(0.0, 100.0, 0.0), tile_scale, shadow);
let brightness = max(flowmap_strokes, point_strokes);
let tile_scale = 2.0;
let spotlight_strokes = all_spotlights_lighting(input.world_position, input.clip_position, input.world_normal, tile_scale, shadow);
let brightness = spotlight_strokes;
return vec4<f32>(brightness, brightness, brightness, 1.0);
}

451
src/snow.rs Normal file

@@ -0,0 +1,451 @@
use std::rc::Rc;
use exr::prelude::{ReadChannels, ReadLayers};
use glam::{Vec2, Vec3};
use wgpu::util::DeviceExt;
use crate::{
components::MeshComponent,
entity::EntityHandle,
mesh::{Mesh, Vertex},
render,
world::{Transform, World},
};
pub struct SnowConfig
{
pub depth_map_path: String,
pub heightmap_path: String,
pub terrain_size: Vec2,
pub resolution: (u32, u32),
}
impl SnowConfig
{
pub fn new(depth_map_path: &str, heightmap_path: &str, terrain_size: Vec2, resolution: (u32, u32)) -> Self
{
Self {
depth_map_path: depth_map_path.to_string(),
heightmap_path: heightmap_path.to_string(),
terrain_size,
resolution,
}
}
pub fn default() -> Self
{
Self {
depth_map_path: "textures/snow_depth.exr".to_string(),
heightmap_path: "textures/terrain_heightmap.exr".to_string(),
terrain_size: Vec2::new(1000.0, 1000.0),
resolution: (1000, 1000),
}
}
}
pub struct SnowLayer
{
pub entity: EntityHandle,
pub depth_texture: wgpu::Texture,
pub depth_texture_view: wgpu::TextureView,
pub depth_bind_group: wgpu::BindGroup,
pub width: u32,
pub height: u32,
pub deform_bind_group: wgpu::BindGroup,
pub deform_pipeline: wgpu::ComputePipeline,
pub deform_params_buffer: wgpu::Buffer,
}
impl SnowLayer
{
pub fn load(world: &mut World, config: &SnowConfig) -> anyhow::Result<Self>
{
println!("\n=== Loading Snow Layer ===");
println!("Depth map path: {}", config.depth_map_path);
println!("Heightmap path: {}", config.heightmap_path);
println!("Terrain size: {:?}", config.terrain_size);
let (depth_data, width, height) = Self::load_depth_map(&config.depth_map_path)?;
let (heightmap_data, hm_width, hm_height) = Self::load_depth_map(&config.heightmap_path)?;
if width != hm_width || height != hm_height {
anyhow::bail!("Snow depth map ({}×{}) and heightmap ({}×{}) dimensions don't match!",
width, height, hm_width, hm_height);
}
println!("Using EXR dimensions: {}×{}", width, height);
let (depth_texture, depth_texture_view, depth_bind_group) =
Self::create_depth_texture(&depth_data, width, height);
let mesh = Self::generate_snow_mesh(&depth_data, &heightmap_data, width, height, config.terrain_size);
let num_indices = mesh.num_indices;
let entity = world.spawn();
world.transforms.insert(entity, Transform::IDENTITY);
if num_indices > 0 {
world.meshes.insert(
entity,
MeshComponent {
mesh: Rc::new(mesh),
pipeline: render::Pipeline::Snow,
instance_buffer: None,
num_instances: 1,
},
);
println!("Snow mesh created with {} indices", num_indices);
} else {
println!("⚠️ No snow mesh created - all depth values are zero");
}
let (deform_pipeline, deform_bind_group, deform_params_buffer) =
Self::create_deform_pipeline(&depth_texture_view);
Ok(Self {
entity,
depth_texture,
depth_texture_view,
depth_bind_group,
width,
height,
deform_bind_group,
deform_pipeline,
deform_params_buffer,
})
}
fn load_depth_map(path: &str) -> anyhow::Result<(Vec<f32>, u32, u32)>
{
println!("Loading snow depth map from: {}", path);
let image = exr::prelude::read()
.no_deep_data()
.largest_resolution_level()
.all_channels()
.all_layers()
.all_attributes()
.from_file(path)?;
let layer = &image.layer_data[0];
let width = layer.size.width() as u32;
let height = layer.size.height() as u32;
println!(" Layer size: {}×{}", width, height);
println!(" Available channels: {:?}", layer.channel_data.list.iter().map(|c| &c.name).collect::<Vec<_>>());
let channel = layer.channel_data.list.iter()
.find(|c| format!("{:?}", c.name).contains("\"R\""))
.or_else(|| layer.channel_data.list.first())
.ok_or_else(|| anyhow::anyhow!("No channels found in EXR"))?;
println!(" Using channel: {:?}", channel.name);
let depths: Vec<f32> = channel.sample_data.values_as_f32().collect();
let min_value = depths.iter().cloned().fold(f32::INFINITY, f32::min);
let max_value = depths.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
let avg_value = depths.iter().sum::<f32>() / depths.len() as f32;
let non_zero_count = depths.iter().filter(|&&v| v > 0.0001).count();
println!(" Total values: {}", depths.len());
println!(" Min: {:.6}, Max: {:.6}, Avg: {:.6}", min_value, max_value, avg_value);
println!(" Non-zero values: {} ({:.1}%)", non_zero_count, (non_zero_count as f32 / depths.len() as f32) * 100.0);
if max_value < 0.0001 {
println!(" ⚠️ WARNING: All values are effectively zero! Snow depth map may be invalid.");
} else {
println!(" ✓ Snow depth data loaded successfully");
}
Ok((depths, width, height))
}
fn create_depth_texture(
depth_data: &[f32],
width: u32,
height: u32,
) -> (wgpu::Texture, wgpu::TextureView, wgpu::BindGroup)
{
render::with_device(|device| {
let size = wgpu::Extent3d {
width,
height,
depth_or_array_layers: 1,
};
let texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some("Snow Depth Texture"),
size,
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: wgpu::TextureFormat::R32Float,
usage: wgpu::TextureUsages::TEXTURE_BINDING
| wgpu::TextureUsages::COPY_DST
| wgpu::TextureUsages::STORAGE_BINDING,
view_formats: &[],
});
let data_bytes: &[u8] = bytemuck::cast_slice(depth_data);
render::with_queue(|queue| {
queue.write_texture(
wgpu::TexelCopyTextureInfo {
texture: &texture,
mip_level: 0,
origin: wgpu::Origin3d::ZERO,
aspect: wgpu::TextureAspect::All,
},
data_bytes,
wgpu::TexelCopyBufferLayout {
offset: 0,
bytes_per_row: Some(width * 4),
rows_per_image: Some(height),
},
size,
);
});
let texture_view = texture.create_view(&wgpu::TextureViewDescriptor::default());
let bind_group_layout =
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Snow Depth Bind Group Layout"),
entries: &[wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT | wgpu::ShaderStages::COMPUTE,
ty: wgpu::BindingType::Texture {
sample_type: wgpu::TextureSampleType::Float { filterable: false },
view_dimension: wgpu::TextureViewDimension::D2,
multisampled: false,
},
count: None,
}],
});
let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Snow Depth Bind Group"),
layout: &bind_group_layout,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(&texture_view),
}],
});
(texture, texture_view, bind_group)
})
}
fn generate_snow_mesh(
depth_data: &[f32],
heightmap_data: &[f32],
width: u32,
height: u32,
terrain_size: Vec2,
) -> Mesh
{
let mut vertices = Vec::new();
let mut indices = Vec::new();
let cell_size_x = terrain_size.x / (width - 1) as f32;
let cell_size_z = terrain_size.y / (height - 1) as f32;
let half_width = terrain_size.x / 2.0;
let half_height = terrain_size.y / 2.0;
for z in 0..height
{
for x in 0..width
{
let index = (z * width + x) as usize;
let snow_depth = depth_data.get(index).copied().unwrap_or(0.0);
let terrain_height = heightmap_data.get(index).copied().unwrap_or(0.0);
let world_x = x as f32 * cell_size_x - half_width;
let world_z = z as f32 * cell_size_z - half_height;
let world_y = terrain_height + snow_depth;
vertices.push(Vertex {
position: [world_x, world_y, world_z],
normal: [0.0, 1.0, 0.0],
uv: [x as f32 / width as f32, z as f32 / height as f32],
});
}
}
for z in 0..(height - 1)
{
for x in 0..(width - 1)
{
let index = (z * width + x) as usize;
let depth_tl = depth_data.get(index).copied().unwrap_or(0.0);
let depth_tr = depth_data.get(index + 1).copied().unwrap_or(0.0);
let depth_bl = depth_data.get(index + width as usize).copied().unwrap_or(0.0);
let depth_br = depth_data
.get(index + width as usize + 1)
.copied()
.unwrap_or(0.0);
if depth_tl > 0.001
|| depth_tr > 0.001
|| depth_bl > 0.001
|| depth_br > 0.001
{
let vertex_index = (z * width + x) as u32;
indices.push(vertex_index);
indices.push(vertex_index + width);
indices.push(vertex_index + 1);
indices.push(vertex_index + 1);
indices.push(vertex_index + width);
indices.push(vertex_index + width + 1);
}
}
}
let vertex_buffer = render::with_device(|device| {
device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Snow Vertex Buffer"),
contents: bytemuck::cast_slice(&vertices),
usage: wgpu::BufferUsages::VERTEX,
})
});
let index_buffer = render::with_device(|device| {
device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Snow Index Buffer"),
contents: bytemuck::cast_slice(&indices),
usage: wgpu::BufferUsages::INDEX,
})
});
Mesh {
vertex_buffer,
index_buffer,
num_indices: indices.len() as u32,
}
}
fn create_deform_pipeline(
depth_texture_view: &wgpu::TextureView,
) -> (wgpu::ComputePipeline, wgpu::BindGroup, wgpu::Buffer)
{
render::with_device(|device| {
let shader_source = std::fs::read_to_string("src/shaders/snow_deform.wgsl")
.expect("Failed to load snow deform shader");
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Snow Deform Shader"),
source: wgpu::ShaderSource::Wgsl(shader_source.into()),
});
let params_buffer = device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Snow Deform Params"),
size: 32,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let bind_group_layout =
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Snow Deform Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::COMPUTE,
ty: wgpu::BindingType::StorageTexture {
access: wgpu::StorageTextureAccess::ReadWrite,
format: wgpu::TextureFormat::R32Float,
view_dimension: wgpu::TextureViewDimension::D2,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::COMPUTE,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
});
let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Snow Deform Bind Group"),
layout: &bind_group_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(depth_texture_view),
},
wgpu::BindGroupEntry {
binding: 1,
resource: params_buffer.as_entire_binding(),
},
],
});
let pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Snow Deform Pipeline Layout"),
bind_group_layouts: &[&bind_group_layout],
push_constant_ranges: &[],
});
let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
label: Some("Snow Deform Pipeline"),
layout: Some(&pipeline_layout),
module: &shader,
entry_point: Some("deform"),
compilation_options: Default::default(),
cache: None,
});
(pipeline, bind_group, params_buffer)
})
}
pub fn deform_at_position(&self, position: Vec3, radius: f32, depth: f32)
{
render::with_queue(|queue| {
let params_data = [position.x, position.z, radius, depth];
let params_bytes: &[u8] = bytemuck::cast_slice(&params_data);
queue.write_buffer(&self.deform_params_buffer, 0, params_bytes);
});
render::with_device(|device| {
let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
label: Some("Snow Deform Encoder"),
});
{
let mut compute_pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor {
label: Some("Snow Deform Pass"),
timestamp_writes: None,
});
compute_pass.set_pipeline(&self.deform_pipeline);
compute_pass.set_bind_group(0, &self.deform_bind_group, &[]);
let workgroup_size = 16;
let dispatch_x = (self.width + workgroup_size - 1) / workgroup_size;
let dispatch_y = (self.height + workgroup_size - 1) / workgroup_size;
compute_pass.dispatch_workgroups(dispatch_x, dispatch_y, 1);
}
render::with_queue(|queue| {
queue.submit(Some(encoder.finish()));
});
});
}
#[allow(dead_code)]
pub fn regenerate_mesh(&mut self, _world: &mut World, _config: &SnowConfig)
{
todo!("Implement regenerate_mesh with correct wgpu types for texture-to-buffer copy");
}
}
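The index generation in `generate_snow_mesh` triangulates each grid cell that holds snow into two triangles over a row-major vertex grid. A self-contained sketch of just that indexing, extracted for illustration (the helper name is invented here):

```rust
// One grid cell -> two triangles, matching the index order in
// generate_snow_mesh: (i, i+width, i+1) and (i+1, i+width, i+width+1).
fn grid_cell_indices(x: u32, z: u32, width: u32) -> [u32; 6] {
    let i = z * width + x; // top-left vertex of the cell in row-major order
    [i, i + width, i + 1, i + 1, i + width, i + width + 1]
}

fn main() {
    // A 2x2 vertex grid has one cell: vertices 0,1 (row 0) and 2,3 (row 1).
    assert_eq!(grid_cell_indices(0, 0, 2), [0, 2, 1, 1, 2, 3]);
}
```

Cells are only emitted when any of the four corner depths exceeds the 0.001 threshold, so fully bare regions produce no geometry at all.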

550
src/snow_light.rs Normal file

@@ -0,0 +1,550 @@
use bytemuck::{Pod, Zeroable};
use glam::{Vec2, Vec3};
use wgpu::util::DeviceExt;
#[repr(C)]
#[derive(Clone, Copy, Pod, Zeroable)]
struct AccumulationUniforms
{
terrain_min_xz: [f32; 2],
terrain_max_xz: [f32; 2],
decay_rate: f32,
delta_time: f32,
spotlight_count: u32,
_padding: u32,
light_view_projection: [[f32; 4]; 4],
shadow_bias: f32,
terrain_height_scale: f32,
_padding3: f32,
_padding4: f32,
spotlights: [crate::render::SpotlightRaw; crate::render::MAX_SPOTLIGHTS],
}
pub struct SnowLightAccumulation
{
texture_ping: wgpu::Texture,
texture_pong: wgpu::Texture,
view_ping: wgpu::TextureView,
view_pong: wgpu::TextureView,
bind_group_layout: wgpu::BindGroupLayout,
bind_group_ping: Option<wgpu::BindGroup>,
bind_group_pong: Option<wgpu::BindGroup>,
uniform_buffer: wgpu::Buffer,
pipeline: wgpu::RenderPipeline,
quad_vb: wgpu::Buffer,
quad_ib: wgpu::Buffer,
quad_num_indices: u32,
current: bool,
needs_clear: bool,
terrain_min: Vec2,
terrain_max: Vec2,
pub decay_rate: f32,
}
impl SnowLightAccumulation
{
pub fn new(
device: &wgpu::Device,
terrain_min: Vec2,
terrain_max: Vec2,
resolution: u32,
) -> Self
{
let size = wgpu::Extent3d {
width: resolution,
height: resolution,
depth_or_array_layers: 1,
};
let texture_desc = wgpu::TextureDescriptor {
label: Some("Snow Light Accumulation"),
size,
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: wgpu::TextureFormat::R16Float,
usage: wgpu::TextureUsages::RENDER_ATTACHMENT
| wgpu::TextureUsages::TEXTURE_BINDING,
view_formats: &[],
};
let texture_ping = device.create_texture(&texture_desc);
let texture_pong = device.create_texture(&texture_desc);
let view_ping = texture_ping.create_view(&wgpu::TextureViewDescriptor::default());
let view_pong = texture_pong.create_view(&wgpu::TextureViewDescriptor::default());
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("Snow Light Sampler"),
address_mode_u: wgpu::AddressMode::ClampToEdge,
address_mode_v: wgpu::AddressMode::ClampToEdge,
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
..Default::default()
});
let uniform_buffer = device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Snow Light Accumulation Uniforms"),
size: std::mem::size_of::<AccumulationUniforms>() as wgpu::BufferAddress,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Snow Light Accumulation Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
sample_type: wgpu::TextureSampleType::Float { filterable: true },
view_dimension: wgpu::TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 2,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 3,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
sample_type: wgpu::TextureSampleType::Float { filterable: false },
view_dimension: wgpu::TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 4,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::NonFiltering),
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 5,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
sample_type: wgpu::TextureSampleType::Depth,
view_dimension: wgpu::TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 6,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Comparison),
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 7,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
sample_type: wgpu::TextureSampleType::Float { filterable: false },
view_dimension: wgpu::TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 8,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::NonFiltering),
count: None,
},
],
});
let compiler = wesl::Wesl::new("src/shaders");
let shader_source = compiler
.compile(&"package::snow_light_accumulation".parse().unwrap())
.inspect_err(|e| eprintln!("WESL error: {e}"))
.unwrap()
.to_string();
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Snow Light Accumulation Shader"),
source: wgpu::ShaderSource::Wgsl(shader_source.into()),
});
let pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Snow Light Accumulation Pipeline Layout"),
bind_group_layouts: &[&bind_group_layout],
push_constant_ranges: &[],
});
let pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("Snow Light Accumulation Pipeline"),
layout: Some(&pipeline_layout),
vertex: wgpu::VertexState {
module: &shader,
entry_point: Some("vs_main"),
buffers: &[],
compilation_options: Default::default(),
},
fragment: Some(wgpu::FragmentState {
module: &shader,
entry_point: Some("fs_main"),
targets: &[Some(wgpu::ColorTargetState {
format: wgpu::TextureFormat::R16Float,
blend: Some(wgpu::BlendState::REPLACE),
write_mask: wgpu::ColorWrites::ALL,
})],
compilation_options: Default::default(),
}),
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw,
cull_mode: None,
polygon_mode: wgpu::PolygonMode::Fill,
unclipped_depth: false,
conservative: false,
},
depth_stencil: None,
multisample: wgpu::MultisampleState {
count: 1,
mask: !0,
alpha_to_coverage_enabled: false,
},
multiview: None,
cache: None,
});
let vertices: &[f32] = &[
-1.0, -1.0, 0.0, 1.0,
3.0, -1.0, 2.0, 1.0,
-1.0, 3.0, 0.0, -1.0,
];
let quad_vb = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Snow Light Quad VB"),
contents: bytemuck::cast_slice(vertices),
usage: wgpu::BufferUsages::VERTEX,
});
let indices: &[u16] = &[0, 1, 2];
let quad_ib = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Snow Light Quad IB"),
contents: bytemuck::cast_slice(indices),
usage: wgpu::BufferUsages::INDEX,
});
Self {
texture_ping,
texture_pong,
view_ping,
view_pong,
bind_group_layout,
bind_group_ping: None,
bind_group_pong: None,
uniform_buffer,
pipeline,
quad_vb,
quad_ib,
quad_num_indices: 3,
current: false,
needs_clear: true,
terrain_min,
terrain_max,
decay_rate: 0.015,
}
}
pub fn clear(&self, encoder: &mut wgpu::CommandEncoder)
{
encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Clear Snow Light Ping"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &self.view_ping,
resolve_target: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
store: wgpu::StoreOp::Store,
},
depth_slice: None,
})],
depth_stencil_attachment: None,
timestamp_writes: None,
occlusion_query_set: None,
});
encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Clear Snow Light Pong"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &self.view_pong,
resolve_target: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
store: wgpu::StoreOp::Store,
},
depth_slice: None,
})],
depth_stencil_attachment: None,
timestamp_writes: None,
occlusion_query_set: None,
});
}
pub fn set_heightmap(
&mut self,
device: &wgpu::Device,
heightmap_view: &wgpu::TextureView,
heightmap_sampler: &wgpu::Sampler,
shadow_map_view: &wgpu::TextureView,
shadow_sampler: &wgpu::Sampler,
snow_depth_view: &wgpu::TextureView,
snow_depth_sampler: &wgpu::Sampler,
)
{
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("Snow Light Sampler"),
address_mode_u: wgpu::AddressMode::ClampToEdge,
address_mode_v: wgpu::AddressMode::ClampToEdge,
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
..Default::default()
});
self.bind_group_ping = Some(device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Snow Light Accumulation Bind Group Ping"),
layout: &self.bind_group_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(&self.view_pong),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::Sampler(&sampler),
},
wgpu::BindGroupEntry {
binding: 2,
resource: self.uniform_buffer.as_entire_binding(),
},
wgpu::BindGroupEntry {
binding: 3,
resource: wgpu::BindingResource::TextureView(heightmap_view),
},
wgpu::BindGroupEntry {
binding: 4,
resource: wgpu::BindingResource::Sampler(heightmap_sampler),
},
wgpu::BindGroupEntry {
binding: 5,
resource: wgpu::BindingResource::TextureView(shadow_map_view),
},
wgpu::BindGroupEntry {
binding: 6,
resource: wgpu::BindingResource::Sampler(shadow_sampler),
},
wgpu::BindGroupEntry {
binding: 7,
resource: wgpu::BindingResource::TextureView(snow_depth_view),
},
wgpu::BindGroupEntry {
binding: 8,
resource: wgpu::BindingResource::Sampler(snow_depth_sampler),
},
],
}));
self.bind_group_pong = Some(device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Snow Light Accumulation Bind Group Pong"),
layout: &self.bind_group_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(&self.view_ping),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::Sampler(&sampler),
},
wgpu::BindGroupEntry {
binding: 2,
resource: self.uniform_buffer.as_entire_binding(),
},
wgpu::BindGroupEntry {
binding: 3,
resource: wgpu::BindingResource::TextureView(heightmap_view),
},
wgpu::BindGroupEntry {
binding: 4,
resource: wgpu::BindingResource::Sampler(heightmap_sampler),
},
wgpu::BindGroupEntry {
binding: 5,
resource: wgpu::BindingResource::TextureView(shadow_map_view),
},
wgpu::BindGroupEntry {
binding: 6,
resource: wgpu::BindingResource::Sampler(shadow_sampler),
},
wgpu::BindGroupEntry {
binding: 7,
resource: wgpu::BindingResource::TextureView(snow_depth_view),
},
wgpu::BindGroupEntry {
binding: 8,
resource: wgpu::BindingResource::Sampler(snow_depth_sampler),
},
],
}));
}
pub fn render(
&mut self,
encoder: &mut wgpu::CommandEncoder,
queue: &wgpu::Queue,
spotlights: &[crate::render::Spotlight],
delta_time: f32,
light_view_projection: &glam::Mat4,
shadow_bias: f32,
terrain_height_scale: f32,
)
{
if self.needs_clear
{
self.clear(encoder);
self.needs_clear = false;
}
let mut spotlight_array = [crate::render::SpotlightRaw::default(); crate::render::MAX_SPOTLIGHTS];
for (i, spotlight) in spotlights.iter().take(crate::render::MAX_SPOTLIGHTS).enumerate()
{
spotlight_array[i] = spotlight.to_raw();
}
let uniforms = AccumulationUniforms {
terrain_min_xz: self.terrain_min.to_array(),
terrain_max_xz: self.terrain_max.to_array(),
decay_rate: self.decay_rate,
delta_time,
spotlight_count: spotlights.len().min(crate::render::MAX_SPOTLIGHTS) as u32,
_padding: 0,
light_view_projection: light_view_projection.to_cols_array_2d(),
shadow_bias,
terrain_height_scale,
_padding3: 0.0,
_padding4: 0.0,
spotlights: spotlight_array,
};
queue.write_buffer(&self.uniform_buffer, 0, bytemuck::cast_slice(&[uniforms]));
let write_view = if self.current { &self.view_ping } else { &self.view_pong };
let bind_group = if self.current { self.bind_group_ping.as_ref() } else { self.bind_group_pong.as_ref() };
let Some(bind_group) = bind_group else {
return;
};
{
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Snow Light Accumulation Pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: write_view,
resolve_target: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Load,
store: wgpu::StoreOp::Store,
},
depth_slice: None,
})],
depth_stencil_attachment: None,
timestamp_writes: None,
occlusion_query_set: None,
});
render_pass.set_pipeline(&self.pipeline);
render_pass.set_bind_group(0, bind_group, &[]);
render_pass.set_vertex_buffer(0, self.quad_vb.slice(..));
render_pass.set_index_buffer(self.quad_ib.slice(..), wgpu::IndexFormat::Uint16);
render_pass.draw_indexed(0..self.quad_num_indices, 0, 0..1);
}
self.current = !self.current;
}
pub fn read_view(&self) -> &wgpu::TextureView
{
if self.current { &self.view_pong } else { &self.view_ping }
}
pub fn create_bind_group_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout
{
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("Snow Persistent Light Bind Group Layout"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
sample_type: wgpu::TextureSampleType::Float { filterable: true },
view_dimension: wgpu::TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
count: None,
},
],
})
}
pub fn create_read_bind_group(
&self,
device: &wgpu::Device,
layout: &wgpu::BindGroupLayout,
) -> wgpu::BindGroup
{
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("Snow Persistent Light Sampler"),
address_mode_u: wgpu::AddressMode::ClampToEdge,
address_mode_v: wgpu::AddressMode::ClampToEdge,
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
..Default::default()
});
device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("Snow Persistent Light Bind Group"),
layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(self.read_view()),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::Sampler(&sampler),
},
],
})
}
}
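`SnowLightAccumulation` drives its two textures as a ping-pong pair: each frame reads the previous result from one texture, writes the new result to the other, then flips. A minimal sketch of that scheme with indices standing in for the textures (names are illustrative, not engine API):

```rust
// Ping-pong selection matching render()/read_view() above:
// write to one slot, read from the other, flip after each pass.
struct PingPong {
    current: bool,
}

impl PingPong {
    fn write_index(&self) -> usize {
        if self.current { 0 } else { 1 }
    }
    fn read_index(&self) -> usize {
        1 - self.write_index()
    }
    fn flip(&mut self) {
        self.current = !self.current;
    }
}

fn main() {
    let mut pp = PingPong { current: false };
    // What was just written becomes the read source after a flip.
    let written = pp.write_index();
    pp.flip();
    assert_eq!(pp.read_index(), written);
}
```

Keeping two bind groups (ping and pong) prebuilt, as the code above does, avoids recreating a bind group every frame just to swap which texture is the read source.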

75
src/space.rs Normal file

@@ -0,0 +1,75 @@
use anyhow::Result;
use glam::Vec3;
use crate::{
empty::Empties,
light::{LightData, Lights},
mesh::{InstanceData, Mesh},
render,
};
pub const CAMERA_SPAWN_OFFSET: Vec3 = Vec3::new(15.0, 15.0, 15.0);
pub struct Space
{
pub mesh_data: Vec<(Mesh, Vec<InstanceData>)>,
pub spotlights: Vec<LightData>,
pub player_spawn: Vec3,
}
impl Space
{
pub fn load_space(gltf_path: &str) -> Result<Space>
{
let mesh_data = render::with_device(|device| {
Mesh::load_gltf_with_instances(device, gltf_path)
})?;
let lights = Lights::load_lights(gltf_path)
.map_err(|e| anyhow::anyhow!("Failed to load lights: {}", e))?;
let spotlights = lights.into_spotlights();
let player_spawn = Self::get_player_spawn(gltf_path)?;
Ok(Space {
mesh_data,
spotlights,
player_spawn,
})
}
fn get_player_spawn(gltf_path: &str) -> Result<Vec3>
{
let empty = Empties::get_empty_by_name(gltf_path, "PlayerSpawn")?;
if let Some(empty_node) = empty
{
let (_scale, _rotation, translation) = empty_node.transform.to_scale_rotation_translation();
Ok(translation)
}
else
{
println!("Warning: PlayerSpawn empty not found, using default position");
Ok(Vec3::new(0.0, 5.0, 0.0))
}
}
pub fn camera_spawn_position(&self) -> Vec3
{
self.player_spawn + CAMERA_SPAWN_OFFSET
}
pub fn terrain_mesh(&self) -> Option<&Mesh>
{
self.mesh_data.first().map(|(mesh, _)| mesh)
}
pub fn tree_instances(&self) -> Option<(&Mesh, &Vec<InstanceData>)>
{
self.mesh_data
.iter()
.find(|(_, instances)| !instances.is_empty())
.map(|(mesh, instances)| (mesh, instances))
}
}


@@ -1,7 +1,8 @@
use glam::Vec3;
use crate::components::FollowComponent;
use crate::utility::input::InputState;
use crate::world::World;
use crate::world::{Transform, World};
pub fn camera_input_system(world: &mut World, input_state: &InputState)
{
@@ -18,11 +19,7 @@ pub fn camera_input_system(world: &mut World, input_state: &InputState)
if input_state.mouse_delta.0.abs() > 0.0 || input_state.mouse_delta.1.abs() > 0.0
{
let is_following = world
.camera_follows
.get(camera_entity)
.map(|f| f.is_following)
.unwrap_or(false);
let is_following = world.follows.get(camera_entity).is_some();
camera.yaw += input_state.mouse_delta.0 * 0.0008;
@@ -45,26 +42,20 @@ pub fn camera_input_system(world: &mut World, input_state: &InputState)
pub fn camera_follow_system(world: &mut World)
{
let camera_entities: Vec<_> = world.camera_follows.all();
let camera_entities: Vec<_> = world.follows.all();
for camera_entity in camera_entities
{
if let Some(follow) = world.camera_follows.get(camera_entity)
if let Some(camera) = world.cameras.get(camera_entity)
{
if !follow.is_following
if let Some(follow) = world.follows.get(camera_entity)
{
continue;
}
let target_entity = follow.target;
let offset = follow.offset.position;
let target_entity = follow.target_entity;
let offset = follow.offset;
if let Some(target_transform) = world.transforms.get(target_entity)
{
let target_position = target_transform.position;
if let Some(camera) = world.cameras.get_mut(camera_entity)
if let Some(target_transform) = world.transforms.get(target_entity)
{
let target_position = target_transform.position;
let distance = offset.length();
let orbit_yaw = camera.yaw + std::f32::consts::PI;
@@ -75,15 +66,15 @@ pub fn camera_follow_system(world: &mut World)
let new_offset = Vec3::new(offset_x, offset_y, offset_z);
if let Some(camera_transform) = world.transforms.get_mut(camera_entity)
{
camera_transform.position = target_position + new_offset;
}
world
.transforms
.with_mut(camera_entity, |camera_transform| {
camera_transform.position = target_position + new_offset;
});
if let Some(follow_mut) = world.camera_follows.get_mut(camera_entity)
{
follow_mut.offset = new_offset;
}
world.follows.components.get_mut(&camera_entity).map(|f| {
f.offset.position = new_offset;
});
}
}
}
@@ -152,19 +143,16 @@ pub fn camera_noclip_system(world: &mut World, input_state: &InputState, delta:
pub fn start_camera_following(world: &mut World, camera_entity: crate::entity::EntityHandle)
{
if let Some(follow) = world.camera_follows.get_mut(camera_entity)
if let Some(camera_transform) = world.transforms.get(camera_entity)
{
let target_entity = follow.target_entity;
if let Some(target_transform) = world.transforms.get(target_entity)
let player_entities = world.player_tags.all();
if let Some(&player_entity) = player_entities.first()
{
if let Some(camera_transform) = world.transforms.get(camera_entity)
if let Some(target_transform) = world.transforms.get(player_entity)
{
let offset = camera_transform.position - target_transform.position;
follow.offset = offset;
follow.is_following = true;
let distance = offset.length();
if distance > 0.0
{
if let Some(camera) = world.cameras.get_mut(camera_entity)
@@ -173,6 +161,16 @@ pub fn start_camera_following(world: &mut World, camera_entity: crate::entity::E
camera.yaw = offset.z.atan2(offset.x) + std::f32::consts::PI;
}
}
world.follows.insert(
camera_entity,
FollowComponent {
target: player_entity,
offset: Transform::from_position(offset),
inherit_rotation: false,
inherit_scale: false,
},
);
}
}
}
@@ -180,13 +178,13 @@ pub fn start_camera_following(world: &mut World, camera_entity: crate::entity::E
pub fn stop_camera_following(world: &mut World, camera_entity: crate::entity::EntityHandle)
{
if let Some(follow) = world.camera_follows.get_mut(camera_entity)
if let Some(follow) = world.follows.get(camera_entity)
{
follow.is_following = false;
let target_entity = follow.target;
if let Some(camera_transform) = world.transforms.get(camera_entity)
{
if let Some(target_transform) = world.transforms.get(follow.target_entity)
if let Some(target_transform) = world.transforms.get(target_entity)
{
let look_direction =
(target_transform.position - camera_transform.position).normalize();
@@ -198,5 +196,7 @@ pub fn stop_camera_following(world: &mut World, camera_entity: crate::entity::En
}
}
}
world.follows.remove(camera_entity);
}
}

src/systems/follow.rs Normal file

@@ -0,0 +1,48 @@
use crate::world::World;
pub fn follow_system(world: &mut World)
{
let following_entities: Vec<_> = world.follows.all();
for entity in following_entities
{
if let Some(follow) = world.follows.get(entity)
{
let target = follow.target;
if let Some(target_transform) = world.transforms.get(target)
{
let target_pos = target_transform.position;
let target_rot = target_transform.rotation;
let target_scale = target_transform.scale;
let offset = follow.offset;
let inherit_rot = follow.inherit_rotation;
let inherit_scale = follow.inherit_scale;
world.transforms.with_mut(entity, |transform| {
transform.position = target_pos;
if inherit_rot
{
let rotated_offset = target_rot * offset.position;
transform.position += rotated_offset;
transform.rotation = target_rot * offset.rotation;
}
else
{
transform.position += offset.position;
transform.rotation = offset.rotation;
}
if inherit_scale
{
transform.scale = target_scale * offset.scale;
}
else
{
transform.scale = offset.scale;
}
});
}
}
}
}

@@ -1,14 +1,22 @@
pub mod camera;
pub mod follow;
pub mod input;
pub mod physics_sync;
pub mod render;
pub mod rotate;
pub mod spotlight_sync;
pub mod state_machine;
pub mod tree_dissolve;
pub use camera::{
camera_follow_system, camera_input_system, camera_noclip_system, start_camera_following,
stop_camera_following,
};
pub use follow::follow_system;
pub use input::player_input_system;
pub use physics_sync::physics_sync_system;
pub use render::render_system;
pub use rotate::rotate_system;
pub use spotlight_sync::spotlight_sync_system;
pub use state_machine::{state_machine_physics_system, state_machine_system};
pub use tree_dissolve::{tree_dissolve_update_system, tree_occlusion_system};

@@ -2,8 +2,6 @@ use crate::mesh::InstanceRaw;
use crate::render::DrawCall;
use crate::world::World;
use bytemuck::cast_slice;
use glam::Mat4;
use wgpu::util::DeviceExt;
pub fn render_system(world: &World) -> Vec<DrawCall>
{
@@ -24,16 +22,27 @@ pub fn render_system(world: &World) -> Vec<DrawCall>
}
else
{
let dissolve_amount = world.dissolves.get(entity).map(|d| d.amount).unwrap_or(0.0);
let instance_data = InstanceRaw {
model: model_matrix.to_cols_array_2d(),
dissolve_amount,
_padding: [0.0; 3],
};
let buffer = crate::render::with_device(|device| {
device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
let buffer = device.create_buffer(&wgpu::BufferDescriptor {
label: Some("Instance Buffer"),
contents: cast_slice(&[instance_data]),
usage: wgpu::BufferUsages::VERTEX,
})
size: std::mem::size_of::<InstanceRaw>() as u64,
usage: wgpu::BufferUsages::VERTEX | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
crate::render::with_queue(|queue| {
queue.write_buffer(&buffer, 0, cast_slice(&[instance_data]));
});
buffer
});
(Some(buffer), 1)

src/systems/rotate.rs Normal file
@@ -0,0 +1,20 @@
use glam::Quat;
use crate::world::World;
pub fn rotate_system(world: &mut World, delta: f32)
{
let entities = world.rotates.all();
for entity in entities
{
if let Some(rotate) = world.rotates.get(entity)
{
let rotation_delta = Quat::from_axis_angle(rotate.axis, rotate.speed * delta);
world.transforms.with_mut(entity, |transform| {
transform.rotation = rotation_delta * transform.rotation;
});
}
}
}

@@ -0,0 +1,32 @@
use crate::render::Spotlight;
use crate::world::World;
pub fn spotlight_sync_system(world: &World) -> Vec<Spotlight>
{
let mut entities = world.spotlights.all();
entities.sort();
let mut spotlights = Vec::new();
for entity in entities
{
if let Some(spotlight_component) = world.spotlights.get(entity)
{
if let Some(transform) = world.transforms.get(entity)
{
let position = transform.position + spotlight_component.offset;
let direction = transform.rotation * spotlight_component.direction;
spotlights.push(Spotlight::new(
position,
direction,
spotlight_component.inner_angle,
spotlight_component.outer_angle,
spotlight_component.range,
));
}
}
}
spotlights
}

@@ -0,0 +1,112 @@
use crate::components::DissolveComponent;
use crate::world::World;
use glam::Vec3;
pub fn tree_dissolve_update_system(world: &mut World, delta: f32)
{
for entity in world.dissolves.all()
{
if let Some(dissolve) = world.dissolves.get_mut(entity)
{
let diff = dissolve.target_amount - dissolve.amount;
dissolve.amount += diff * dissolve.transition_speed * delta;
dissolve.amount = dissolve.amount.clamp(0.0, 1.0);
}
}
}
pub fn tree_occlusion_system(world: &mut World)
{
let player_entity = world.player_tags.all().first().copied();
let player_pos = player_entity.and_then(|e| world.transforms.get(e).map(|t| t.position));
if let Some(player_pos) = player_pos
{
let camera_entity = world.cameras.get_active().map(|(e, _)| e);
let camera_pos = camera_entity.and_then(|e| world.transforms.get(e).map(|t| t.position));
let tree_count = world.tree_tags.all().len();
if tree_count > 0
{
static mut FRAME_COUNT: u32 = 0;
unsafe {
FRAME_COUNT += 1;
if FRAME_COUNT % 60 == 0
{
println!("Tree occlusion system: {} trees detected", tree_count);
}
}
}
if let Some(camera_pos) = camera_pos
{
let to_player = player_pos - camera_pos;
let distance_to_player = to_player.length();
if distance_to_player < 0.01
{
return;
}
let to_player_normalized = to_player.normalize();
for tree_entity in world.tree_tags.all()
{
if let Some(tree_transform) = world.transforms.get(tree_entity)
{
let tree_pos = tree_transform.position;
let to_tree = tree_pos - camera_pos;
let distance_to_tree = to_tree.length();
if distance_to_tree < distance_to_player
{
let projection = to_tree.dot(to_player_normalized);
if projection > 0.0
{
let perpendicular_vec = to_tree - to_player_normalized * projection;
let perp_distance = perpendicular_vec.length();
let occlusion_radius = 2.5;
if perp_distance < occlusion_radius
{
let dissolve_amount =
1.0 - (perp_distance / occlusion_radius).clamp(0.0, 1.0);
static mut DEBUG_FRAME: u32 = 0;
unsafe {
DEBUG_FRAME += 1;
if DEBUG_FRAME % 60 == 0
{
println!(
"Tree occluding! perp_dist: {:.2}, dissolve: {:.2}",
perp_distance, dissolve_amount
);
}
}
if let Some(dissolve) = world.dissolves.get_mut(tree_entity)
{
dissolve.target_amount = dissolve_amount;
}
else
{
let mut dissolve = DissolveComponent::new();
dissolve.target_amount = dissolve_amount;
world.dissolves.insert(tree_entity, dissolve);
}
continue;
}
}
}
if let Some(dissolve) = world.dissolves.get_mut(tree_entity)
{
dissolve.target_amount = 0.0;
}
}
}
}
}
}

@@ -7,11 +7,12 @@ use rapier3d::{
math::Isometry,
prelude::{ColliderBuilder, RigidBodyBuilder},
};
use wesl::Wesl;
use crate::{
components::{MeshComponent, PhysicsComponent},
entity::EntityHandle,
mesh::{InstanceRaw, Mesh, Vertex},
mesh::{InstanceData, InstanceRaw, Mesh, Vertex},
physics::PhysicsManager,
render,
world::{Transform, World},
@@ -39,7 +40,7 @@ impl TerrainConfig
{
Self {
gltf_path: "meshes/terrain.gltf".to_string(),
heightmap_path: "textures/terrain.exr".to_string(),
heightmap_path: "textures/terrain_heightmap.exr".to_string(),
size: Vec2::new(1000.0, 1000.0),
}
}
@@ -49,111 +50,102 @@ pub struct Terrain;
impl Terrain
{
pub fn spawn(world: &mut World, config: &TerrainConfig) -> anyhow::Result<EntityHandle>
pub fn spawn(
world: &mut World,
mesh_data: Vec<(Mesh, Vec<InstanceData>)>,
config: &TerrainConfig,
) -> anyhow::Result<EntityHandle>
{
let gltf_data = render::with_device(|device| {
Mesh::load_gltf_with_instances(device, &config.gltf_path)
})?;
let terrain_entity = world.spawn();
let transform = Transform::IDENTITY;
let mut first_entity = None;
let mut physics_added = false;
let mut terrain_mesh = None;
let mut tree_mesh = None;
let mut tree_instances = None;
for (mesh, instances) in gltf_data
for (mesh, instances) in mesh_data
{
let entity = world.spawn();
if first_entity.is_none()
{
first_entity = Some(entity);
}
if instances.is_empty()
{
if terrain_mesh.is_none()
world.transforms.insert(entity, transform);
world.meshes.insert(
entity,
MeshComponent {
mesh: Rc::new(mesh),
pipeline: render::Pipeline::Terrain,
instance_buffer: None,
num_instances: 1,
},
);
if !physics_added
{
terrain_mesh = Some(mesh);
let heights = Self::load_heightfield_from_exr(&config.heightmap_path)?;
let height_scale = 1.0;
let scale = vector![config.size.x, height_scale, config.size.y];
let body = RigidBodyBuilder::fixed()
.translation(transform.get_position().into())
.build();
let rigidbody_handle = PhysicsManager::add_rigidbody(body);
let collider = ColliderBuilder::heightfield(heights.clone(), scale).build();
let collider_handle =
PhysicsManager::add_collider(collider, Some(rigidbody_handle));
PhysicsManager::set_heightfield_data(
heights,
scale,
transform.get_position().into(),
);
world.physics.insert(
entity,
PhysicsComponent {
rigidbody: rigidbody_handle,
collider: Some(collider_handle),
},
);
physics_added = true;
}
}
else
{
tree_mesh = Some(mesh);
tree_instances = Some(instances);
let num_instances = instances.len();
let instance_raw: Vec<InstanceRaw> = instances.iter().map(|i| i.to_raw()).collect();
let instance_buffer = render::with_device(|device| {
use wgpu::util::DeviceExt;
device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Tree Instance Buffer"),
contents: bytemuck::cast_slice(&instance_raw),
usage: wgpu::BufferUsages::VERTEX,
})
});
world.transforms.insert(entity, Transform::IDENTITY);
world.meshes.insert(
entity,
MeshComponent {
mesh: Rc::new(mesh),
pipeline: render::Pipeline::Environment,
instance_buffer: Some(instance_buffer),
num_instances: num_instances as u32,
},
);
}
}
if let Some(terrain_mesh) = terrain_mesh
{
world.transforms.insert(terrain_entity, transform);
world.meshes.insert(
terrain_entity,
MeshComponent {
mesh: Rc::new(terrain_mesh),
pipeline: render::Pipeline::Terrain,
instance_buffer: None,
num_instances: 1,
},
);
let heights = Self::load_heightfield_from_exr(&config.heightmap_path)?;
println!(
"Loaded terrain: {} rows × {} cols heightfield from EXR",
heights.nrows(),
heights.ncols()
);
let height_scale = 1.0;
let scale = vector![config.size.x, height_scale, config.size.y];
let body = RigidBodyBuilder::fixed()
.translation(transform.get_position().into())
.build();
let rigidbody_handle = PhysicsManager::add_rigidbody(body);
let collider = ColliderBuilder::heightfield(heights.clone(), scale).build();
let collider_handle = PhysicsManager::add_collider(collider, Some(rigidbody_handle));
PhysicsManager::set_heightfield_data(heights, scale, transform.get_position().into());
world.physics.insert(
terrain_entity,
PhysicsComponent {
rigidbody: rigidbody_handle,
collider: Some(collider_handle),
},
);
}
if let (Some(tree_mesh), Some(instances)) = (tree_mesh, tree_instances)
{
let num_instances = instances.len();
println!("Loaded {} tree instances", num_instances);
let tree_entity = world.spawn();
let instance_raw: Vec<InstanceRaw> = instances.iter().map(|i| i.to_raw()).collect();
let instance_buffer = render::with_device(|device| {
use wgpu::util::DeviceExt;
device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Tree Instance Buffer"),
contents: bytemuck::cast_slice(&instance_raw),
usage: wgpu::BufferUsages::VERTEX,
})
});
world.transforms.insert(tree_entity, Transform::IDENTITY);
world.meshes.insert(
tree_entity,
MeshComponent {
mesh: Rc::new(tree_mesh),
pipeline: render::Pipeline::Render,
instance_buffer: Some(instance_buffer),
num_instances: num_instances as u32,
},
);
}
Ok(terrain_entity)
first_entity.ok_or_else(|| anyhow::anyhow!("No meshes found in glTF file"))
}
fn load_heightfield_from_exr(path: &str) -> anyhow::Result<DMatrix<f32>>
@@ -184,11 +176,12 @@ pub fn create_terrain_render_pipeline(
bind_group_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline
{
let shared_source =
std::fs::read_to_string("shaders/shared.wgsl").expect("Failed to read shared shader");
let terrain_source =
std::fs::read_to_string("shaders/terrain.wgsl").expect("Failed to read terrain shader");
let shader_source = format!("{}\n{}", shared_source, terrain_source);
let compiler = Wesl::new("src/shaders");
let shader_source = compiler
.compile(&"package::terrain".parse().unwrap())
.inspect_err(|e| eprintln!("WESL error: {e}"))
.unwrap()
.to_string();
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("Terrain Shader"),

@@ -1,6 +1,5 @@
use anyhow::Result;
use exr::prelude::{ReadChannels, ReadLayers};
use half::f16;
pub struct DitherTextures
{
@@ -16,6 +15,13 @@ pub struct FlowmapTexture
pub sampler: wgpu::Sampler,
}
pub struct HeightmapTexture
{
pub texture: wgpu::Texture,
pub view: wgpu::TextureView,
pub sampler: wgpu::Sampler,
}
impl DitherTextures
{
pub fn load_octaves(device: &wgpu::Device, queue: &wgpu::Queue) -> Result<Self>
@@ -186,16 +192,11 @@ impl FlowmapTexture
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: wgpu::TextureFormat::Rgba16Float,
format: wgpu::TextureFormat::Rgba32Float,
usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
view_formats: &[],
});
let rgba_data_f16: Vec<u16> = rgba_data
.iter()
.map(|&f| f16::from_f32(f).to_bits())
.collect();
queue.write_texture(
wgpu::TexelCopyTextureInfo {
texture: &texture,
@@ -203,10 +204,10 @@ impl FlowmapTexture
origin: wgpu::Origin3d::ZERO,
aspect: wgpu::TextureAspect::All,
},
bytemuck::cast_slice(&rgba_data_f16),
bytemuck::cast_slice(&rgba_data),
wgpu::TexelCopyBufferLayout {
offset: 0,
bytes_per_row: Some(8 * width as u32),
bytes_per_row: Some(16 * width as u32),
rows_per_image: Some(height as u32),
},
wgpu::Extent3d {
@@ -223,8 +224,8 @@ impl FlowmapTexture
address_mode_u: wgpu::AddressMode::Repeat,
address_mode_v: wgpu::AddressMode::Repeat,
address_mode_w: wgpu::AddressMode::Repeat,
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
mag_filter: wgpu::FilterMode::Nearest,
min_filter: wgpu::FilterMode::Nearest,
mipmap_filter: wgpu::FilterMode::Nearest,
..Default::default()
});
@@ -236,3 +237,76 @@ impl FlowmapTexture
})
}
}
impl HeightmapTexture
{
pub fn load(device: &wgpu::Device, queue: &wgpu::Queue, path: &str) -> Result<Self>
{
let image = exr::prelude::read()
.no_deep_data()
.largest_resolution_level()
.all_channels()
.all_layers()
.all_attributes()
.from_file(path)?;
let layer = &image.layer_data[0];
let width = layer.size.width();
let height = layer.size.height();
let channel = &layer.channel_data.list[0];
let height_data: Vec<f32> = channel.sample_data.values_as_f32().collect();
let texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some("Heightmap Texture"),
size: wgpu::Extent3d {
width: width as u32,
height: height as u32,
depth_or_array_layers: 1,
},
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: wgpu::TextureFormat::R32Float,
usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
view_formats: &[],
});
queue.write_texture(
wgpu::TexelCopyTextureInfo {
texture: &texture,
mip_level: 0,
origin: wgpu::Origin3d::ZERO,
aspect: wgpu::TextureAspect::All,
},
bytemuck::cast_slice(&height_data),
wgpu::TexelCopyBufferLayout {
offset: 0,
bytes_per_row: Some(4 * width as u32),
rows_per_image: Some(height as u32),
},
wgpu::Extent3d {
width: width as u32,
height: height as u32,
depth_or_array_layers: 1,
},
);
let view = texture.create_view(&wgpu::TextureViewDescriptor::default());
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("Heightmap Sampler"),
address_mode_u: wgpu::AddressMode::ClampToEdge,
address_mode_v: wgpu::AddressMode::ClampToEdge,
mag_filter: wgpu::FilterMode::Nearest,
min_filter: wgpu::FilterMode::Nearest,
..Default::default()
});
Ok(Self {
texture,
view,
sampler,
})
}
}

@@ -22,6 +22,16 @@ impl Transform
Self::IDENTITY.translated(position)
}
pub fn from_matrix(matrix: Mat4) -> Self
{
let (scale, rotation, position) = matrix.to_scale_rotation_translation();
Self {
position,
rotation,
scale,
}
}
pub fn to_matrix(&self) -> Mat4
{
Mat4::from_scale_rotation_translation(self.scale, self.rotation, self.position)

@@ -1,9 +1,12 @@
use std::collections::HashMap;
use crate::components::dissolve::DissolveComponent;
use crate::components::follow::FollowComponent;
use crate::components::jump::JumpComponent;
use crate::components::lights::spot::SpotlightComponent;
use crate::components::{
CameraComponent, CameraFollowComponent, InputComponent, MeshComponent, MovementComponent,
PhysicsComponent,
CameraComponent, InputComponent, MeshComponent, MovementComponent, PhysicsComponent,
RotateComponent,
};
use crate::entity::{EntityHandle, EntityManager};
use crate::state::StateMachine;
@@ -415,12 +418,12 @@ impl CameraStorage
}
}
pub struct CameraFollowStorage
pub struct SpotlightStorage
{
pub components: HashMap<EntityHandle, CameraFollowComponent>,
pub components: HashMap<EntityHandle, SpotlightComponent>,
}
impl CameraFollowStorage
impl SpotlightStorage
{
pub fn new() -> Self
{
@@ -429,26 +432,159 @@ impl CameraFollowStorage
}
}
pub fn insert(&mut self, entity: EntityHandle, component: CameraFollowComponent)
pub fn insert(&mut self, entity: EntityHandle, component: SpotlightComponent)
{
self.components.insert(entity, component);
}
pub fn get(&self, entity: EntityHandle) -> Option<&CameraFollowComponent>
pub fn get(&self, entity: EntityHandle) -> Option<&SpotlightComponent>
{
self.components.get(&entity)
}
pub fn get_mut(&mut self, entity: EntityHandle) -> Option<&mut CameraFollowComponent>
pub fn get_mut(&mut self, entity: EntityHandle) -> Option<&mut SpotlightComponent>
{
self.components.get_mut(&entity)
}
pub fn with_mut<F, R>(&mut self, entity: EntityHandle, f: F) -> Option<R>
where
F: FnOnce(&mut CameraFollowComponent) -> R,
pub fn remove(&mut self, entity: EntityHandle)
{
self.components.get_mut(&entity).map(f)
self.components.remove(&entity);
}
pub fn all(&self) -> Vec<EntityHandle>
{
self.components.keys().copied().collect()
}
}
pub struct TreeTagStorage
{
pub components: HashMap<EntityHandle, ()>,
}
impl TreeTagStorage
{
pub fn new() -> Self
{
Self {
components: HashMap::new(),
}
}
pub fn insert(&mut self, entity: EntityHandle)
{
self.components.insert(entity, ());
}
pub fn remove(&mut self, entity: EntityHandle)
{
self.components.remove(&entity);
}
pub fn all(&self) -> Vec<EntityHandle>
{
self.components.keys().copied().collect()
}
}
pub struct DissolveStorage
{
pub components: HashMap<EntityHandle, DissolveComponent>,
}
impl DissolveStorage
{
pub fn new() -> Self
{
Self {
components: HashMap::new(),
}
}
pub fn insert(&mut self, entity: EntityHandle, component: DissolveComponent)
{
self.components.insert(entity, component);
}
pub fn get(&self, entity: EntityHandle) -> Option<&DissolveComponent>
{
self.components.get(&entity)
}
pub fn get_mut(&mut self, entity: EntityHandle) -> Option<&mut DissolveComponent>
{
self.components.get_mut(&entity)
}
pub fn remove(&mut self, entity: EntityHandle)
{
self.components.remove(&entity);
}
pub fn all(&self) -> Vec<EntityHandle>
{
self.components.keys().copied().collect()
}
}
pub struct FollowStorage
{
pub components: HashMap<EntityHandle, FollowComponent>,
}
impl FollowStorage
{
pub fn new() -> Self
{
Self {
components: HashMap::new(),
}
}
pub fn insert(&mut self, entity: EntityHandle, component: FollowComponent)
{
self.components.insert(entity, component);
}
pub fn get(&self, entity: EntityHandle) -> Option<&FollowComponent>
{
self.components.get(&entity)
}
pub fn remove(&mut self, entity: EntityHandle)
{
self.components.remove(&entity);
}
pub fn all(&self) -> Vec<EntityHandle>
{
self.components.keys().copied().collect()
}
}
pub struct RotateStorage
{
pub components: HashMap<EntityHandle, RotateComponent>,
}
impl RotateStorage
{
pub fn new() -> Self
{
Self {
components: HashMap::new(),
}
}
pub fn insert(&mut self, entity: EntityHandle, component: RotateComponent)
{
self.components.insert(entity, component);
}
pub fn get(&self, entity: EntityHandle) -> Option<&RotateComponent>
{
self.components.get(&entity)
}
pub fn remove(&mut self, entity: EntityHandle)
@@ -474,7 +610,11 @@ pub struct World
pub player_tags: PlayerTagStorage,
pub state_machines: StateMachineStorage,
pub cameras: CameraStorage,
pub camera_follows: CameraFollowStorage,
pub spotlights: SpotlightStorage,
pub tree_tags: TreeTagStorage,
pub dissolves: DissolveStorage,
pub follows: FollowStorage,
pub rotates: RotateStorage,
}
impl World
@@ -492,7 +632,11 @@ impl World
player_tags: PlayerTagStorage::new(),
state_machines: StateMachineStorage::new(),
cameras: CameraStorage::new(),
camera_follows: CameraFollowStorage::new(),
spotlights: SpotlightStorage::new(),
tree_tags: TreeTagStorage::new(),
dissolves: DissolveStorage::new(),
follows: FollowStorage::new(),
rotates: RotateStorage::new(),
}
}
@@ -512,7 +656,11 @@ impl World
self.player_tags.remove(entity);
self.state_machines.remove(entity);
self.cameras.remove(entity);
self.camera_follows.remove(entity);
self.spotlights.remove(entity);
self.tree_tags.remove(entity);
self.dissolves.remove(entity);
self.follows.remove(entity);
self.rotates.remove(entity);
self.entities.despawn(entity);
}
}

textures/blue_noise.png Normal file

@@ -0,0 +1,62 @@
# Texture Generation Scripts
## Blue Noise Generator
`generate_blue_noise.py` - Generates blue noise textures for high-quality dithering effects.
### Requirements
```bash
pip install numpy pillow scipy
```
### Usage
Basic usage (generates 128x128 texture):
```bash
python generate_blue_noise.py
```
Custom size:
```bash
python generate_blue_noise.py --width 256 --height 256
```
Custom output path:
```bash
python generate_blue_noise.py --output ../my_blue_noise.png
```
Advanced options:
```bash
python generate_blue_noise.py --width 128 --height 128 --sigma 1.5 --method void_cluster
```
### Parameters
- `--width`: Texture width in pixels (default: 128)
- `--height`: Texture height in pixels (default: 128)
- `--method`: Generation method
- `void_cluster`: High-quality void-and-cluster method (default, recommended)
- `annealing`: Simulated annealing method (slower)
- `--sigma`: Gaussian kernel sigma for void_cluster method (default: 1.5)
- Lower values (0.8-1.2): Tighter clustering, more high-frequency
- Higher values (2.0-3.0): Smoother distribution
- `--iterations`: Number of iterations (optional, auto-calculated if not specified)
- `--output`: Output file path (default: ../blue_noise.png)
### What is Blue Noise?
Blue noise is a type of noise with energy concentrated in high frequencies and minimal low-frequency content. This makes it ideal for dithering because:
- No visible patterns or clustering
- Smooth gradients without banding
- Perceptually pleasing distribution
- Better than Bayer or white noise for transparency effects
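
The threshold comparison that makes this work can be sketched in a few lines of NumPy. This is a minimal illustration, not part of the script: it uses white noise as a stand-in for a real blue-noise map (the dithering logic is identical; only the spatial quality of the pattern differs), and the 128x128 size matches the generator's default.

```python
import numpy as np

# Stand-in threshold map (white noise); a real workflow would load
# blue_noise.png here instead for a visually smoother result.
rng = np.random.default_rng(0)
threshold_map = rng.random((128, 128))

# A horizontal brightness ramp from 0.0 to 1.0.
gradient = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))

# Ordered dithering: a pixel turns "on" when its brightness exceeds
# the threshold at the same position, yielding a 1-bit image whose
# local density of "on" pixels tracks the input brightness.
dithered = (gradient > threshold_map).astype(np.uint8)

print(dithered.mean())  # average density is close to 0.5 for this ramp
```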
### Use Cases in snow_trail_sdl
- **Tree dissolve effect**: Dither trees between camera and player for unobstructed view
- **Temporal effects**: Screen-space dithering for transitions
- **Transparency**: High-quality alpha dithering
- **LOD transitions**: Smooth fade between detail levels
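
For the tree-dissolve case, the per-fragment test a shader would perform (discard the fragment when the noise value falls below the dissolve amount) can be modeled on the CPU as a hedged sketch. The function name, the tiling lookup, and the 0.75 dissolve value are illustrative, and white noise again stands in for the generated texture:

```python
import numpy as np

def dissolve_keep(frag_x, frag_y, dissolve_amount, noise):
    """Keep the fragment when the tiled noise value is at or above the
    dissolve amount (mirrors a shader discard test). dissolve 0.0 keeps
    everything; 1.0 discards everything."""
    h, w = noise.shape
    return bool(noise[frag_y % h, frag_x % w] >= dissolve_amount)

# Stand-in for the blue-noise texture.
rng = np.random.default_rng(1)
noise = rng.random((128, 128))

# At dissolve 0.75, roughly a quarter of the fragments should survive.
kept = sum(
    dissolve_keep(x, y, 0.75, noise)
    for y in range(128) for x in range(128)
)
print(kept / (128 * 128))  # roughly 0.25
```

With a real blue-noise map, the surviving pixels are evenly spread rather than clumped, which is what makes the fade read as a smooth transparency change.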

@@ -0,0 +1,155 @@
#!/usr/bin/env python3
import numpy as np
from PIL import Image
import argparse
from pathlib import Path
def gaussian_kernel(size, sigma):
x = np.arange(-size // 2 + 1, size // 2 + 1)
y = np.arange(-size // 2 + 1, size // 2 + 1)
xx, yy = np.meshgrid(x, y)
kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
return kernel / kernel.sum()
def apply_periodic_filter(binary_pattern, kernel):
from scipy.signal import fftconvolve
return fftconvolve(binary_pattern, kernel, mode='same')
def generate_blue_noise_void_cluster(width, height, sigma=1.5, iterations=None):
if iterations is None:
iterations = width * height
size = width * height
pattern = np.zeros((height, width), dtype=np.float32)
kernel_size = int(6 * sigma)
if kernel_size % 2 == 0:
kernel_size += 1
kernel = gaussian_kernel(kernel_size, sigma)
initial_pattern = np.random.rand(height, width)
print(f"Generating {width}x{height} blue noise texture...")
print(f"Kernel size: {kernel_size}x{kernel_size}, sigma: {sigma}")
dither_array = np.zeros(size, dtype=np.int32)
binary_pattern = np.zeros((height, width), dtype=np.float32)
for i in range(size):
if i % (size // 10) == 0:
print(f"Progress: {i}/{size} ({100*i//size}%)")
filtered = apply_periodic_filter(binary_pattern, kernel)
if i < size // 2:
initial_energy = initial_pattern + filtered
coords = np.unravel_index(np.argmax(initial_energy), initial_energy.shape)
else:
coords = np.unravel_index(np.argmin(filtered), filtered.shape)
dither_array[i] = coords[0] * width + coords[1]
binary_pattern[coords[0], coords[1]] = 1.0
print("Converting to threshold map...")
threshold_map = np.zeros((height, width), dtype=np.float32)
for rank, pos in enumerate(dither_array):
y = pos // width
x = pos % width
threshold_map[y, x] = rank / size
print("Done!")
return threshold_map
def generate_blue_noise_simulated_annealing(width, height, iterations=10000):
print(f"Generating {width}x{height} blue noise using simulated annealing...")
pattern = np.random.rand(height, width)
def energy(pattern):
fft = np.fft.fft2(pattern)
power = np.abs(fft) ** 2
h, w = pattern.shape
cy, cx = h // 2, w // 2
y, x = np.ogrid[:h, :w]
dist = np.sqrt((x - cx)**2 + (y - cy)**2)
low_freq_mask = dist < min(h, w) * 0.1
low_freq_energy = np.sum(power * low_freq_mask)
return low_freq_energy
current_energy = energy(pattern)
temperature = 1.0
cooling_rate = 0.9995
for i in range(iterations):
if i % (iterations // 10) == 0:
print(f"Iteration {i}/{iterations}, Energy: {current_energy:.2f}, Temp: {temperature:.4f}")
y1, x1 = np.random.randint(0, height), np.random.randint(0, width)
y2, x2 = np.random.randint(0, height), np.random.randint(0, width)
pattern[y1, x1], pattern[y2, x2] = pattern[y2, x2], pattern[y1, x1]
new_energy = energy(pattern)
delta_energy = new_energy - current_energy
if delta_energy < 0 or np.random.rand() < np.exp(-delta_energy / temperature):
current_energy = new_energy
else:
pattern[y1, x1], pattern[y2, x2] = pattern[y2, x2], pattern[y1, x1]
temperature *= cooling_rate
print("Done!")
return pattern
def main():
parser = argparse.ArgumentParser(description='Generate blue noise texture for dithering')
parser.add_argument('--width', type=int, default=128, help='Texture width (default: 128)')
parser.add_argument('--height', type=int, default=128, help='Texture height (default: 128)')
parser.add_argument('--method', choices=['void_cluster', 'annealing'], default='void_cluster',
help='Generation method (default: void_cluster)')
parser.add_argument('--sigma', type=float, default=1.5,
help='Gaussian kernel sigma for void_cluster method (default: 1.5)')
parser.add_argument('--iterations', type=int, default=None,
help='Number of iterations (optional)')
parser.add_argument('--output', type=str, default='../blue_noise.png',
help='Output file path (default: ../blue_noise.png)')
args = parser.parse_args()
try:
from scipy.signal import fftconvolve
except ImportError:
print("Error: scipy is required for this script.")
print("Install it with: pip install scipy")
return
if args.method == 'void_cluster':
noise = generate_blue_noise_void_cluster(args.width, args.height, args.sigma, args.iterations)
else:
noise = generate_blue_noise_simulated_annealing(args.width, args.height,
args.iterations or 10000)
noise_normalized = ((noise - noise.min()) / (noise.max() - noise.min()) * 255).astype(np.uint8)
img = Image.fromarray(noise_normalized, mode='L')
output_path = Path(__file__).parent / args.output
output_path.parent.mkdir(parents=True, exist_ok=True)
img.save(output_path)
print(f"\nBlue noise texture saved to: {output_path}")
print(f"Size: {args.width}x{args.height}")
print(f"Method: {args.method}")
fft = np.fft.fft2(noise)
power_spectrum = np.abs(np.fft.fftshift(fft)) ** 2
print(f"Power spectrum range: {power_spectrum.min():.2e} - {power_spectrum.max():.2e}")
if __name__ == '__main__':
main()

textures/snow_depth.exr Normal file
