stylized 1-bit rendering

CLAUDE.md
@@ -6,6 +6,8 @@ This file provides guidance to Claude Code when working with code in this reposi

This is a pure Rust game project using SDL3 for windowing/input, wgpu for rendering, rapier3d for physics, and a low-res retro aesthetic with dithering. It is a migration of the Godot-based snow_trail project, implementing the same snow deformation system and character controller without engine dependencies.

**Content Creation:** Blender 5.0 is used for terrain modeling and asset export (glTF meshes + EXR heightmaps).

## Code Style

**Code Documentation Guidelines**:
@@ -143,12 +145,17 @@ SDL Events → InputState → player_input_system() → InputComponent → movem

- Final blit pass upscales framebuffer to window using nearest-neighbor sampling
- Depth buffer for 3D rendering with proper occlusion

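The nearest-neighbor upscale of the blit pass can be sketched in NumPy terms. This is an illustrative CPU model only; in the project the upscale runs on the GPU in the blit shader:

```python
import numpy as np

def nearest_neighbor_upscale(framebuffer, scale):
    """Duplicate each pixel into a scale x scale block (no filtering or blending)."""
    return np.repeat(np.repeat(framebuffer, scale, axis=0), scale, axis=1)

# A 160x120 framebuffer scaled 8x fills a 1280x960 window exactly.
fb = np.zeros((120, 160, 3), dtype=np.uint8)
fb[0, 0] = (255, 0, 0)  # one red pixel in the corner
up = nearest_neighbor_upscale(fb, 8)
assert up.shape == (960, 1280, 3)
assert (up[:8, :8] == np.array([255, 0, 0], dtype=np.uint8)).all()  # crisp 8x8 block
```

Because pixels are duplicated rather than interpolated, edges stay hard, which is what preserves the dithered retro look.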
**Terrain Rendering:**
- glTF mesh exported from Blender 5.0 with baked height values in vertices
- No runtime displacement in shader; vertices are rendered directly
- Separate terrain pipeline for terrain-specific rendering
- Terrain mesh has heights pre-baked during export for optimal performance

**Terrain Physics:**
- EXR heightmap files loaded via `exr` crate (single-channel 32-bit float data)
- Heightmap loaded directly into a rapier3d heightfield collider
- No runtime sampling or computation; loading is effectively instant
- Both glTF and EXR are exported from the same Blender terrain, so visuals and physics are guaranteed to match

**Lighting Model:**
- Directional light (like Godot's DirectionalLight3D)

@@ -194,7 +201,7 @@ cargo fmt

WGSL shaders are stored in the `shaders/` directory:
- `shaders/standard.wgsl` - Standard mesh rendering with directional lighting
- `shaders/terrain.wgsl` - Terrain rendering with shadow mapping (no displacement)
- `shaders/blit.wgsl` - Fullscreen blit for upscaling the low-res framebuffer

Shaders are loaded at runtime via `std::fs::read_to_string()`, so shader changes are picked up by simply restarting the application, with no rebuild required.
@@ -224,10 +231,9 @@ Shaders are loaded at runtime via `std::fs::read_to_string()`, allowing hot-relo

**Rendering:**
- `render.rs` - wgpu renderer, pipelines, bind groups, DrawCall execution
- `shader.rs` - Standard mesh shader (WGSL) with diffuse + ambient lighting
- `terrain.rs` - Terrain entity spawning, glTF loading, EXR heightmap → physics collider
- `postprocess.rs` - Low-res framebuffer and blit shader for upscaling
- `mesh.rs` - Vertex/Mesh structs, plane/cube mesh generation, glTF loading
- `heightmap.rs` - EXR heightmap loading using the `exr` crate
- `draw.rs` - DrawManager (legacy, kept for compatibility)

**Game Logic:**
@@ -260,32 +266,32 @@ Shaders are loaded at runtime via `std::fs::read_to_string()`, allowing hot-relo

- **bytemuck**: Safe byte casting for GPU buffer uploads (Pod/Zeroable for vertex data)
- **anyhow**: Ergonomic error handling
- **gltf**: Loading 3D models in glTF format
- **image**: Image loading and processing (includes EXR support)
- **half**: Float16 support (dependency of exr)
- **exr**: Loading EXR heightmap files (single-channel float data for physics colliders)
- **kurbo**: Bezier curve evaluation for movement acceleration curves

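kurbo handles the curve evaluation on the Rust side; the underlying cubic Bezier math is plain arithmetic, sketched here with illustrative control values (not the project's actual tuning):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Closed-form cubic Bezier: B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3."""
    u = 1.0 - t
    return u**3 * p0 + 3.0 * u**2 * t * p1 + 3.0 * u * t**2 * p2 + t**3 * p3

# An ease-in style 1D acceleration curve (control values are illustrative).
p0, p1, p2, p3 = 0.0, 0.1, 0.3, 1.0
assert cubic_bezier(p0, p1, p2, p3, 0.0) == 0.0
assert cubic_bezier(p0, p1, p2, p3, 1.0) == 1.0
assert cubic_bezier(p0, p1, p2, p3, 0.5) < 0.5  # slow start: below the linear ramp at t=0.5
```

Pulling the control points toward 0 at the start gives the slow-ramping acceleration feel without any branching in the movement code.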
## Technical Notes

### EXR Heightmap Loading (Physics Only)
When loading EXR files with the `exr` crate for physics colliders:
- Must import traits: `use exr::prelude::{ReadChannels, ReadLayers};`
- Use builder pattern: `.no_deep_data().largest_resolution_level().all_channels().all_layers().all_attributes().from_file(path)`
- Extract float data: `channel.sample_data.values_as_f32().collect()`
- Convert to a nalgebra DMatrix for the rapier3d heightfield collider
- No GPU texture creation; this data is used for physics only

### wgpu Texture Formats
- R32Float = single-channel 32-bit float, **non-filterable**
- Use `TextureSampleType::Float { filterable: false }` in the bind group layout
- Use `SamplerBindingType::NonFiltering` for the sampler binding
- Attempting linear filtering on R32Float causes validation errors

### Blender Export Workflow
**Using Blender 5.0** for terrain creation and export:
- Export terrain as **glTF** with baked height values in mesh vertices
- Export the same terrain as an **EXR** heightmap (single-channel 32-bit float)
- Both files represent the same terrain data, guaranteeing visual/physics sync
- glTF used for rendering (vertices rendered directly, no shader displacement)
- EXR used for physics (loaded into a rapier3d heightfield collider)

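The sync guarantee can be spot-checked offline. A minimal sketch with a synthetic height grid standing in for the EXR and synthetic vertices standing in for the glTF mesh; all names and data here are illustrative, not the project's assets:

```python
import numpy as np

# Synthetic 4x4 heightmap standing in for the exported EXR (unit grid spacing).
rng = np.random.default_rng(0)
heights = rng.random((4, 4)).astype(np.float32)

# Vertices standing in for the glTF mesh: XY on texel centers, Z baked from the grid.
verts = np.array(
    [[x, y, heights[y, x]] for y in range(4) for x in range(4)],
    dtype=np.float32,
)

# Every baked vertex height must equal the heightmap sample at its grid cell.
sampled = heights[verts[:, 1].astype(int), verts[:, 0].astype(int)]
assert np.allclose(verts[:, 2], sampled)
```

If a check like this ever fails after re-export, the glTF and EXR came from different versions of the terrain.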
### Multiple Render Pipelines
- `Pipeline` enum determines which pipeline to use per DrawCall
- Different pipelines can have different shaders, bind group layouts, and uniforms
- Terrain pipeline: shadow-mapped rendering with line-hatching shading
- Standard pipeline: basic mesh rendering with diffuse lighting
- Each pipeline writes to its own uniform buffer before rendering

### ECS Component Storages
@@ -589,7 +595,8 @@ let time = Time::get_time_elapsed(); // Anywhere in code

- ✅ Low-res framebuffer (160×120) with Bayer dithering
- ✅ Multiple render pipelines (standard mesh + terrain)
- ✅ Directional lighting with diffuse + ambient
- ✅ Terrain rendering (glTF with baked heights, no shader displacement)
- ✅ EXR heightmap loading for physics colliders
- ✅ glTF mesh loading
- ✅ render_system (ECS-based DrawCall generation)

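The 1-bit look comes from ordered dithering against a Bayer threshold matrix. A minimal NumPy sketch of the idea; the project's actual implementation lives in the WGSL shaders:

```python
import numpy as np

# Standard 4x4 Bayer matrix, normalized to thresholds in [0, 1).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]], dtype=np.float32) / 16.0

def dither_1bit(gray):
    """Threshold a [0,1] grayscale image against the tiled Bayer matrix."""
    h, w = gray.shape
    tiled = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)

flat = np.full((4, 4), 0.5, dtype=np.float32)  # uniform mid-gray patch
out = dither_1bit(flat)
assert out.sum() == 8  # 50% gray lights exactly half the pixels, in a fixed ordered pattern
```

Because the threshold pattern is fixed per pixel, mid-tones resolve into stable crosshatch-like patterns instead of noise, which suits the stylized rendering.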
Cargo.lock

@@ -1832,6 +1832,7 @@ dependencies = [
 "pollster",
 "rapier3d",
 "sdl3",
 "serde_json",
 "wgpu",
]

@@ -17,3 +17,4 @@ exr = "1.72"
half = "2.4"
kurbo = "0.11"
nalgebra = { version = "0.34.1", features = ["convert-glam030"] }
serde_json = "1.0"

blender/scripts/generate_flowmap.py

@@ -0,0 +1,594 @@
import bpy
import bmesh
import numpy as np
from pathlib import Path
from mathutils import Vector
from multiprocessing import Pool, cpu_count


def simple_blur(data, iterations=1):
    """
    Simple 3x3 box blur for 2D vector fields.
    """
    result = data.copy()
    h, w, c = data.shape

    for _ in range(iterations):
        blurred = np.zeros_like(result)
        for y in range(h):
            for x in range(w):
                count = 0
                total = np.zeros(c)

                for dy in [-1, 0, 1]:
                    for dx in [-1, 0, 1]:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            total += result[ny, nx]
                            count += 1

                blurred[y, x] = total / count if count > 0 else result[y, x]

        result = blurred

    return result


def sample_curve_points(curve_obj, samples_per_segment=50):
    """
    Sample points along a Blender curve object (Bezier, NURBS, etc).

    Args:
        curve_obj: Blender curve object
        samples_per_segment: Number of samples per curve segment

    Returns:
        List of Vector world-space positions
    """
    curve_obj.data.resolution_u = samples_per_segment

    depsgraph = bpy.context.evaluated_depsgraph_get()
    curve_eval = curve_obj.evaluated_get(depsgraph)
    mesh_from_curve = curve_eval.to_mesh()

    points = [curve_obj.matrix_world @ v.co for v in mesh_from_curve.vertices]

    curve_eval.to_mesh_clear()

    return points


def compute_arc_lengths(points):
    """
    Compute cumulative arc length at each point along a curve.

    Args:
        points: List of Vector positions

    Returns:
        List of floats representing cumulative distance from curve start
    """
    if len(points) == 0:
        return []

    arc_lengths = [0.0]
    cumulative = 0.0

    for i in range(1, len(points)):
        segment_length = (points[i] - points[i - 1]).length
        cumulative += segment_length
        arc_lengths.append(cumulative)

    return arc_lengths


def get_path_curves(collection_name="Paths"):
    """
    Get all curve objects from a collection, or fall back to curves with 'path' in the name.

    Args:
        collection_name: Name of the collection containing path curves

    Returns:
        List of curve objects
    """
    collection = bpy.data.collections.get(collection_name)

    if collection:
        curves = [obj for obj in collection.objects if obj.type == 'CURVE']
        print(f"Found {len(curves)} curves in collection '{collection_name}'")
        return curves

    print(f"Collection '{collection_name}' not found. Searching for curves with 'path' in name...")
    curves = [obj for obj in bpy.data.objects if obj.type == 'CURVE' and 'path' in obj.name.lower()]

    if not curves:
        print("No path curves found. Looking for any curve objects...")
        curves = [obj for obj in bpy.data.objects if obj.type == 'CURVE']

    return curves


def closest_point_on_segment_2d(point, seg_start, seg_end):
    """
    Find the closest point on a 2D line segment to a given point.

    Args:
        point: Vector2 (x, y)
        seg_start: Vector2 segment start
        seg_end: Vector2 segment end

    Returns:
        Vector2 closest point on the segment
    """
    segment = seg_end - seg_start
    point_vec = point - seg_start

    segment_len_sq = segment.length_squared

    if segment_len_sq < 1e-8:
        return seg_start

    t = max(0.0, min(1.0, point_vec.dot(segment) / segment_len_sq))

    return seg_start + segment * t


def process_row_chunk(args):
    """
    Process a chunk of rows for parallel computation.

    Args:
        args: Tuple of (start_row, end_row, width, height_total, min_x, max_x, min_y, max_y, path_segments_simple)

    Returns:
        Tuple of (start_row, flow_field_chunk, distance_field_chunk, arc_length_field_chunk, segment_id_chunk)
    """
    start_row, end_row, width, height_total, min_x, max_x, min_y, max_y, path_segments_simple = args

    height = end_row - start_row
    flow_field_chunk = np.zeros((height, width, 2), dtype=np.float32)
    distance_field_chunk = np.zeros((height, width), dtype=np.float32)
    arc_length_field_chunk = np.zeros((height, width), dtype=np.float32)
    segment_id_chunk = np.zeros((height, width), dtype=np.int32)

    for local_y in range(height):
        y = start_row + local_y

        for x in range(width):
            u = x / width
            v = y / height_total

            world_x = min_x + u * (max_x - min_x)
            world_y = min_y + v * (max_y - min_y)
            world_pos_2d = np.array([world_x, world_y])

            min_dist = float('inf')
            closest_point = None
            closest_arc_length = 0.0
            closest_segment_id = -1

            for seg_id, (seg_start, seg_end, arc_start, arc_end) in enumerate(path_segments_simple):
                segment = seg_end - seg_start
                point_vec = world_pos_2d - seg_start

                segment_len_sq = np.dot(segment, segment)

                if segment_len_sq < 1e-8:
                    closest_on_segment = seg_start
                    t = 0.0
                else:
                    t = max(0.0, min(1.0, np.dot(point_vec, segment) / segment_len_sq))
                    closest_on_segment = seg_start + segment * t

                dist = np.linalg.norm(closest_on_segment - world_pos_2d)

                if dist < min_dist:
                    min_dist = dist
                    closest_point = closest_on_segment
                    closest_arc_length = arc_start + t * (arc_end - arc_start)
                    closest_segment_id = seg_id

            if closest_point is not None:
                direction = closest_point - world_pos_2d
                flow_field_chunk[local_y, x, 0] = direction[0]
                flow_field_chunk[local_y, x, 1] = -direction[1]
                distance_field_chunk[local_y, x] = min_dist
                arc_length_field_chunk[local_y, x] = closest_arc_length
                segment_id_chunk[local_y, x] = closest_segment_id

    return (start_row, flow_field_chunk, distance_field_chunk, arc_length_field_chunk, segment_id_chunk)


def generate_flowmap_from_paths(terrain_obj, path_curves, resolution=1024, blur_iterations=2):
    """
    Generate a flowmap texture showing direction toward the nearest path.

    Args:
        terrain_obj: Blender mesh object (the terrain plane)
        path_curves: List of Blender curve objects representing paths
        resolution: Output texture resolution (square)
        blur_iterations: Number of smoothing passes for the flow field

    Returns:
        Blender image containing the encoded flowmap
    """
    print(f"Generating flowmap from {len(path_curves)} path curves")
    print(f"Output resolution: {resolution}×{resolution}")

    mesh = terrain_obj.data
    bm = bmesh.new()
    bm.from_mesh(mesh)
    bm.verts.ensure_lookup_table()

    world_matrix = terrain_obj.matrix_world
    verts_world = [world_matrix @ v.co for v in bm.verts]

    if len(verts_world) == 0:
        raise ValueError("Terrain mesh has no vertices")

    xs = [v.x for v in verts_world]
    ys = [v.y for v in verts_world]

    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)

    print(f"Terrain bounds: X=[{min_x:.2f}, {max_x:.2f}], Y=[{min_y:.2f}, {max_y:.2f}]")

    print("Sampling path curves...")
    all_path_points = []
    for curve_obj in path_curves:
        points = sample_curve_points(curve_obj, samples_per_segment=50)
        all_path_points.extend(points)
        print(f"  {curve_obj.name}: {len(points)} points")

    print(f"Total path points: {len(all_path_points)}")

    if len(all_path_points) == 0:
        raise ValueError("No path points sampled. Check that path curves exist and have geometry.")

    print("Building line segments from path points with arc lengths...")
    path_segments = []

    for curve_obj in path_curves:
        curve_points = sample_curve_points(curve_obj, samples_per_segment=50)
        arc_lengths = compute_arc_lengths(curve_points)
        for i in range(len(curve_points) - 1):
            path_segments.append((curve_points[i], curve_points[i + 1], arc_lengths[i], arc_lengths[i + 1]))

    print(f"Created {len(path_segments)} line segments with arc length data")

    print("Generating flow field toward nearest path...")
    h, w = resolution, resolution
    flow_field = np.zeros((h, w, 2), dtype=np.float32)
    distance_field = np.zeros((h, w), dtype=np.float32)
    arc_length_field = np.zeros((h, w), dtype=np.float32)
    segment_id_field = np.zeros((h, w), dtype=np.int32)

    path_segments_simple = [
        (np.array([seg_start.x, seg_start.y]), np.array([seg_end.x, seg_end.y]), arc_start, arc_end)
        for seg_start, seg_end, arc_start, arc_end in path_segments
    ]

    num_cores = cpu_count()
    chunk_size = max(1, h // num_cores)

    print(f"Using {num_cores} CPU cores with chunk size {chunk_size} rows")

    chunks = []
    for start_row in range(0, h, chunk_size):
        end_row = min(start_row + chunk_size, h)
        chunks.append((start_row, end_row, w, h, min_x, max_x, min_y, max_y, path_segments_simple))

    with Pool(processes=num_cores) as pool:
        total_chunks = len(chunks)
        print(f"Processing {total_chunks} chunks...")
        results = []
        for i, result in enumerate(pool.imap(process_row_chunk, chunks)):
            results.append(result)
            progress = (i + 1) / total_chunks * 100
            print(f"  Progress: {i + 1}/{total_chunks} chunks ({progress:.1f}%)")

    print("Assembling results from parallel workers...")
    for start_row, flow_chunk, distance_chunk, arc_length_chunk, segment_id_chunk in results:
        end_row = start_row + flow_chunk.shape[0]
        flow_field[start_row:end_row, :, :] = flow_chunk
        distance_field[start_row:end_row, :] = distance_chunk
        arc_length_field[start_row:end_row, :] = arc_length_chunk
        segment_id_field[start_row:end_row, :] = segment_id_chunk

    print(f"Distance range: {distance_field.min():.2f} to {distance_field.max():.2f}")
    print(f"Arc length range: {arc_length_field.min():.2f} to {arc_length_field.max():.2f}")

    print("Normalizing flow vectors...")
    magnitudes = np.sqrt(flow_field[:, :, 0]**2 + flow_field[:, :, 1]**2)
    magnitudes = np.maximum(magnitudes, 1e-8)

    flow_field[:, :, 0] /= magnitudes
    flow_field[:, :, 1] /= magnitudes

    # Defined unconditionally: the encoding step below needs it even when
    # blur_iterations == 0 (previously it was only set inside the blur branch).
    max_distance = 60.0

    if blur_iterations > 0:
        print(f"Applying distance-based blur (max {blur_iterations} iterations)...")

        original_flow = flow_field.copy()

        print("  Creating maximally blurred version...")
        blurred_flow = simple_blur(flow_field, iterations=blur_iterations)

        magnitudes = np.sqrt(blurred_flow[:, :, 0]**2 + blurred_flow[:, :, 1]**2)
        magnitudes = np.maximum(magnitudes, 1e-8)
        blurred_flow[:, :, 0] /= magnitudes
        blurred_flow[:, :, 1] /= magnitudes

        print("  Blending based on distance to path...")
        distance_normalized = np.clip(distance_field / max_distance, 0.0, 1.0)

        blend_factor = distance_normalized[:, :, np.newaxis]

        flow_field = original_flow * (1.0 - blend_factor) + blurred_flow * blend_factor

        magnitudes = np.sqrt(flow_field[:, :, 0]**2 + flow_field[:, :, 1]**2)
        magnitudes = np.maximum(magnitudes, 1e-8)
        flow_field[:, :, 0] /= magnitudes
        flow_field[:, :, 1] /= magnitudes

        print("  Distance-based blur complete!")

    print("Saving segment ID debug image...")
    num_segments = len(path_segments)
    segment_colors = np.random.RandomState(42).rand(num_segments, 3).astype(np.float32)

    segment_img = np.zeros((resolution, resolution, 4), dtype=np.float32)
    for y in range(resolution):
        for x in range(resolution):
            seg_id = segment_id_field[y, x]
            if 0 <= seg_id < num_segments:
                segment_img[y, x, 0:3] = segment_colors[seg_id]
                segment_img[y, x, 3] = 1.0

    segment_debug_img = bpy.data.images.new(
        "Segment_Debug", width=resolution, height=resolution, alpha=True, float_buffer=True
    )
    segment_debug_img.pixels[:] = segment_img.flatten()
    segment_debug_img.filepath_raw = str(
        Path(bpy.data.filepath).parent.parent / "textures" / "path_segment_debug.png"
    )
    segment_debug_img.file_format = 'PNG'
    segment_debug_img.save()
    bpy.data.images.remove(segment_debug_img)
    print(f"  Saved to textures/path_segment_debug.png ({num_segments} segments)")

    print("Saving distance field debug image...")
    distance_normalized = np.clip(distance_field / 50.0, 0.0, 1.0)
    distance_img = np.zeros((resolution, resolution, 4), dtype=np.float32)
    distance_img[:, :, 0] = distance_normalized
    distance_img[:, :, 1] = distance_normalized
    distance_img[:, :, 2] = distance_normalized
    distance_img[:, :, 3] = 1.0

    debug_img = bpy.data.images.new(
        "Distance_Debug", width=resolution, height=resolution, alpha=True, float_buffer=True
    )
    debug_img.pixels[:] = distance_img.flatten()
    debug_img.filepath_raw = str(
        Path(bpy.data.filepath).parent.parent / "textures" / "path_distance_debug.png"
    )
    debug_img.file_format = 'PNG'
    debug_img.save()
    bpy.data.images.remove(debug_img)
    print("  Saved to textures/path_distance_debug.png")

    print("Saving direction field debug image...")
    direction_img = np.zeros((resolution, resolution, 4), dtype=np.float32)

    cardinal_dirs = np.array([
        [0.0, 1.0],
        [0.707, 0.707],
        [1.0, 0.0],
        [0.707, -0.707],
        [0.0, -1.0],
        [-0.707, -0.707],
        [-1.0, 0.0],
        [-0.707, 0.707],
    ])

    cardinal_colors = np.array([
        [76, 120, 168],
        [242, 142, 43],
        [225, 87, 89],
        [118, 183, 178],
        [89, 161, 79],
        [237, 201, 72],
        [176, 122, 161],
        [255, 157, 167],
    ]) / 255.0

    flow_flat = flow_field.reshape(-1, 2)
    dots = flow_flat @ cardinal_dirs.T
    closest_dir_indices = np.argmax(dots, axis=1)
    direction_img[:, :, 0:3] = cardinal_colors[closest_dir_indices].reshape(resolution, resolution, 3)
    direction_img[:, :, 3] = 1.0

    print("Drawing compass in lower right corner...")
    compass_radius = resolution // 8
    compass_center_x = resolution - compass_radius - 20
    compass_center_y = compass_radius + 20

    for y in range(resolution):
        for x in range(resolution):
            dx = x - compass_center_x
            dy = y - compass_center_y
            dist = np.sqrt(dx * dx + dy * dy)

            if dist <= compass_radius:
                angle = np.arctan2(dy, dx)
                sector = int(((angle + np.pi / 2) % (2 * np.pi)) / (np.pi / 4)) % 8

                direction_img[y, x, 0:3] = cardinal_colors[sector]

    direction_debug_img = bpy.data.images.new(
        "Direction_Debug", width=resolution, height=resolution, alpha=True, float_buffer=True
    )
    direction_debug_img.pixels[:] = direction_img.flatten()
    direction_debug_img.filepath_raw = str(
        Path(bpy.data.filepath).parent.parent / "textures" / "path_direction_debug.png"
    )
    direction_debug_img.file_format = 'PNG'
    direction_debug_img.save()
    bpy.data.images.remove(direction_debug_img)
    print("  Saved to textures/path_direction_debug.png (8-color cardinal directions)")

    print("\n=== DEBUG: Sample flow field values ===")
    print(f"Terrain bounds: X=[{min_x}, {max_x}], Y=[{min_y}, {max_y}]")

    sample_pixels = [
        (resolution // 4, resolution // 4),
        (resolution // 2, resolution // 2),
        (3 * resolution // 4, 3 * resolution // 4),
    ]

    for py, px in sample_pixels:
        u = px / resolution
        v = py / resolution
        world_x = min_x + u * (max_x - min_x)
        world_y = min_y + v * (max_y - min_y)

        flow_x = flow_field[py, px, 0]
        flow_y = flow_field[py, px, 1]

        dots = [np.dot([flow_x, flow_y], cardinal_dirs[i]) for i in range(8)]
        best_dir = np.argmax(dots)
        dir_names = ["North", "NE", "East", "SE", "South", "SW", "West", "NW"]

        print(f"  Pixel [{px}, {py}] -> World [{world_x:.1f}, {world_y:.1f}]")
        print(f"    Flow: [{flow_x:.3f}, {flow_y:.3f}] -> {dir_names[best_dir]}")
        print(f"    Encoded: R={flow_x * 0.5 + 0.5:.3f}, G={flow_y * 0.5 + 0.5:.3f}")

    print("Encoding to RGBA texture...")
    flow_encoded_x = flow_field[:, :, 0] * 0.5 + 0.5
    flow_encoded_y = flow_field[:, :, 1] * 0.5 + 0.5

    distance_normalized = np.clip(distance_field / max_distance, 0.0, 1.0)

    arc_length_repeat = 100.0
    arc_length_normalized = np.fmod(arc_length_field, arc_length_repeat) / arc_length_repeat

    print(f"Distance encoding: 0.0 = on path, 1.0 = {max_distance}+ units away")
    print(f"Arc length encoding: repeating pattern every {arc_length_repeat} world units")

    flowmap = np.zeros((resolution, resolution, 4), dtype=np.float32)
    flowmap[:, :, 0] = flow_encoded_x
    flowmap[:, :, 1] = flow_encoded_y
    flowmap[:, :, 2] = distance_normalized
    flowmap[:, :, 3] = arc_length_normalized

    print("Creating Blender image...")
    output_img = bpy.data.images.new(
        name="Path_Flowmap",
        width=resolution,
        height=resolution,
        alpha=True,
        float_buffer=True
    )

    output_img.pixels[:] = flowmap.flatten()

    bm.free()

    return output_img


def save_flowmap(output_path, resolution=1024, blur_iterations=3, terrain_name="TerrainPlane", path_collection="Paths"):
    """
    Main function to generate and save a flowmap from paths in the current Blender file.

    Args:
        output_path: Path to save the flowmap (OpenEXR)
        resolution: Output texture resolution
        blur_iterations: Smoothing iterations
        terrain_name: Name of the terrain object
        path_collection: Name of the collection or search pattern for path curves
    """
    terrain_obj = bpy.data.objects.get(terrain_name)

    if not terrain_obj:
        print(f"Object '{terrain_name}' not found. Searching for terrain-like objects...")
        for obj in bpy.data.objects:
            if obj.type == 'MESH':
                print(f"  Found mesh: {obj.name}")
                if 'terrain' in obj.name.lower() or 'plane' in obj.name.lower():
                    terrain_obj = obj
                    print(f"  -> Using: {obj.name}")
                    break

    if not terrain_obj:
        raise ValueError(
            f"Object '{terrain_name}' not found and no terrain mesh detected. "
            f"Available objects: {[obj.name for obj in bpy.data.objects if obj.type == 'MESH']}"
        )

    print(f"Using terrain object: {terrain_obj.name}")

    if terrain_obj.type != 'MESH':
        raise ValueError(f"Object '{terrain_obj.name}' is not a mesh")

    path_curves = get_path_curves(path_collection)

    if not path_curves:
        raise ValueError(
            f"No path curves found. Create curves and add them to a '{path_collection}' collection "
            f"or name them with 'path' in the name. Available curves: "
            f"{[obj.name for obj in bpy.data.objects if obj.type == 'CURVE']}"
        )

    print(f"Found {len(path_curves)} path curves:")
    for curve in path_curves:
        print(f"  - {curve.name}")

    flowmap_img = generate_flowmap_from_paths(
        terrain_obj,
        path_curves,
        resolution=resolution,
        blur_iterations=blur_iterations
    )

    flowmap_img.filepath_raw = str(output_path)
    flowmap_img.file_format = 'OPEN_EXR'
    flowmap_img.save()

    bpy.data.images.remove(flowmap_img)

    print(f"\n{'=' * 60}")
    print("Flowmap generation complete!")
    print(f"Output: {output_path}")
    print(f"Resolution: {resolution}×{resolution}")
    print(f"Paths used: {len(path_curves)}")
    print("Format: OpenEXR (32-bit float)")
    print(f"{'=' * 60}")
    print("\nChannel encoding:")
    print("  R channel: X direction toward nearest path (0.5=none, <0.5=left, >0.5=right)")
    print("  G channel: Y direction toward nearest path (0.5=none, <0.5=down, >0.5=up)")
    print("  B channel: Distance to nearest path (0.0=on path, 1.0=far away)")
    print("  A channel: Arc length along path (0.0-1.0, repeating every 100 units)")
    print("\nTo use in shader:")
    print("  vec4 flowmap_sample = texture(flowmap, uv);")
    print("  vec2 direction = flowmap_sample.rg * 2.0 - 1.0;  // Direction to path")
    print("  float distance = flowmap_sample.b;               // Distance to path")
    print("  float arc_length = flowmap_sample.a * 100.0;     // Arc length in world units")


if __name__ == "__main__":
    project_root = Path(bpy.data.filepath).parent.parent
    output_path = project_root / "textures" / "terrain_flowmap.exr"

    output_path.parent.mkdir(parents=True, exist_ok=True)

    save_flowmap(
        output_path=output_path,
        resolution=1024,
        blur_iterations=5,
        terrain_name="TerrainPlane",
        path_collection="Paths"
    )
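The channel layout the script prints can be round-tripped with the same formulas. A standalone sketch, not part of the Blender script; the constants mirror the script's max_distance of 60 and its 100-unit arc-length repeat:

```python
import numpy as np

MAX_DISTANCE = 60.0   # mirrors the script's max_distance
ARC_REPEAT = 100.0    # mirrors the script's arc_length_repeat

def encode_texel(direction, distance, arc_length):
    """Pack a unit direction, a distance, and an arc length into one RGBA texel."""
    return np.array([
        direction[0] * 0.5 + 0.5,
        direction[1] * 0.5 + 0.5,
        np.clip(distance / MAX_DISTANCE, 0.0, 1.0),
        np.fmod(arc_length, ARC_REPEAT) / ARC_REPEAT,
    ], dtype=np.float32)

def decode_texel(rgba):
    """Invert the packing (arc length is only recoverable modulo ARC_REPEAT)."""
    direction = rgba[0:2] * 2.0 - 1.0
    return direction, float(rgba[2]) * MAX_DISTANCE, float(rgba[3]) * ARC_REPEAT

texel = encode_texel(np.array([0.6, -0.8]), 30.0, 150.0)
direction, distance, arc = decode_texel(texel)
assert np.allclose(direction, [0.6, -0.8], atol=1e-6)
assert abs(distance - 30.0) < 1e-4 and abs(arc - 50.0) < 1e-4  # 150 wraps to 50
```

Note that distances beyond MAX_DISTANCE clamp to 1.0 and arc lengths wrap, so both are lossy by design; the shader side only needs relative values.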
blender/scripts/generate_normal_map.py

@@ -0,0 +1,208 @@
|
||||
import bpy
|
||||
import bmesh
|
||||
import numpy as np
|
||||
from pathlib import Path
|
||||
from mathutils import Vector
|
||||
|
||||
def simple_blur(data, iterations=1):
|
||||
"""
|
||||
Simple 3x3 box blur for vector fields.
|
||||
"""
|
||||
result = data.copy()
|
||||
h, w, c = data.shape
|
||||
|
||||
for _ in range(iterations):
|
||||
blurred = np.zeros_like(result)
|
||||
for y in range(h):
|
||||
for x in range(w):
|
||||
count = 0
|
||||
total = np.zeros(c)
|
||||
|
||||
for dy in [-1, 0, 1]:
|
||||
for dx in [-1, 0, 1]:
|
||||
ny, nx = y + dy, x + dx
|
||||
if 0 <= ny < h and 0 <= nx < w:
|
||||
total += result[ny, nx]
|
||||
count += 1
|
||||
|
||||
blurred[y, x] = total / count if count > 0 else result[y, x]
|
||||
|
||||
result = blurred
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def generate_normal_map_from_terrain(terrain_obj, resolution=1024, blur_iterations=2):
    """
    Bake terrain surface normals to a texture for neighbor sampling.

    Args:
        terrain_obj: Blender mesh object (the terrain plane)
        resolution: Output texture resolution (square)
        blur_iterations: Number of smoothing passes
    """

    print(f"Generating normal map from terrain mesh: {terrain_obj.name}")
    print(f"Output resolution: {resolution}×{resolution}")

    mesh = terrain_obj.data
    bm = bmesh.new()
    bm.from_mesh(mesh)
    bm.verts.ensure_lookup_table()
    bm.faces.ensure_lookup_table()

    world_matrix = terrain_obj.matrix_world
    normal_matrix = world_matrix.to_3x3().inverted().transposed()

    verts_world = [world_matrix @ v.co for v in bm.verts]
    normals_world = [normal_matrix @ v.normal for v in bm.verts]

    if len(verts_world) == 0:
        raise ValueError("Terrain mesh has no vertices")

    xs = [v.x for v in verts_world]
    ys = [v.y for v in verts_world]

    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)

    print(f"Terrain bounds: X=[{min_x:.2f}, {max_x:.2f}], Y=[{min_y:.2f}, {max_y:.2f}]")

    normal_map = np.zeros((resolution, resolution, 3), dtype=np.float32)
    sample_counts = np.zeros((resolution, resolution), dtype=np.int32)

    print("Sampling vertex normals to grid...")
    for v_pos, v_normal in zip(verts_world, normals_world):
        u = (v_pos.x - min_x) / (max_x - min_x) if max_x > min_x else 0.5
        v_coord = (v_pos.y - min_y) / (max_y - min_y) if max_y > min_y else 0.5

        u = np.clip(u, 0.0, 0.9999)
        v_coord = np.clip(v_coord, 0.0, 0.9999)

        grid_x = int(u * resolution)
        grid_y = int(v_coord * resolution)

        normal_map[grid_y, grid_x, 0] += v_normal.x
        normal_map[grid_y, grid_x, 1] += v_normal.y
        normal_map[grid_y, grid_x, 2] += v_normal.z
        sample_counts[grid_y, grid_x] += 1

    mask = sample_counts > 0
    for c in range(3):
        normal_map[:, :, c][mask] /= sample_counts[mask]

    print("Interpolating missing values...")
    if not mask.all():
        filled = normal_map.copy()
        for _ in range(10):
            smoothed = simple_blur(filled, iterations=2)
            for c in range(3):
                filled[:, :, c] = np.where(mask, filled[:, :, c], smoothed[:, :, c])
        normal_map = filled

    print("Normalizing normal vectors...")
    magnitudes = np.sqrt(np.sum(normal_map**2, axis=2))
    magnitudes = np.maximum(magnitudes, 1e-8)
    for c in range(3):
        normal_map[:, :, c] /= magnitudes

    if blur_iterations > 0:
        print(f"Smoothing normal field ({blur_iterations} iterations)...")
        normal_map = simple_blur(normal_map, iterations=blur_iterations)
        magnitudes = np.sqrt(np.sum(normal_map**2, axis=2))
        magnitudes = np.maximum(magnitudes, 1e-8)
        for c in range(3):
            normal_map[:, :, c] /= magnitudes

    print("Encoding to RGB texture (world space normals)...")
    normal_encoded = normal_map * 0.5 + 0.5

    normal_rgba = np.zeros((resolution, resolution, 4), dtype=np.float32)
    normal_rgba[:, :, 0] = normal_encoded[:, :, 0]
    normal_rgba[:, :, 1] = normal_encoded[:, :, 1]
    normal_rgba[:, :, 2] = normal_encoded[:, :, 2]
    normal_rgba[:, :, 3] = 1.0

    normal_flat = normal_rgba.flatten()

    print(f"Creating Blender image...")
    output_img = bpy.data.images.new(
        name="Terrain_Normal_Map",
        width=resolution,
        height=resolution,
        alpha=True,
        float_buffer=True
    )

    output_img.pixels[:] = normal_flat

    bm.free()

    return output_img


def save_normal_map(output_path, resolution=1024, blur_iterations=2, terrain_name="TerrainPlane"):
    """
    Main function to generate and save normal map from terrain in current Blender file.

    Args:
        output_path: Path to save PNG normal map
        resolution: Output texture resolution
        blur_iterations: Smoothing iterations
        terrain_name: Name of terrain object
    """

    terrain_obj = bpy.data.objects.get(terrain_name)

    if not terrain_obj:
        print(f"Object '{terrain_name}' not found. Searching for terrain-like objects...")
        for obj in bpy.data.objects:
            if obj.type == 'MESH':
                print(f"  Found mesh: {obj.name}")
                if 'terrain' in obj.name.lower() or 'plane' in obj.name.lower():
                    terrain_obj = obj
                    print(f"  -> Using: {obj.name}")
                    break

        if not terrain_obj:
            raise ValueError(f"Object '{terrain_name}' not found and no terrain mesh detected. Available objects: {[obj.name for obj in bpy.data.objects if obj.type == 'MESH']}")

    print(f"Using terrain object: {terrain_obj.name}")

    if terrain_obj.type != 'MESH':
        raise ValueError(f"Object '{terrain_obj.name}' is not a mesh")

    normal_img = generate_normal_map_from_terrain(
        terrain_obj,
        resolution=resolution,
        blur_iterations=blur_iterations
    )

    normal_img.filepath_raw = str(output_path)
    normal_img.file_format = 'PNG'
    normal_img.save()

    bpy.data.images.remove(normal_img)

    print(f"\n{'='*60}")
    print(f"Normal map generation complete!")
    print(f"Output: {output_path}")
    print(f"Resolution: {resolution}×{resolution}")
    print(f"{'='*60}")
    print("\nNormal encoding:")
    print("  RGB channels: World-space normal (0.5, 0.5, 0.5) = (0, 0, 0)")
    print("  Decode in shader: normal = texture.rgb * 2.0 - 1.0;")


if __name__ == "__main__":
    project_root = Path(bpy.data.filepath).parent.parent
    output_path = project_root / "textures" / "terrain_normals.png"

    output_path.parent.mkdir(parents=True, exist_ok=True)

    save_normal_map(
        output_path=output_path,
        resolution=1024,
        blur_iterations=2,
        terrain_name="TerrainPlane"
    )
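As a side note on the script above: its final print statements document the texture encoding, where a world-space unit normal n is stored as n * 0.5 + 0.5 and decoded in the shader as texture.rgb * 2.0 - 1.0. A minimal sketch (not part of the exporter) verifying that this round-trip is lossless:

```python
import numpy as np

# Two illustrative unit normals: flat ground and a 45-degree slope.
normals = np.array([
    [0.0, 1.0, 0.0],
    [0.70710678, 0.70710678, 0.0],
], dtype=np.float32)

encoded = normals * 0.5 + 0.5   # what the PNG stores
decoded = encoded * 2.0 - 1.0   # what the shader reconstructs

# The encode/decode pair is exactly inverse, and encoded values
# stay inside the [0, 1] range a texture channel can hold.
assert np.allclose(decoded, normals, atol=1e-6)
assert float(encoded.min()) >= 0.0 and float(encoded.max()) <= 1.0
```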
BIN  blender/terrain.blend (new file)
BIN  blender/terrain.blend1 (new file)
BIN  gimp/dither_patterns.xcf (new file)
BIN  meshes/terrain.bin (new file)
BIN  meshes/terrain.glb (new file)
325  meshes/terrain.gltf (new file)
@@ -0,0 +1,325 @@
{
    "asset":{
        "generator":"Khronos glTF Blender I/O v5.0.21",
        "version":"2.0"
    },
    "extensionsUsed":[
        "EXT_mesh_gpu_instancing"
    ],
    "scene":0,
    "scenes":[
        {
            "name":"Scene",
            "nodes":[
                0,
                1,
                2
            ]
        }
    ],
    "nodes":[
        {
            "children":[
                3
            ],
            "mesh":1,
            "name":"TerrainPlane"
        },
        {
            "mesh":2,
            "name":"TreePrime",
            "translation":[
                0,
                1,
                0
            ]
        },
        {
            "name":"Main_Light_Path",
            "translation":[
                491.1999816894531,
                0,
                -66.48489379882812
            ]
        },
        {
            "extensions":{
                "EXT_mesh_gpu_instancing":{
                    "attributes":{
                        "TRANSLATION":11,
                        "ROTATION":12,
                        "SCALE":13
                    }
                }
            },
            "mesh":0,
            "name":"TerrainPlane.0"
        }
    ],
    "materials":[
        {
            "doubleSided":true,
            "emissiveFactor":[
                1,
                1,
                1
            ],
            "name":"heightmap",
            "pbrMetallicRoughness":{
                "baseColorFactor":[
                    0,
                    0,
                    0,
                    1
                ]
            }
        }
    ],
    "meshes":[
        {
            "name":"Cylinder",
            "primitives":[
                {
                    "attributes":{
                        "POSITION":0,
                        "NORMAL":1,
                        "TEXCOORD_0":2
                    },
                    "indices":3
                }
            ]
        },
        {
            "name":"Plane.001",
            "primitives":[
                {
                    "attributes":{
                        "POSITION":4,
                        "NORMAL":5,
                        "TEXCOORD_0":6
                    },
                    "indices":7,
                    "material":0
                }
            ]
        },
        {
            "name":"Cylinder",
            "primitives":[
                {
                    "attributes":{
                        "POSITION":8,
                        "NORMAL":9,
                        "TEXCOORD_0":10
                    },
                    "indices":3
                }
            ]
        }
    ],
    "accessors":[
        {
            "bufferView":0,
            "componentType":5126,
            "count":704,
            "max":[
                1,
                11.999963760375977,
                1
            ],
            "min":[
                -1,
                0,
                -1
            ],
            "type":"VEC3"
        },
        {
            "bufferView":1,
            "componentType":5126,
            "count":704,
            "type":"VEC3"
        },
        {
            "bufferView":2,
            "componentType":5126,
            "count":704,
            "type":"VEC2"
        },
        {
            "bufferView":3,
            "componentType":5123,
            "count":1908,
            "type":"SCALAR"
        },
        {
            "bufferView":4,
            "componentType":5126,
            "count":18196,
            "max":[
                500,
                122.76703643798828,
                500
            ],
            "min":[
                -500,
                -0.000225067138671875,
                -500
            ],
            "type":"VEC3"
        },
        {
            "bufferView":5,
            "componentType":5126,
            "count":18196,
            "type":"VEC3"
        },
        {
            "bufferView":6,
            "componentType":5126,
            "count":18196,
            "type":"VEC2"
        },
        {
            "bufferView":7,
            "componentType":5123,
            "count":61206,
            "type":"SCALAR"
        },
        {
            "bufferView":8,
            "componentType":5126,
            "count":704,
            "max":[
                1,
                11.999963760375977,
                1
            ],
            "min":[
                -1,
                0,
                -1
            ],
            "type":"VEC3"
        },
        {
            "bufferView":9,
            "componentType":5126,
            "count":704,
            "type":"VEC3"
        },
        {
            "bufferView":10,
            "componentType":5126,
            "count":704,
            "type":"VEC2"
        },
        {
            "bufferView":11,
            "componentType":5126,
            "count":5588,
            "type":"VEC3"
        },
        {
            "bufferView":12,
            "componentType":5126,
            "count":5588,
            "type":"VEC4"
        },
        {
            "bufferView":13,
            "componentType":5126,
            "count":5588,
            "type":"VEC3"
        }
    ],
    "bufferViews":[
        {
            "buffer":0,
            "byteLength":8448,
            "byteOffset":0,
            "target":34962
        },
        {
            "buffer":0,
            "byteLength":8448,
            "byteOffset":8448,
            "target":34962
        },
        {
            "buffer":0,
            "byteLength":5632,
            "byteOffset":16896,
            "target":34962
        },
        {
            "buffer":0,
            "byteLength":3816,
            "byteOffset":22528,
            "target":34963
        },
        {
            "buffer":0,
            "byteLength":218352,
            "byteOffset":26344,
            "target":34962
        },
        {
            "buffer":0,
            "byteLength":218352,
            "byteOffset":244696,
            "target":34962
        },
        {
            "buffer":0,
            "byteLength":145568,
            "byteOffset":463048,
            "target":34962
        },
        {
            "buffer":0,
            "byteLength":122412,
            "byteOffset":608616,
            "target":34963
        },
        {
            "buffer":0,
            "byteLength":8448,
            "byteOffset":731028,
            "target":34962
        },
        {
            "buffer":0,
            "byteLength":8448,
            "byteOffset":739476,
            "target":34962
        },
        {
            "buffer":0,
            "byteLength":5632,
            "byteOffset":747924,
            "target":34962
        },
        {
            "buffer":0,
            "byteLength":67056,
            "byteOffset":753556
        },
        {
            "buffer":0,
            "byteLength":89408,
            "byteOffset":820612
        },
        {
            "buffer":0,
            "byteLength":67056,
            "byteOffset":910020
        }
    ],
    "buffers":[
        {
            "byteLength":977076,
            "uri":"terrain.bin"
        }
    ]
}
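A quick sanity check on the export above (offsets and lengths copied verbatim from the bufferViews array): the 14 views are packed back-to-back with no gaps and exactly account for the 977076-byte terrain.bin buffer.

```python
# (byteOffset, byteLength) pairs from the bufferViews array in meshes/terrain.gltf.
views = [
    (0, 8448), (8448, 8448), (16896, 5632), (22528, 3816),
    (26344, 218352), (244696, 218352), (463048, 145568), (608616, 122412),
    (731028, 8448), (739476, 8448), (747924, 5632),
    (753556, 67056), (820612, 89408), (910020, 67056),
]

# Each view starts exactly where the previous one ends...
for (off_a, len_a), (off_b, _) in zip(views, views[1:]):
    assert off_a + len_a == off_b

# ...and together they cover the whole terrain.bin buffer.
total = views[-1][0] + views[-1][1]
assert total == 977076
print(f"{len(views)} bufferViews, {total} bytes, no gaps")
```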
299  shaders/shared.wgsl (new file)
@@ -0,0 +1,299 @@
struct VertexInput {
    @location(0) position: vec3<f32>,
    @location(1) normal: vec3<f32>,
    @location(2) uv: vec2<f32>,
    @location(3) instance_model_0: vec4<f32>,
    @location(4) instance_model_1: vec4<f32>,
    @location(5) instance_model_2: vec4<f32>,
    @location(6) instance_model_3: vec4<f32>,
}

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>,
    @location(0) world_position: vec3<f32>,
    @location(1) world_normal: vec3<f32>,
    @location(2) light_space_position: vec4<f32>,
}

struct Uniforms {
    model: mat4x4<f32>,
    view: mat4x4<f32>,
    projection: mat4x4<f32>,
    light_view_projection: mat4x4<f32>,
    camera_position: vec3<f32>,
    height_scale: f32,
    time: f32,
    shadow_bias: f32,
    light_direction: vec3<f32>,
}

@group(0) @binding(0)
var<uniform> uniforms: Uniforms;

@group(0) @binding(1)
var shadow_map: texture_depth_2d;

@group(0) @binding(2)
var shadow_sampler: sampler_comparison;

@group(0) @binding(3)
var dither_texture_array: texture_2d_array<f32>;

@group(0) @binding(4)
var dither_sampler: sampler;

@group(0) @binding(5)
var flowmap_texture: texture_2d<f32>;

@group(0) @binding(6)
var flowmap_sampler: sampler;

const PI: f32 = 3.14159265359;
const TERRAIN_BOUNDS: vec2<f32> = vec2<f32>(1000.0, 1000.0);
const LINE_THICKNESS: f32 = 0.1;
const OCTAVE_STEPS: f32 = 4.0;
const STROKE_LENGTH: f32 = 0.8;

fn hash2(p: vec2<f32>) -> f32 {
    let p3 = fract(vec3<f32>(p.x, p.y, p.x) * 0.13);
    let p3_dot = dot(p3, vec3<f32>(p3.y + 3.333, p3.z + 3.333, p3.x + 3.333));
    return fract((p3.x + p3.y) * p3_dot);
}

fn rand(p: vec2f) -> f32 {
    return fract(sin(dot(p, vec2f(12.9898, 78.233))) * 43758.5453);
}

struct StrokeData {
    world_pos_2d: vec2<f32>,
    tile_center: vec2<f32>,
    tile_size: f32,
    direction_to_light: vec2<f32>,
    distance_to_light: f32,
    perpendicular_to_light: vec2<f32>,
    rotation_t: f32,
    line_direction: vec2<f32>,
    perpendicular_to_line: vec2<f32>,
    octave_index: f32,
    offset: vec2<f32>,
    local_pos: vec2<f32>,
}

fn compute_perpendicular(dir: vec2<f32>) -> vec2<f32> {
    return normalize(vec2<f32>(-dir.y, dir.x));
}

fn compute_rotation_t(distance_to_light: f32) -> f32 {
    return pow(min(max(distance_to_light - 1.0 / OCTAVE_STEPS, 0.0) * OCTAVE_STEPS, 1.0), 2.0);
}

fn sample_shadow_map(light_space_pos: vec4<f32>) -> f32 {
    let proj_coords = light_space_pos.xyz / light_space_pos.w;
    let ndc_coords = proj_coords * vec3<f32>(0.5, -0.5, 1.0) + vec3<f32>(0.5, 0.5, 0.0);

    if ndc_coords.x < 0.0 || ndc_coords.x > 1.0 ||
        ndc_coords.y < 0.0 || ndc_coords.y > 1.0 ||
        ndc_coords.z < 0.0 || ndc_coords.z > 1.0 {
        return 1.0;
    }

    let depth = ndc_coords.z - uniforms.shadow_bias;
    let shadow = textureSampleCompare(shadow_map, shadow_sampler, ndc_coords.xy, depth);

    return shadow;
}

fn hatching_lighting(world_pos: vec3<f32>, tile_scale: f32, direction: vec2<f32>, distance: f32) -> f32 {
    let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
    let tile_size = 1.0 / tile_scale;
    let base_tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;

    var min_lighting = 1.0;

    for (var tile_y: i32 = -1; tile_y <= 1; tile_y++) {
        for (var tile_x: i32 = -1; tile_x <= 1; tile_x++) {
            let tile_center = base_tile_center + vec2<f32>(f32(tile_x), f32(tile_y)) * tile_size;

            let perpendicular_to_light = compute_perpendicular(direction);
            let t = compute_rotation_t(distance);
            let parallel = mix(perpendicular_to_light, direction, t / 2.0);
            let perpendicular = compute_perpendicular(parallel);

            let octave_index = round((1.0 - pow(distance, 2.0)) * OCTAVE_STEPS);

            let spacing = LINE_THICKNESS * 1.5;

            var max_offset: i32;
            switch i32(octave_index) {
                case default {
                    max_offset = 0;
                }
                case 3 {
                    max_offset = 3;
                }
                case 2 {
                    max_offset = 9;
                }
                case 1 {
                    max_offset = 9;
                }
            }
            for (var i: i32 = -max_offset; i <= max_offset; i++) {
                let random = rand(tile_center + vec2<f32>(f32(i), f32(-i)));
                var chance: f32;
                switch i32(octave_index) {
                    case 1, default: {
                        chance = 1.0;
                    }
                    case 2: {
                        chance = 0.5;
                    }
                    case 3: {
                        chance = 0.8;
                    }
                }
                if random > chance {
                    continue;
                }
                let offset = perpendicular * f32(i) * tile_size / f32(max_offset);
                let local_pos = world_pos_2d - tile_center - offset;

                let stroke_data = StrokeData(
                    world_pos_2d,
                    tile_center,
                    tile_size,
                    direction,
                    distance,
                    perpendicular_to_light,
                    t,
                    parallel,
                    perpendicular,
                    octave_index,
                    offset,
                    local_pos,
                );

                let lighting = line_stroke_lighting(stroke_data);
                min_lighting = min(min_lighting, lighting);
            }
        }
    }

    return min_lighting;
}

fn point_lighting(world_pos: vec3<f32>, point_light: vec3<f32>, tile_scale: f32) -> f32 {
    let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
    let light_pos_2d = vec2<f32>(point_light.x, point_light.z);
    let tile_size = 1.0 / tile_scale;
    let tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;

    let direction_to_point = light_pos_2d - tile_center;
    let distance_to_point = min(length(direction_to_point) / 60.0, 1.0);
    let direction_normalized = normalize(direction_to_point);

    return hatching_lighting(world_pos, tile_scale, direction_normalized, distance_to_point);
}

fn point_lighting_with_shadow(world_pos: vec3<f32>, normal: vec3<f32>, point_light: vec3<f32>, tile_scale: f32, shadow: f32) -> f32 {
    let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
    let light_pos_2d = vec2<f32>(point_light.x, point_light.z);
    let tile_size = 1.0 / tile_scale;
    let tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;

    let direction_to_point_3d = normalize(point_light - world_pos);
    let diffuse = max(0.0, dot(normalize(normal), direction_to_point_3d));

    let direction_to_point = light_pos_2d - tile_center;
    let distance_to_point = min(length(direction_to_point) / 60.0, 1.0);
    let direction_normalized = normalize(direction_to_point);

    let lighting_intensity = shadow * diffuse;
    let darkness = 1.0 - lighting_intensity;
    let combined_distance = min(distance_to_point + darkness * 0.5, 1.0);

    return hatching_lighting(world_pos, tile_scale, direction_normalized, combined_distance);
}

fn line_stroke_lighting(data: StrokeData) -> f32 {
    let octave_normalized = data.octave_index / OCTAVE_STEPS;

    if data.octave_index > 3.0 {
        return 1.0;
    } else if data.octave_index < 1.0 {
        return 0.0;
    }

    let noise = hash2(data.tile_center + data.offset) * 2.0 - 1.0;

    var noise_at_octave = noise;
    var jitter: f32;

    switch i32(data.octave_index) {
        case default {
            noise_at_octave = noise;
            jitter = 1.0;
        }
        case 2 {
            noise_at_octave = noise / 10.0;
            jitter = 1.0;
        }
        case 3 {
            noise_at_octave = noise / 20.0;
            jitter = 0.5;
        }
    }
    let line = mix(data.perpendicular_to_light, data.direction_to_light, data.rotation_t / (2.0 + noise_at_octave));
    let perpendicular_to_line = compute_perpendicular(line);

    let parallel_coord = dot(data.local_pos, line);
    let perpendicular_coord = dot(data.local_pos, perpendicular_to_line);

    let line_half_width = LINE_THICKNESS * (1.0 - octave_normalized * 0.5);
    let straight_section_half_length = max(0.0, data.tile_size * 0.4 - line_half_width);

    let parallel_jitter = (rand(data.tile_center + data.offset * 123.456) * 2.0 - 1.0) * data.tile_size * jitter;
    let jittered_parallel_coord = parallel_coord - parallel_jitter;

    let overhang = max(0.0, abs(jittered_parallel_coord) - straight_section_half_length);
    let effective_distance = sqrt(overhang * overhang + perpendicular_coord * perpendicular_coord);

    return step(line_half_width, effective_distance);
}

fn flowmap_path_lighting(world_pos: vec3<f32>, tile_scale: f32) -> f32 {
    let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
    let tile_size = 1.0 / tile_scale;
    let tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;

    let flowmap_uv = (tile_center + TERRAIN_BOUNDS * 0.5) / TERRAIN_BOUNDS;
    let flowmap_sample = textureSampleLevel(flowmap_texture, flowmap_sampler, flowmap_uv, 0.0);
    let x = flowmap_sample.r * 2.0 - 1.0;
    let y = flowmap_sample.g * 2.0 - 1.0;
    let direction_to_path = normalize(vec2<f32>(x, y));
    let distance_to_path = flowmap_sample.b;

    return hatching_lighting(world_pos, tile_scale, direction_to_path, distance_to_path);
}

fn flowmap_path_lighting_with_shadow(world_pos: vec3<f32>, normal: vec3<f32>, tile_scale: f32, shadow: f32) -> f32 {
    let world_pos_2d = vec2<f32>(world_pos.x, world_pos.z);
    let tile_size = 1.0 / tile_scale;
    let tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;

    let flowmap_uv = (tile_center + TERRAIN_BOUNDS * 0.5) / TERRAIN_BOUNDS;
    let flowmap_sample = textureSampleLevel(flowmap_texture, flowmap_sampler, flowmap_uv, 0.0);
    let x = flowmap_sample.r * 2.0 - 1.0;
    let y = flowmap_sample.g * 2.0 - 1.0;
    let direction_to_path = normalize(vec2<f32>(x, y));
    let distance_to_path = flowmap_sample.b;

    let light_dir_3d = normalize(vec3<f32>(-x, 100.0, -y));
    let diffuse = max(0.0, dot(normalize(normal), light_dir_3d));

    let lighting_intensity = diffuse * shadow;
    let darkness = 1.0 - lighting_intensity;
    let combined_distance = min(distance_to_path + darkness * 0.5, 1.0);

    return hatching_lighting(world_pos, tile_scale, direction_to_path, combined_distance);
}
@@ -1,97 +1,38 @@
struct VertexInput {
    @location(0) position: vec3<f32>,
    @location(1) normal: vec3<f32>,
    @location(2) uv: vec2<f32>,
}

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>,
    @location(0) world_position: vec3<f32>,
    @location(1) world_normal: vec3<f32>,
}

struct Uniforms {
    model: mat4x4<f32>,
    view: mat4x4<f32>,
    projection: mat4x4<f32>,
}

@group(0) @binding(0)
var<uniform> uniforms: Uniforms;

@vertex
fn vs_main(input: VertexInput) -> VertexOutput {
    var output: VertexOutput;

    let world_pos = uniforms.model * vec4<f32>(input.position, 1.0);
    let instance_model = mat4x4<f32>(
        input.instance_model_0,
        input.instance_model_1,
        input.instance_model_2,
        input.instance_model_3
    );

    let world_pos = instance_model * vec4<f32>(input.position, 1.0);
    output.world_position = world_pos.xyz;
    output.world_normal = (uniforms.model * vec4<f32>(input.normal, 0.0)).xyz;
    output.clip_position = uniforms.projection * uniforms.view * world_pos;

    let normal_matrix = mat3x3<f32>(
        instance_model[0].xyz,
        instance_model[1].xyz,
        instance_model[2].xyz
    );
    output.world_normal = normalize(normal_matrix * input.normal);

    output.light_space_position = uniforms.light_view_projection * world_pos;

    return output;
}

fn bayer_2x2_dither(value: f32, screen_pos: vec2<f32>) -> f32 {
    let pattern = array<f32, 4>(
        0.0/4.0, 2.0/4.0,
        3.0/4.0, 1.0/4.0
    );
    let x = i32(screen_pos.x) % 2;
    let y = i32(screen_pos.y) % 2;
    let index = y * 2 + x;
    return select(0.0, 1.0, value > pattern[index]);
}

fn bayer_4x4_dither(value: f32, screen_pos: vec2<f32>) -> f32 {
    let pattern = array<f32, 16>(
        0.0/16.0, 8.0/16.0, 2.0/16.0, 10.0/16.0,
        12.0/16.0, 4.0/16.0, 14.0/16.0, 6.0/16.0,
        3.0/16.0, 11.0/16.0, 1.0/16.0, 9.0/16.0,
        15.0/16.0, 7.0/16.0, 13.0/16.0, 5.0/16.0
    );
    let x = i32(screen_pos.x) % 4;
    let y = i32(screen_pos.y) % 4;
    let index = y * 4 + x;
    return select(0.0, 1.0, value > pattern[index]);
}

fn bayer_8x8_dither(value: f32, screen_pos: vec2<f32>) -> f32 {
    let pattern = array<f32, 64>(
        0.0/64.0, 32.0/64.0, 8.0/64.0, 40.0/64.0, 2.0/64.0, 34.0/64.0, 10.0/64.0, 42.0/64.0,
        48.0/64.0, 16.0/64.0, 56.0/64.0, 24.0/64.0, 50.0/64.0, 18.0/64.0, 58.0/64.0, 26.0/64.0,
        12.0/64.0, 44.0/64.0, 4.0/64.0, 36.0/64.0, 14.0/64.0, 46.0/64.0, 6.0/64.0, 38.0/64.0,
        60.0/64.0, 28.0/64.0, 52.0/64.0, 20.0/64.0, 62.0/64.0, 30.0/64.0, 54.0/64.0, 22.0/64.0,
        3.0/64.0, 35.0/64.0, 11.0/64.0, 43.0/64.0, 1.0/64.0, 33.0/64.0, 9.0/64.0, 41.0/64.0,
        51.0/64.0, 19.0/64.0, 59.0/64.0, 27.0/64.0, 49.0/64.0, 17.0/64.0, 57.0/64.0, 25.0/64.0,
        15.0/64.0, 47.0/64.0, 7.0/64.0, 39.0/64.0, 13.0/64.0, 45.0/64.0, 5.0/64.0, 37.0/64.0,
        63.0/64.0, 31.0/64.0, 55.0/64.0, 23.0/64.0, 61.0/64.0, 29.0/64.0, 53.0/64.0, 21.0/64.0
    );
    let x = i32(screen_pos.x) % 8;
    let y = i32(screen_pos.y) % 8;
    let index = y * 8 + x;
    return select(0.0, 1.0, value > pattern[index]);
}


@fragment
fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
    let light_pos = vec3<f32>(5.0, 5.0, 5.0);
    let light_color = vec3<f32>(1.0, 1.0, 1.0);
    let object_color = vec3<f32>(1.0, 1.0, 1.0);
    let shadow = sample_shadow_map(input.light_space_position);

    let ambient_strength = 0.3;
    let ambient = ambient_strength * light_color;
    let tile_scale = 1.0;
    let flowmap_strokes = flowmap_path_lighting_with_shadow(input.world_position, input.world_normal, tile_scale, shadow);
    let point_strokes = point_lighting_with_shadow(input.world_position, input.world_normal, vec3<f32>(0.0, 100.0, 0.0), tile_scale, shadow);
    let brightness = max(flowmap_strokes, point_strokes);

    let norm = normalize(input.world_normal);
    let light_dir = normalize(vec3<f32>(1.0, -1.0, 1.0));
    let diff = max(dot(norm, light_dir), 0.0);
    let diffuse = diff * light_color;

    let result = (ambient + diffuse) * object_color;

    let dithered_r = bayer_8x8_dither(result.r, input.clip_position.xy);
    let dithered_g = bayer_8x8_dither(result.g, input.clip_position.xy);
    let dithered_b = bayer_8x8_dither(result.b, input.clip_position.xy);

    return vec4<f32>(dithered_r, dithered_g, dithered_b, 1.0);
    return vec4<f32>(brightness, brightness, brightness, 1.0);
}

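The bayer_*_dither functions above implement classic ordered dithering: each pixel's brightness is thresholded against a Bayer matrix tiled across the screen, so flat grey becomes the project's 1-bit checker-like pattern. A Python sketch (not part of the repo) of the same 4x4 logic:

```python
# Python sketch of the shader's bayer_4x4_dither: threshold each pixel's
# brightness against the 4x4 Bayer matrix tiled across the screen.
BAYER_4X4 = [
     0/16,  8/16,  2/16, 10/16,
    12/16,  4/16, 14/16,  6/16,
     3/16, 11/16,  1/16,  9/16,
    15/16,  7/16, 13/16,  5/16,
]

def bayer_4x4_dither(value, x, y):
    # Same indexing as the WGSL version: wrap screen coords into the 4x4 tile.
    threshold = BAYER_4X4[(y % 4) * 4 + (x % 4)]
    return 1.0 if value > threshold else 0.0

# A mid-grey input lights exactly half of the 16 tile cells,
# which is what makes the average brightness come out right.
ones = sum(bayer_4x4_dither(0.5, x, y) for y in range(4) for x in range(4))
```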
@@ -1,120 +1,70 @@
|
||||
struct VertexInput {
|
||||
@location(0) position: vec3<f32>,
|
||||
@location(1) normal: vec3<f32>,
|
||||
@location(2) uv: vec2<f32>,
|
||||
}
|
||||
|
||||
struct VertexOutput {
|
||||
@builtin(position) clip_position: vec4<f32>,
|
||||
@location(0) world_position: vec3<f32>,
|
||||
@location(1) world_normal: vec3<f32>,
|
||||
@location(2) uv: vec2<f32>,
|
||||
}
|
||||
|
||||
struct Uniforms {
|
||||
model: mat4x4<f32>,
|
||||
view: mat4x4<f32>,
|
||||
projection: mat4x4<f32>,
|
||||
height_scale: f32,
|
||||
time: f32,
|
||||
}
|
||||
|
||||
@group(0) @binding(0)
|
||||
var<uniform> uniforms: Uniforms;
|
||||
|
||||
@group(0) @binding(1)
|
||||
var height_texture: texture_2d<f32>;
|
||||
|
||||
@group(0) @binding(2)
|
||||
var height_sampler: sampler;
|
||||
|
||||
@vertex
|
||||
fn vs_main(input: VertexInput) -> VertexOutput {
|
||||
var output: VertexOutput;
|
||||
|
||||
let height = textureSampleLevel(height_texture, height_sampler, input.uv, 0.0).r;
|
||||
let instance_model = mat4x4<f32>(
|
||||
input.instance_model_0,
|
||||
input.instance_model_1,
|
||||
input.instance_model_2,
|
||||
input.instance_model_3
|
||||
);
|
||||
|
||||
var displaced_pos = input.position;
|
||||
displaced_pos.y += height * uniforms.height_scale;
|
||||
|
||||
let texel_size = vec2<f32>(1.0 / 512.0, 1.0 / 512.0);
|
||||
let height_left = textureSampleLevel(height_texture, height_sampler, input.uv - vec2<f32>(texel_size.x, 0.0), 0.0).r;
|
||||
let height_right = textureSampleLevel(height_texture, height_sampler, input.uv + vec2<f32>(texel_size.x, 0.0), 0.0).r;
|
||||
let height_down = textureSampleLevel(height_texture, height_sampler, input.uv - vec2<f32>(0.0, texel_size.y), 0.0).r;
|
||||
let height_up = textureSampleLevel(height_texture, height_sampler, input.uv + vec2<f32>(0.0, texel_size.y), 0.0).r;
|
||||
|
||||
let dh_dx = (height_right - height_left) * uniforms.height_scale;
|
||||
let dh_dz = (height_up - height_down) * uniforms.height_scale;
|
||||
|
||||
let normal = normalize(vec3<f32>(-dh_dx, 1.0, -dh_dz));
|
||||
|
||||
let world_pos = uniforms.model * vec4<f32>(displaced_pos, 1.0);
|
||||
let world_pos = instance_model * vec4<f32>(input.position, 1.0);
|
||||
output.world_position = world_pos.xyz;
|
||||
output.world_normal = normalize((uniforms.model * vec4<f32>(normal, 0.0)).xyz);
|
||||
output.clip_position = uniforms.projection * uniforms.view * world_pos;
|
||||
output.uv = input.uv;
|
||||
|
||||
let normal_matrix = mat3x3<f32>(
|
||||
instance_model[0].xyz,
|
||||
instance_model[1].xyz,
|
||||
instance_model[2].xyz
|
||||
);
|
||||
output.world_normal = normalize(normal_matrix * input.normal);
|
||||
|
||||
output.light_space_position = uniforms.light_view_projection * world_pos;
|
||||
|
||||
return output;
|
||||
}
|
||||
|
||||
fn hash(p: vec2<f32>) -> f32 {
|
||||
var p3 = fract(vec3<f32>(p.xyx) * 0.1031);
|
||||
p3 += dot(p3, p3.yzx + 33.33);
|
||||
return fract((p3.x + p3.y) * p3.z);
|
||||
}
|
||||
|
||||
fn should_glitter(screen_pos: vec2<f32>, time: f32) -> bool {
|
||||
let pixel_pos = floor(screen_pos);
|
||||
let h = hash(pixel_pos);
|
||||
let time_offset = h * 6283.18;
|
||||
let sparkle_rate = 0.2;
|
||||
let sparkle = sin(time * sparkle_rate + time_offset) * 0.5 + 0.5;
|
||||
let threshold = 0.95;
|
||||
return sparkle > threshold && h > 0.95;
|
||||
}
|
||||
|
||||
fn bayer_8x8_dither(value: f32, screen_pos: vec2<f32>) -> f32 {
|
||||
let pattern = array<f32, 64>(
|
||||
0.0/64.0, 32.0/64.0, 8.0/64.0, 40.0/64.0, 2.0/64.0, 34.0/64.0, 10.0/64.0, 42.0/64.0,
|
||||
48.0/64.0, 16.0/64.0, 56.0/64.0, 24.0/64.0, 50.0/64.0, 18.0/64.0, 58.0/64.0, 26.0/64.0,
|
||||
12.0/64.0, 44.0/64.0, 4.0/64.0, 36.0/64.0, 14.0/64.0, 46.0/64.0, 6.0/64.0, 38.0/64.0,
        60.0/64.0, 28.0/64.0, 52.0/64.0, 20.0/64.0, 62.0/64.0, 30.0/64.0, 54.0/64.0, 22.0/64.0,
        3.0/64.0, 35.0/64.0, 11.0/64.0, 43.0/64.0, 1.0/64.0, 33.0/64.0, 9.0/64.0, 41.0/64.0,
        51.0/64.0, 19.0/64.0, 59.0/64.0, 27.0/64.0, 49.0/64.0, 17.0/64.0, 57.0/64.0, 25.0/64.0,
        15.0/64.0, 47.0/64.0, 7.0/64.0, 39.0/64.0, 13.0/64.0, 45.0/64.0, 5.0/64.0, 37.0/64.0,
        63.0/64.0, 31.0/64.0, 55.0/64.0, 23.0/64.0, 61.0/64.0, 29.0/64.0, 53.0/64.0, 21.0/64.0
    );

    let x = i32(screen_pos.x) % 8;
    let y = i32(screen_pos.y) % 8;
    let index = y * 8 + x;
    return select(0.2, 1.0, value > pattern[index]);
}

@fragment
fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
    let light_dir = normalize(vec3<f32>(-0.5, -1.0, -0.5));
    let light_color = vec3<f32>(1.0, 1.0, 1.0);
    let object_color = vec3<f32>(1.0, 1.0, 1.0);
    let debug = 0u;

    let ambient_strength = 0.2;
    let ambient = ambient_strength * light_color;

    let norm = normalize(input.world_normal);
    let diff = max(dot(norm, -light_dir), 0.0);
    let diffuse = diff * light_color;

    let result = (ambient + diffuse) * object_color;

    var dithered_r = bayer_8x8_dither(result.r, input.clip_position.xy);
    var dithered_g = bayer_8x8_dither(result.g, input.clip_position.xy);
    var dithered_b = bayer_8x8_dither(result.b, input.clip_position.xy);

    let is_grey_or_black = dithered_r == 0.0 || (dithered_r == dithered_g && dithered_g == dithered_b);
    if (is_grey_or_black && should_glitter(input.clip_position.xy, uniforms.time)) {
        dithered_r = 1.0;
        dithered_g = 1.0;
        dithered_b = 1.0;
    }

    if debug == 1u {
        let flowmap_uv = (vec2<f32>(input.world_position.x, input.world_position.z) + TERRAIN_BOUNDS * 0.5) / TERRAIN_BOUNDS;
        let flowmap_sample = textureSampleLevel(flowmap_texture, flowmap_sampler, flowmap_uv, 0.0).rgb;
        return vec4<f32>(flowmap_sample, 1.0);
    }

    return vec4<f32>(dithered_r, dithered_g, dithered_b, 1.0);

    if debug == 2u {
        let world_pos_2d = vec2<f32>(input.world_position.x, input.world_position.z);
        let tile_size = 10.0;
        let tile_center = floor(world_pos_2d / tile_size) * tile_size + tile_size * 0.5;
        let flowmap_uv = (tile_center + TERRAIN_BOUNDS * 0.5) / TERRAIN_BOUNDS;

        let flowmap_sample = textureSampleLevel(flowmap_texture, flowmap_sampler, flowmap_uv, 0.0).rgb;
        let x = flowmap_sample.r * 2.0 - 1.0;
        let y = flowmap_sample.g * 2.0 - 1.0;
        let direction_to_path = normalize(vec2<f32>(x, y));
        let perpendicular_to_path = normalize(vec2<f32>(-direction_to_path.y, direction_to_path.x));

        let local_pos = world_pos_2d - tile_center;

        let arrow_scale = 0.05;
        let parallel_coord = dot(local_pos, direction_to_path) * arrow_scale;
        let perpendicular_coord = dot(local_pos, perpendicular_to_path) * arrow_scale;

        let to_path = step(0.95, fract(parallel_coord));
        let to_perp = step(0.95, fract(perpendicular_coord));

        return vec4<f32>(to_perp, to_perp, to_perp, 1.0);
    }

    let shadow = sample_shadow_map(input.light_space_position);

    let tile_scale = 1.0;
    let flowmap_strokes = flowmap_path_lighting_with_shadow(input.world_position, input.world_normal, tile_scale, shadow);
    let point_strokes = point_lighting_with_shadow(input.world_position, input.world_normal, vec3<f32>(0.0, 100.0, 0.0), tile_scale, shadow);
    let brightness = max(flowmap_strokes, point_strokes);

    return vec4<f32>(brightness, brightness, brightness, 1.0);
}

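The threshold table above consists of rows of the standard 8×8 Bayer index matrix divided by 64. A standalone Rust sketch (not repo code) that generates the matrix with the classic recursive rule and spot-checks it against the rows visible above:

```rust
/// Build a 2^k x 2^k Bayer index matrix via the recursive rule:
/// B(2n) = [[4*B(n)+0, 4*B(n)+2], [4*B(n)+3, 4*B(n)+1]]
fn bayer(n: usize) -> Vec<Vec<u32>> {
    if n == 1 {
        return vec![vec![0]];
    }
    let half = bayer(n / 2);
    let mut m = vec![vec![0u32; n]; n];
    for y in 0..n / 2 {
        for x in 0..n / 2 {
            let b = 4 * half[y][x];
            m[y][x] = b; // top-left: +0
            m[y][x + n / 2] = b + 2; // top-right: +2
            m[y + n / 2][x] = b + 3; // bottom-left: +3
            m[y + n / 2][x + n / 2] = b + 1; // bottom-right: +1
        }
    }
    m
}

fn main() {
    let m = bayer(8);
    // The shader's pattern[index] equals m[y][x] / 64.0.
    assert_eq!(m[3], vec![60, 28, 52, 20, 62, 30, 54, 22]);
    assert_eq!(m[7], vec![63, 31, 55, 23, 61, 29, 53, 21]);
    println!("{:?}", m);
}
```

Dividing each index by 64 spreads the thresholds uniformly over [0, 1), which is what makes the ordered dither approximate intermediate brightness levels.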
@@ -8,16 +8,20 @@ pub struct CameraUniforms
    pub model: [[f32; 4]; 4],
    pub view: [[f32; 4]; 4],
    pub projection: [[f32; 4]; 4],
    pub light_direction: [f32; 3],
    pub _padding: f32,
}

impl CameraUniforms
{
    pub fn new(model: Mat4, view: Mat4, projection: Mat4) -> Self
    pub fn new(model: Mat4, view: Mat4, projection: Mat4, light_direction: Vec3) -> Self
    {
        Self {
            model: model.to_cols_array_2d(),
            view: view.to_cols_array_2d(),
            projection: projection.to_cols_array_2d(),
            light_direction: light_direction.to_array(),
            _padding: 0.0,
        }
    }
}

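The `_padding: f32` field matters for the WGSL side: a `vec3` uniform member occupies a 16-byte slot, so the Rust struct pads `light_direction` to keep the two layouts in step. A standalone mirror-struct sketch (not repo code) checking the arithmetic — three mat4s at 64 bytes each plus one padded vec3 slot gives 208 bytes:

```rust
use std::mem::size_of;

// Mirror of the CameraUniforms layout above (sketch only).
#[repr(C)]
struct CameraUniformsMirror {
    model: [[f32; 4]; 4],      // 64 bytes
    view: [[f32; 4]; 4],       // 64 bytes
    projection: [[f32; 4]; 4], // 64 bytes
    light_direction: [f32; 3], // 12 bytes
    _padding: f32,             // 4 bytes: rounds the vec3 slot up to 16
}

fn main() {
    // 3 * 64 + 16 = 208, a multiple of 16 as WGSL uniform layout expects.
    assert_eq!(size_of::<CameraUniformsMirror>(), 208);
    assert_eq!(size_of::<CameraUniformsMirror>() % 16, 0);
    println!("size = {}", size_of::<CameraUniformsMirror>());
}
```

Without the trailing `f32`, the Rust struct would be 204 bytes and the GPU would read past the buffer for any field packed after the vec3.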
@@ -20,7 +20,7 @@ impl CameraComponent
    fov: 45.0_f32.to_radians(),
    aspect,
    near: 0.1,
    far: 100.0,
    far: 2000.0,
    yaw: -135.0_f32.to_radians(),
    pitch: -30.0_f32.to_radians(),
    is_active: true,

@@ -8,4 +8,6 @@ pub struct MeshComponent
{
    pub mesh: Rc<Mesh>,
    pub pipeline: Pipeline,
    pub instance_buffer: Option<wgpu::Buffer>,
    pub num_instances: u32,
}

@@ -6,10 +6,12 @@ use nalgebra::DMatrix;
use rapier3d::parry::shape::HeightField;

use crate::{
    mesh::{Mesh, Vertex},
    mesh::{InstanceRaw, Mesh, Vertex},
    physics::PhysicsManager,
    render::{self, DrawCall, Pipeline},
};
use bytemuck::cast_slice;
use wgpu::util::DeviceExt;

thread_local! {
    static WIREFRAME_BOX: OnceCell<Rc<Mesh>> = OnceCell::new();
@@ -122,12 +124,26 @@ pub fn render_collider_debug() -> Vec<DrawCall>
            let translation = Mat4::from_translation(center);
            let model = translation * scale;

            let instance_data = InstanceRaw {
                model: model.to_cols_array_2d(),
            };

            let instance_buffer = render::with_device(|device| {
                device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
                    label: Some("Debug Instance Buffer"),
                    contents: cast_slice(&[instance_data]),
                    usage: wgpu::BufferUsages::VERTEX,
                })
            });

            draw_calls.push(DrawCall {
                vertex_buffer: wireframe_box.vertex_buffer.clone(),
                index_buffer: wireframe_box.index_buffer.clone(),
                num_indices: wireframe_box.num_indices,
                model,
                pipeline: Pipeline::Wireframe,
                instance_buffer: Some(instance_buffer),
                num_instances: 1,
            });
        }
    });
@@ -135,12 +151,26 @@ pub fn render_collider_debug() -> Vec<DrawCall>
    DEBUG_HEIGHTFIELD.with(|cell| {
        if let Some(Some(heightfield_mesh)) = cell.get()
        {
            let instance_data = InstanceRaw {
                model: Mat4::IDENTITY.to_cols_array_2d(),
            };

            let instance_buffer = render::with_device(|device| {
                device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
                    label: Some("Heightfield Debug Instance Buffer"),
                    contents: cast_slice(&[instance_data]),
                    usage: wgpu::BufferUsages::VERTEX,
                })
            });

            draw_calls.push(DrawCall {
                vertex_buffer: heightfield_mesh.vertex_buffer.clone(),
                index_buffer: heightfield_mesh.index_buffer.clone(),
                num_indices: heightfield_mesh.num_indices,
                model: Mat4::IDENTITY,
                pipeline: Pipeline::Wireframe,
                instance_buffer: Some(instance_buffer),
                num_instances: 1,
            });
        }
    });

43
src/main.rs
@@ -15,6 +15,7 @@ mod shader;
mod state;
mod systems;
mod terrain;
mod texture_loader;
mod utility;
mod world;

@@ -35,7 +36,7 @@ use crate::systems::{
    player_input_system, render_system, start_camera_following, state_machine_physics_system,
    state_machine_system, stop_camera_following,
};
use crate::terrain::Terrain;
use crate::terrain::{Terrain, TerrainConfig};
use crate::utility::time::Time;

fn main() -> Result<(), Box<dyn std::error::Error>>
@@ -53,29 +54,21 @@ fn main() -> Result<(), Box<dyn std::error::Error>>
    let renderer = pollster::block_on(Renderer::new(&window, 1))?;
    render::init(renderer);

    let terrain_data = render::with_device(|device| {
        render::with_queue(|queue| {
            let height_map =
                heightmap::load_exr_heightmap(device, queue, "textures/height_map_x0_y0.exr");
            let (height_texture, height_view, height_sampler) = height_map.unwrap();
            render::TerrainData {
                height_texture,
                height_view,
                height_sampler,
            }
        })
    });

    render::set_terrain_data(terrain_data);
    let terrain_config = TerrainConfig::default();

    let mut world = World::new();
    let player_entity = Player::spawn(&mut world);
    let _terrain_entity = Terrain::spawn(&mut world, "textures/height_map_x0_y0.exr", 10.0)?;
    let _terrain_entity = Terrain::spawn(&mut world, &terrain_config)?;

    render::set_terrain_data();

    let mut noclip_mode = true;

    let camera_entity = spawn_camera(&mut world, player_entity);
    start_camera_following(&mut world, camera_entity);

    let mut noclip_mode = false;
    if noclip_mode == false
    {
        start_camera_following(&mut world, camera_entity);
    }

    let mut event_pump = sdl_context.event_pump()?;
    let mut input_state = InputState::new();
@@ -142,7 +135,6 @@ fn main() -> Result<(), Box<dyn std::error::Error>>
        player_input_system(&mut world, &input_state);
    }

    physics_accumulator += delta;

    while physics_accumulator >= FIXED_TIMESTEP
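The accumulator above is the standard fixed-timestep pattern: each frame's delta is banked, and physics runs in whole `FIXED_TIMESTEP` slices while the remainder carries over to the next frame. A minimal standalone sketch (the 60 Hz step value is an assumption, not taken from the repo):

```rust
const FIXED_TIMESTEP: f32 = 1.0 / 60.0; // assumed value, not from the repo

/// Drain the accumulator, returning how many fixed steps to simulate.
fn drain_steps(accumulator: &mut f32) -> u32 {
    let mut steps = 0;
    while *accumulator >= FIXED_TIMESTEP {
        *accumulator -= FIXED_TIMESTEP;
        steps += 1;
    }
    steps
}

fn main() {
    let mut acc = 0.0f32;
    acc += 3.5 * FIXED_TIMESTEP; // one long frame worth of delta
    let steps = drain_steps(&mut acc);
    assert_eq!(steps, 3); // three whole steps fit
    assert!(acc < FIXED_TIMESTEP); // half a step carries over
    println!("steps = {steps}, leftover = {acc}");
}
```

This keeps the physics update rate independent of the render rate, which matters for rapier3d determinism.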
@@ -165,10 +157,17 @@ fn main() -> Result<(), Box<dyn std::error::Error>>
    {
        if let Some(camera_transform) = world.transforms.get(camera_entity)
        {
            let view = get_view_matrix(&world, camera_entity, camera_transform, camera_component);
            let view =
                get_view_matrix(&world, camera_entity, camera_transform, camera_component);
            let projection = camera_component.projection_matrix();

            render::render_with_matrices(&view, &projection, &draw_calls, time);
            render::render_with_matrices(
                &view,
                &projection,
                camera_transform.position,
                &draw_calls,
                time,
            );
        }
    }

328
src/mesh.rs
@@ -1,5 +1,5 @@
use bytemuck::{Pod, Zeroable};
use glam::{Mat4, Vec3};
use glam::{Mat4, Quat, Vec3};
use std::path::Path;
use std::rc::Rc;

@@ -42,6 +42,65 @@ impl Vertex
    }
}

#[derive(Debug, Clone)]
pub struct InstanceData
{
    pub position: Vec3,
    pub rotation: Quat,
    pub scale: Vec3,
}

impl InstanceData
{
    pub fn to_raw(&self) -> InstanceRaw
    {
        let model = Mat4::from_scale_rotation_translation(self.scale, self.rotation, self.position);
        InstanceRaw {
            model: model.to_cols_array_2d(),
        }
    }
}

#[repr(C)]
#[derive(Clone, Copy, Pod, Zeroable)]
pub struct InstanceRaw
{
    pub model: [[f32; 4]; 4],
}

impl InstanceRaw
{
    pub fn desc() -> wgpu::VertexBufferLayout<'static>
    {
        wgpu::VertexBufferLayout {
            array_stride: std::mem::size_of::<InstanceRaw>() as wgpu::BufferAddress,
            step_mode: wgpu::VertexStepMode::Instance,
            attributes: &[
                wgpu::VertexAttribute {
                    offset: 0,
                    shader_location: 3,
                    format: wgpu::VertexFormat::Float32x4,
                },
                wgpu::VertexAttribute {
                    offset: std::mem::size_of::<[f32; 4]>() as wgpu::BufferAddress,
                    shader_location: 4,
                    format: wgpu::VertexFormat::Float32x4,
                },
                wgpu::VertexAttribute {
                    offset: (std::mem::size_of::<[f32; 4]>() * 2) as wgpu::BufferAddress,
                    shader_location: 5,
                    format: wgpu::VertexFormat::Float32x4,
                },
                wgpu::VertexAttribute {
                    offset: (std::mem::size_of::<[f32; 4]>() * 3) as wgpu::BufferAddress,
                    shader_location: 6,
                    format: wgpu::VertexFormat::Float32x4,
                },
            ],
        }
    }
}

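`InstanceRaw::desc()` above splits the 64-byte mat4 into four `Float32x4` attributes at shader locations 3 through 6, since vertex attributes cannot carry a full matrix. A standalone sketch (not repo code) confirming the offset arithmetic:

```rust
use std::mem::size_of;

// Each instance carries a column-major 4x4 model matrix = four vec4 columns.
fn main() {
    let vec4_bytes = size_of::<[f32; 4]>(); // 16
    let offsets: Vec<usize> = (0..4).map(|i| i * vec4_bytes).collect();
    // Byte offsets of the four vec4 attributes (shader locations 3, 4, 5, 6).
    assert_eq!(offsets, vec![0, 16, 32, 48]);
    // The instance stride is the whole matrix.
    assert_eq!(size_of::<[[f32; 4]; 4]>(), 64);
    println!("offsets = {offsets:?}");
}
```

The vertex shader then reassembles the four vec4s into a mat4 before multiplying positions.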
pub struct Mesh
{
    pub vertex_buffer: wgpu::Buffer,

@@ -411,4 +470,271 @@ impl Mesh
    {
        crate::render::with_device(|device| Mesh::load_gltf_mesh(device, path))
    }

    pub fn load_gltf_with_instances(
        device: &wgpu::Device,
        path: impl AsRef<Path>,
    ) -> anyhow::Result<Vec<(Mesh, Vec<InstanceData>)>>
    {
        let path = path.as_ref();
        let gltf_str = std::fs::read_to_string(path)?;
        let gltf_json: serde_json::Value = serde_json::from_str(&gltf_str)?;

        let (document, buffers, _images) = gltf::import(path)?;

        let mut result = Vec::new();

        let nodes = gltf_json["nodes"]
            .as_array()
            .ok_or_else(|| anyhow::anyhow!("Missing nodes array"))?;

        for (node_index, json_node) in nodes.iter().enumerate()
        {
            let node = document
                .nodes()
                .nth(node_index)
                .ok_or_else(|| anyhow::anyhow!("Node index mismatch"))?;

            if let Some(mesh_data) = node.mesh()
            {
                let has_instancing = json_node
                    .get("extensions")
                    .and_then(|ext| ext.get("EXT_mesh_gpu_instancing"))
                    .is_some();

                if has_instancing
                {
                    let extensions = json_node.get("extensions").unwrap();
                    let instancing_ext = extensions.get("EXT_mesh_gpu_instancing").unwrap();
                    let mut mesh_vertices = Vec::new();
                    let mut mesh_indices = Vec::new();

                    for primitive in mesh_data.primitives()
                    {
                        let reader = primitive
                            .reader(|buffer| buffers.get(buffer.index()).map(|data| &data[..]));

                        let positions = reader
                            .read_positions()
                            .ok_or_else(|| anyhow::anyhow!("Missing position data"))?
                            .collect::<Vec<[f32; 3]>>();

                        let normals = reader
                            .read_normals()
                            .ok_or_else(|| anyhow::anyhow!("Missing normal data"))?
                            .collect::<Vec<[f32; 3]>>();

                        let uvs = reader
                            .read_tex_coords(0)
                            .map(|iter| iter.into_f32().collect::<Vec<[f32; 2]>>())
                            .unwrap_or_else(|| vec![[0.0, 0.0]; positions.len()]);

                        let base_index = mesh_vertices.len() as u32;

                        for ((pos, normal), uv) in
                            positions.iter().zip(normals.iter()).zip(uvs.iter())
                        {
                            mesh_vertices.push(Vertex {
                                position: *pos,
                                normal: *normal,
                                uv: *uv,
                            });
                        }

                        if let Some(indices_reader) = reader.read_indices()
                        {
                            mesh_indices
                                .extend(indices_reader.into_u32().map(|i| i + base_index));
                        }
                    }

                    let attributes = instancing_ext
                        .get("attributes")
                        .and_then(|v| v.as_object())
                        .ok_or_else(|| anyhow::anyhow!("Missing attributes in EXT_mesh_gpu_instancing"))?;

                    let translation_accessor_index = attributes
                        .get("TRANSLATION")
                        .and_then(|v| v.as_u64())
                        .ok_or_else(|| anyhow::anyhow!("Missing TRANSLATION in instancing extension"))? as usize;

                    let rotation_accessor_index = attributes
                        .get("ROTATION")
                        .and_then(|v| v.as_u64())
                        .ok_or_else(|| anyhow::anyhow!("Missing ROTATION in instancing extension"))? as usize;

                    let scale_accessor_index = attributes
                        .get("SCALE")
                        .and_then(|v| v.as_u64())
                        .ok_or_else(|| anyhow::anyhow!("Missing SCALE in instancing extension"))? as usize;

                    let translations = Self::read_vec3_accessor(
                        &document,
                        &buffers,
                        translation_accessor_index,
                    )?;
                    let rotations =
                        Self::read_quat_accessor(&document, &buffers, rotation_accessor_index)?;
                    let scales =
                        Self::read_vec3_accessor(&document, &buffers, scale_accessor_index)?;

                    let instances: Vec<InstanceData> = translations
                        .into_iter()
                        .zip(rotations.into_iter())
                        .zip(scales.into_iter())
                        .map(|((position, rotation), scale)| InstanceData {
                            position,
                            rotation,
                            scale,
                        })
                        .collect();

                    let mesh = Mesh::new(device, &mesh_vertices, &mesh_indices);
                    result.push((mesh, instances));
                }
                else
                {
                    let mut mesh_vertices = Vec::new();
                    let mut mesh_indices = Vec::new();

                    for primitive in mesh_data.primitives()
                    {
                        let reader = primitive
                            .reader(|buffer| buffers.get(buffer.index()).map(|data| &data[..]));

                        let positions = reader
                            .read_positions()
                            .ok_or_else(|| anyhow::anyhow!("Missing position data"))?
                            .collect::<Vec<[f32; 3]>>();

                        let normals = reader
                            .read_normals()
                            .ok_or_else(|| anyhow::anyhow!("Missing normal data"))?
                            .collect::<Vec<[f32; 3]>>();

                        let uvs = reader
                            .read_tex_coords(0)
                            .map(|iter| iter.into_f32().collect::<Vec<[f32; 2]>>())
                            .unwrap_or_else(|| vec![[0.0, 0.0]; positions.len()]);

                        let base_index = mesh_vertices.len() as u32;

                        for ((pos, normal), uv) in
                            positions.iter().zip(normals.iter()).zip(uvs.iter())
                        {
                            mesh_vertices.push(Vertex {
                                position: *pos,
                                normal: *normal,
                                uv: *uv,
                            });
                        }

                        if let Some(indices_reader) = reader.read_indices()
                        {
                            mesh_indices.extend(indices_reader.into_u32().map(|i| i + base_index));
                        }
                    }

                    let mesh = Mesh::new(device, &mesh_vertices, &mesh_indices);
                    result.push((mesh, Vec::new()));
                }
            }
        }

        Ok(result)
    }

    fn read_vec3_accessor(
        document: &gltf::Document,
        buffers: &[gltf::buffer::Data],
        accessor_index: usize,
    ) -> anyhow::Result<Vec<Vec3>>
    {
        let accessor = document
            .accessors()
            .nth(accessor_index)
            .ok_or_else(|| anyhow::anyhow!("Invalid accessor index"))?;

        let buffer_view = accessor.view().ok_or_else(|| anyhow::anyhow!("Missing buffer view"))?;
        let buffer = &buffers[buffer_view.buffer().index()];
        let start = buffer_view.offset() + accessor.offset();
        let stride = buffer_view.stride().unwrap_or(12);

        let mut result = Vec::new();
        for i in 0..accessor.count()
        {
            let offset = start + i * stride;
            let x = f32::from_le_bytes([buffer[offset], buffer[offset + 1], buffer[offset + 2], buffer[offset + 3]]);
            let y = f32::from_le_bytes([buffer[offset + 4], buffer[offset + 5], buffer[offset + 6], buffer[offset + 7]]);
            let z = f32::from_le_bytes([buffer[offset + 8], buffer[offset + 9], buffer[offset + 10], buffer[offset + 11]]);
            result.push(Vec3::new(x, y, z));
        }

        Ok(result)
    }

    fn read_quat_accessor(
        document: &gltf::Document,
        buffers: &[gltf::buffer::Data],
        accessor_index: usize,
    ) -> anyhow::Result<Vec<Quat>>
    {
        let accessor = document
            .accessors()
            .nth(accessor_index)
            .ok_or_else(|| anyhow::anyhow!("Invalid accessor index"))?;

        let buffer_view = accessor.view().ok_or_else(|| anyhow::anyhow!("Missing buffer view"))?;
        let buffer = &buffers[buffer_view.buffer().index()];
        let start = buffer_view.offset() + accessor.offset();
        let stride = buffer_view.stride().unwrap_or(16);

        let mut result = Vec::new();
        for i in 0..accessor.count()
        {
            let offset = start + i * stride;
            let x = f32::from_le_bytes([buffer[offset], buffer[offset + 1], buffer[offset + 2], buffer[offset + 3]]);
            let y = f32::from_le_bytes([buffer[offset + 4], buffer[offset + 5], buffer[offset + 6], buffer[offset + 7]]);
            let z = f32::from_le_bytes([buffer[offset + 8], buffer[offset + 9], buffer[offset + 10], buffer[offset + 11]]);
            let w = f32::from_le_bytes([buffer[offset + 12], buffer[offset + 13], buffer[offset + 14], buffer[offset + 15]]);
            result.push(Quat::from_xyzw(x, y, z, w));
        }

        Ok(result)
    }
}

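The accessor readers above decode tightly packed little-endian floats with `from_le_bytes`. A standalone round-trip sketch of the same decoding (not repo code), using `chunks_exact` for the per-float grouping:

```rust
// Decode a packed little-endian f32 stream, as the accessor readers do
// element by element with from_le_bytes.
fn read_f32s(bytes: &[u8]) -> Vec<f32> {
    bytes
        .chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    // Round-trip one TRANSLATION (vec3) worth of data.
    let src = [1.0f32, -2.5, 0.25];
    let bytes: Vec<u8> = src.iter().flat_map(|f| f.to_le_bytes()).collect();
    let decoded = read_f32s(&bytes);
    assert_eq!(decoded, vec![1.0, -2.5, 0.25]);
    println!("{decoded:?}");
}
```

glTF mandates little-endian storage, so this decoding is endian-correct on any host.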
@@ -179,6 +179,8 @@ impl Player
            MeshComponent {
                mesh: Rc::new(mesh),
                pipeline: Pipeline::Render,
                instance_buffer: None,
                num_instances: 1,
            },
        );
        world.player_tags.insert(entity);

611
src/render.rs
@@ -3,6 +3,7 @@ use crate::mesh::Mesh;
use crate::postprocess::{create_blit_pipeline, create_fullscreen_quad, LowResFramebuffer};
use crate::shader::create_render_pipeline;
use crate::terrain::create_terrain_render_pipeline;
use crate::texture_loader::{DitherTextures, FlowmapTexture};
use crate::utility::transform::Transform;
use bytemuck::{Pod, Zeroable};
use glam::Mat4;
@@ -16,22 +17,42 @@ struct TerrainUniforms
    model: [[f32; 4]; 4],
    view: [[f32; 4]; 4],
    projection: [[f32; 4]; 4],
    light_view_projection: [[f32; 4]; 4],
    camera_position: [f32; 3],
    height_scale: f32,
    time: f32,
    _padding: [f32; 2],
    shadow_bias: f32,
    _padding1: [f32; 2],
    light_direction: [f32; 3],
    _padding2: u32,
}

impl TerrainUniforms
{
    fn new(model: Mat4, view: Mat4, projection: Mat4, height_scale: f32, time: f32) -> Self
    fn new(
        model: Mat4,
        view: Mat4,
        projection: Mat4,
        light_view_projection: Mat4,
        camera_position: glam::Vec3,
        height_scale: f32,
        time: f32,
        shadow_bias: f32,
        light_direction: glam::Vec3,
    ) -> Self
    {
        Self {
            model: model.to_cols_array_2d(),
            view: view.to_cols_array_2d(),
            projection: projection.to_cols_array_2d(),
            light_view_projection: light_view_projection.to_cols_array_2d(),
            camera_position: camera_position.to_array(),
            height_scale,
            time,
            _padding: [0.0; 2],
            shadow_bias,
            _padding1: [0.0; 2],
            light_direction: light_direction.to_array(),
            _padding2: 0,
        }
    }
}
@@ -51,13 +72,8 @@ pub struct DrawCall
    pub num_indices: u32,
    pub model: Mat4,
    pub pipeline: Pipeline,
}

pub struct TerrainData
{
    pub height_texture: wgpu::Texture,
    pub height_view: wgpu::TextureView,
    pub height_sampler: wgpu::Sampler,
    pub instance_buffer: Option<wgpu::Buffer>,
    pub num_instances: u32,
}

pub struct Renderer
@@ -85,7 +101,25 @@ pub struct Renderer
    terrain_bind_group: Option<wgpu::BindGroup>,
    terrain_height_scale: f32,

    shadow_pipeline: Option<wgpu::RenderPipeline>,
    shadow_bind_group_layout: wgpu::BindGroupLayout,
    shadow_bind_group: Option<wgpu::BindGroup>,

    wireframe_pipeline: wgpu::RenderPipeline,

    pub light_direction: glam::Vec3,
    pub shadow_focus_point: glam::Vec3,
    pub shadow_ortho_size: f32,
    pub shadow_distance: f32,
    pub shadow_bias: f32,

    shadow_map_texture: wgpu::Texture,
    shadow_map_view: wgpu::TextureView,
    shadow_map_sampler: wgpu::Sampler,
    shadow_map_size: u32,

    dither_textures: Option<DitherTextures>,
    flowmap_texture: Option<FlowmapTexture>,
}

impl Renderer
@@ -140,34 +174,181 @@ impl Renderer

        let uniform_buffer = device.create_buffer(&wgpu::BufferDescriptor {
            label: Some("Uniform Buffer"),
            size: std::mem::size_of::<CameraUniforms>() as wgpu::BufferAddress,
            size: std::mem::size_of::<TerrainUniforms>() as wgpu::BufferAddress,
            usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
            mapped_at_creation: false,
        });

        let bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
            label: Some("Bind Group Layout"),
            entries: &[wgpu::BindGroupLayoutEntry {
                binding: 0,
                visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
                ty: wgpu::BindingType::Buffer {
                    ty: wgpu::BufferBindingType::Uniform,
                    has_dynamic_offset: false,
                    min_binding_size: None,
                },
                count: None,
            }],
        let dither_textures = match DitherTextures::load_octaves(&device, &queue)
        {
            Ok(textures) =>
            {
                println!("Loaded dither textures successfully");
                Some(textures)
            }
            Err(e) =>
            {
                eprintln!(
                    "Warning: Could not load dither textures: {}. Rendering may look incorrect.",
                    e
                );
                None
            }
        };

        let flowmap_texture = match FlowmapTexture::load(&device, &queue, "textures/terrain_flowmap.exr")
        {
            Ok(texture) =>
            {
                println!("Loaded terrain flowmap successfully");
                Some(texture)
            }
            Err(e) =>
            {
                eprintln!(
                    "Warning: Could not load terrain flowmap: {}. Path lighting will not work.",
                    e
                );
                None
            }
        };

        let shadow_map_size = 4096;
        let shadow_map_texture = device.create_texture(&wgpu::TextureDescriptor {
            label: Some("Shadow Map"),
            size: wgpu::Extent3d {
                width: shadow_map_size,
                height: shadow_map_size,
                depth_or_array_layers: 1,
            },
            mip_level_count: 1,
            sample_count: 1,
            dimension: wgpu::TextureDimension::D2,
            format: wgpu::TextureFormat::Depth32Float,
            usage: wgpu::TextureUsages::RENDER_ATTACHMENT | wgpu::TextureUsages::TEXTURE_BINDING,
            view_formats: &[],
        });

        let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
            label: Some("Bind Group"),
            layout: &bind_group_layout,
            entries: &[wgpu::BindGroupEntry {
                binding: 0,
                resource: uniform_buffer.as_entire_binding(),
            }],
        let shadow_map_view = shadow_map_texture.create_view(&wgpu::TextureViewDescriptor::default());

        let shadow_map_sampler = device.create_sampler(&wgpu::SamplerDescriptor {
            label: Some("Shadow Map Sampler"),
            address_mode_u: wgpu::AddressMode::ClampToEdge,
            address_mode_v: wgpu::AddressMode::ClampToEdge,
            address_mode_w: wgpu::AddressMode::ClampToEdge,
            mag_filter: wgpu::FilterMode::Linear,
            min_filter: wgpu::FilterMode::Linear,
            mipmap_filter: wgpu::FilterMode::Nearest,
            compare: Some(wgpu::CompareFunction::LessEqual),
            ..Default::default()
        });

        let bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
            label: Some("Bind Group Layout"),
            entries: &[
                wgpu::BindGroupLayoutEntry {
                    binding: 0,
                    visibility: wgpu::ShaderStages::VERTEX | wgpu::ShaderStages::FRAGMENT,
                    ty: wgpu::BindingType::Buffer {
                        ty: wgpu::BufferBindingType::Uniform,
                        has_dynamic_offset: false,
                        min_binding_size: None,
                    },
                    count: None,
                },
                wgpu::BindGroupLayoutEntry {
                    binding: 1,
                    visibility: wgpu::ShaderStages::FRAGMENT,
                    ty: wgpu::BindingType::Texture {
                        sample_type: wgpu::TextureSampleType::Depth,
                        view_dimension: wgpu::TextureViewDimension::D2,
                        multisampled: false,
                    },
                    count: None,
                },
                wgpu::BindGroupLayoutEntry {
                    binding: 2,
                    visibility: wgpu::ShaderStages::FRAGMENT,
                    ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Comparison),
                    count: None,
                },
                wgpu::BindGroupLayoutEntry {
                    binding: 3,
                    visibility: wgpu::ShaderStages::FRAGMENT,
                    ty: wgpu::BindingType::Texture {
                        sample_type: wgpu::TextureSampleType::Float { filterable: false },
                        view_dimension: wgpu::TextureViewDimension::D2Array,
                        multisampled: false,
                    },
                    count: None,
                },
                wgpu::BindGroupLayoutEntry {
                    binding: 4,
                    visibility: wgpu::ShaderStages::FRAGMENT,
                    ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::NonFiltering),
                    count: None,
                },
                wgpu::BindGroupLayoutEntry {
                    binding: 5,
                    visibility: wgpu::ShaderStages::FRAGMENT,
                    ty: wgpu::BindingType::Texture {
                        sample_type: wgpu::TextureSampleType::Float { filterable: true },
                        view_dimension: wgpu::TextureViewDimension::D2,
                        multisampled: false,
                    },
                    count: None,
                },
                wgpu::BindGroupLayoutEntry {
                    binding: 6,
                    visibility: wgpu::ShaderStages::FRAGMENT,
                    ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
                    count: None,
                },
            ],
        });

        let bind_group = if let (Some(ref dither_tex), Some(ref flowmap)) = (&dither_textures, &flowmap_texture)
        {
            device.create_bind_group(&wgpu::BindGroupDescriptor {
                label: Some("Bind Group"),
                layout: &bind_group_layout,
                entries: &[
                    wgpu::BindGroupEntry {
                        binding: 0,
                        resource: uniform_buffer.as_entire_binding(),
                    },
                    wgpu::BindGroupEntry {
                        binding: 1,
                        resource: wgpu::BindingResource::TextureView(&shadow_map_view),
                    },
                    wgpu::BindGroupEntry {
                        binding: 2,
                        resource: wgpu::BindingResource::Sampler(&shadow_map_sampler),
                    },
                    wgpu::BindGroupEntry {
                        binding: 3,
                        resource: wgpu::BindingResource::TextureView(&dither_tex.view),
                    },
                    wgpu::BindGroupEntry {
                        binding: 4,
                        resource: wgpu::BindingResource::Sampler(&dither_tex.sampler),
                    },
                    wgpu::BindGroupEntry {
                        binding: 5,
                        resource: wgpu::BindingResource::TextureView(&flowmap.view),
                    },
                    wgpu::BindGroupEntry {
                        binding: 6,
                        resource: wgpu::BindingResource::Sampler(&flowmap.sampler),
                    },
                ],
            })
        }
        else
        {
            panic!("Cannot create renderer without dither textures and flowmap");
        };

        let render_pipeline = create_render_pipeline(&device, &config, &bind_group_layout);

        let (quad_vb, quad_ib, quad_num_indices) = create_fullscreen_quad(&device);
@@ -212,9 +393,9 @@ impl Renderer

        let blit_pipeline = create_blit_pipeline(&device, config.format, &blit_bind_group_layout);

        let terrain_bind_group_layout =
        let shadow_bind_group_layout =
            device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
                label: Some("Terrain Bind Group Layout"),
                label: Some("Shadow Bind Group Layout"),
                entries: &[
                    wgpu::BindGroupLayoutEntry {
                        binding: 0,
@@ -226,32 +407,9 @@ impl Renderer
                        },
                        count: None,
                    },
                    wgpu::BindGroupLayoutEntry {
                        binding: 1,
                        visibility: wgpu::ShaderStages::VERTEX,
                        ty: wgpu::BindingType::Texture {
                            sample_type: wgpu::TextureSampleType::Float { filterable: false },
                            view_dimension: wgpu::TextureViewDimension::D2,
                            multisampled: false,
                        },
                        count: None,
                    },
                    wgpu::BindGroupLayoutEntry {
                        binding: 2,
                        visibility: wgpu::ShaderStages::VERTEX,
                        ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::NonFiltering),
                        count: None,
                    },
                ],
            });

        let terrain_uniform_buffer = device.create_buffer(&wgpu::BufferDescriptor {
            label: Some("Terrain Uniform Buffer"),
            size: std::mem::size_of::<TerrainUniforms>() as wgpu::BufferAddress,
            usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
            mapped_at_creation: false,
        });

        let wireframe_pipeline =
            create_wireframe_pipeline(&device, config.format, &bind_group_layout);

@@ -262,19 +420,33 @@ impl Renderer
            config,
            framebuffer,
            render_pipeline,
            uniform_buffer,
            bind_group,
            uniform_buffer: uniform_buffer.clone(),
            bind_group: bind_group.clone(),
            quad_vb,
            quad_ib,
            quad_num_indices,
            blit_pipeline,
            blit_bind_group,
            terrain_pipeline: None,
            terrain_bind_group_layout,
            terrain_uniform_buffer,
            terrain_bind_group: None,
            terrain_bind_group_layout: bind_group_layout,
            terrain_uniform_buffer: uniform_buffer,
            terrain_bind_group: Some(bind_group),
            terrain_height_scale: 10.0,
            shadow_pipeline: None,
            shadow_bind_group_layout,
            shadow_bind_group: None,
            wireframe_pipeline,
            light_direction: glam::Vec3::new(-1.0, -0.5, 1.0).normalize(),
            shadow_focus_point: glam::Vec3::ZERO,
            shadow_ortho_size: 600.0,
            shadow_distance: 1000.0,
            shadow_bias: 0.001,
            shadow_map_texture,
            shadow_map_view,
            shadow_map_sampler,
            shadow_map_size,
            dither_textures,
            flowmap_texture,
        })
    }

@@ -282,14 +454,28 @@ impl Renderer
    {
        let view = camera.view_matrix();
        let projection = camera.projection_matrix();
        let light_view_projection = self.calculate_light_view_projection();

        self.render_shadow_pass(draw_calls, light_view_projection, time);

        for (i, draw_call) in draw_calls.iter().enumerate()
        {
            let uniforms = TerrainUniforms::new(
                draw_call.model,
                view,
                projection,
                light_view_projection,
                camera.position,
                self.terrain_height_scale,
                time,
                self.shadow_bias,
                self.light_direction,
            );

            match draw_call.pipeline
            {
                Pipeline::Render | Pipeline::Wireframe =>
                {
                    let uniforms = CameraUniforms::new(draw_call.model, view, projection);
                    self.queue.write_buffer(
                        &self.uniform_buffer,
                        0,
@@ -298,13 +484,6 @@ impl Renderer
                }
                Pipeline::Terrain =>
                {
-                   let uniforms = TerrainUniforms::new(
-                       draw_call.model,
-                       view,
-                       projection,
-                       self.terrain_height_scale,
-                       time,
-                   );
                    self.queue.write_buffer(
                        &self.terrain_uniform_buffer,
                        0,
@@ -384,9 +563,15 @@ impl Renderer
            render_pass.set_pipeline(pipeline);
            render_pass.set_bind_group(0, bind_group, &[]);
            render_pass.set_vertex_buffer(0, draw_call.vertex_buffer.slice(..));

+           if let Some(ref instance_buffer) = draw_call.instance_buffer
+           {
+               render_pass.set_vertex_buffer(1, instance_buffer.slice(..));
+           }
+
            render_pass
                .set_index_buffer(draw_call.index_buffer.slice(..), wgpu::IndexFormat::Uint32);
-           render_pass.draw_indexed(0..draw_call.num_indices, 0, 0..1);
+           render_pass.draw_indexed(0..draw_call.num_indices, 0, 0..draw_call.num_instances);
        }

        self.queue.submit(std::iter::once(encoder.finish()));
@@ -446,17 +631,22 @@ impl Renderer
        &mut self,
        view: &glam::Mat4,
        projection: &glam::Mat4,
+       camera_position: glam::Vec3,
        draw_calls: &[DrawCall],
        time: f32,
    )
    {
+       let light_view_projection = self.calculate_light_view_projection();
+
+       self.render_shadow_pass(draw_calls, light_view_projection, time);
+
        for (i, draw_call) in draw_calls.iter().enumerate()
        {
            match draw_call.pipeline
            {
                Pipeline::Render | Pipeline::Wireframe =>
                {
-                   let uniforms = CameraUniforms::new(draw_call.model, *view, *projection);
+                   let uniforms = CameraUniforms::new(draw_call.model, *view, *projection, self.light_direction);
                    self.queue.write_buffer(
                        &self.uniform_buffer,
                        0,
@@ -469,8 +659,12 @@ impl Renderer
                        draw_call.model,
                        *view,
                        *projection,
+                       light_view_projection,
+                       camera_position,
                        self.terrain_height_scale,
                        time,
+                       self.shadow_bias,
+                       self.light_direction,
                    );
                    self.queue.write_buffer(
                        &self.terrain_uniform_buffer,
@@ -552,11 +746,17 @@ impl Renderer
            render_pass.set_bind_group(0, bind_group, &[]);

            render_pass.set_vertex_buffer(0, draw_call.vertex_buffer.slice(..));

+           if let Some(ref instance_buffer) = draw_call.instance_buffer
+           {
+               render_pass.set_vertex_buffer(1, instance_buffer.slice(..));
+           }
+
            render_pass.set_index_buffer(
                draw_call.index_buffer.slice(..),
                wgpu::IndexFormat::Uint32,
            );
-           render_pass.draw_indexed(0..draw_call.num_indices, 0, 0..1);
+           render_pass.draw_indexed(0..draw_call.num_indices, 0, 0..draw_call.num_instances);
        }

        self.queue.submit(std::iter::once(encoder.finish()));
@@ -620,8 +820,18 @@ impl Renderer
        )
    }

-   pub fn set_terrain_data(&mut self, terrain_data: TerrainData)
+   pub fn set_terrain_data(&mut self)
    {
+       let dither_textures = self
+           .dither_textures
+           .as_ref()
+           .expect("Dither textures should be loaded during initialization");
+
+       let flowmap_texture = self
+           .flowmap_texture
+           .as_ref()
+           .expect("Flowmap texture should be loaded during initialization");
+
        let terrain_bind_group = self.device.create_bind_group(&wgpu::BindGroupDescriptor {
            label: Some("Terrain Bind Group"),
            layout: &self.terrain_bind_group_layout,
@@ -632,11 +842,38 @@ impl Renderer
                },
                wgpu::BindGroupEntry {
                    binding: 1,
-                   resource: wgpu::BindingResource::TextureView(&terrain_data.height_view),
+                   resource: wgpu::BindingResource::TextureView(&self.shadow_map_view),
                },
                wgpu::BindGroupEntry {
                    binding: 2,
-                   resource: wgpu::BindingResource::Sampler(&terrain_data.height_sampler),
+                   resource: wgpu::BindingResource::Sampler(&self.shadow_map_sampler),
                },
+               wgpu::BindGroupEntry {
+                   binding: 3,
+                   resource: wgpu::BindingResource::TextureView(&dither_textures.view),
+               },
+               wgpu::BindGroupEntry {
+                   binding: 4,
+                   resource: wgpu::BindingResource::Sampler(&dither_textures.sampler),
+               },
+               wgpu::BindGroupEntry {
+                   binding: 5,
+                   resource: wgpu::BindingResource::TextureView(&flowmap_texture.view),
+               },
+               wgpu::BindGroupEntry {
+                   binding: 6,
+                   resource: wgpu::BindingResource::Sampler(&flowmap_texture.sampler),
+               },
            ],
        });

+       let shadow_bind_group = self.device.create_bind_group(&wgpu::BindGroupDescriptor {
+           label: Some("Shadow Bind Group"),
+           layout: &self.shadow_bind_group_layout,
+           entries: &[
+               wgpu::BindGroupEntry {
+                   binding: 0,
+                   resource: self.terrain_uniform_buffer.as_entire_binding(),
+               },
+           ],
+       });
@@ -647,8 +884,13 @@ impl Renderer
            &self.terrain_bind_group_layout,
        );

+       let shadow_pipeline = create_shadow_pipeline(&self.device, &self.shadow_bind_group_layout);
+
        self.terrain_bind_group = Some(terrain_bind_group);
        self.terrain_pipeline = Some(terrain_pipeline);
+       self.shadow_bind_group = Some(shadow_bind_group);
+       self.shadow_pipeline = Some(shadow_pipeline);
+       self.terrain_height_scale = 1.0;
    }

    pub fn get_device(&self) -> &wgpu::Device
@@ -660,6 +902,119 @@ impl Renderer
    {
        self.config.width as f32 / self.config.height as f32
    }

    fn render_shadow_pass(
        &mut self,
        draw_calls: &[DrawCall],
        light_view_projection: Mat4,
        time: f32,
    )
    {
        let mut encoder = self
            .device
            .create_command_encoder(&wgpu::CommandEncoderDescriptor {
                label: Some("Shadow Pass Encoder"),
            });

        {
            let mut shadow_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
                label: Some("Shadow Pass"),
                color_attachments: &[],
                depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
                    view: &self.shadow_map_view,
                    depth_ops: Some(wgpu::Operations {
                        load: wgpu::LoadOp::Clear(1.0),
                        store: wgpu::StoreOp::Store,
                    }),
                    stencil_ops: None,
                }),
                timestamp_writes: None,
                occlusion_query_set: None,
            });

            shadow_pass.set_pipeline(
                self.shadow_pipeline
                    .as_ref()
                    .expect("shadow pipeline missing"),
            );
            shadow_pass.set_bind_group(
                0,
                self.shadow_bind_group
                    .as_ref()
                    .expect("shadow bind group missing"),
                &[],
            );

            for draw_call in draw_calls.iter()
            {
                if !matches!(draw_call.pipeline, Pipeline::Terrain)
                {
                    continue;
                }

                let uniforms = TerrainUniforms::new(
                    draw_call.model,
                    Mat4::IDENTITY,
                    light_view_projection,
                    light_view_projection,
                    glam::Vec3::ZERO,
                    self.terrain_height_scale,
                    time,
                    self.shadow_bias,
                    self.light_direction,
                );
                self.queue.write_buffer(
                    &self.terrain_uniform_buffer,
                    0,
                    bytemuck::cast_slice(&[uniforms]),
                );

                shadow_pass.set_vertex_buffer(0, draw_call.vertex_buffer.slice(..));

                if let Some(ref instance_buffer) = draw_call.instance_buffer
                {
                    shadow_pass.set_vertex_buffer(1, instance_buffer.slice(..));
                }

                shadow_pass.set_index_buffer(
                    draw_call.index_buffer.slice(..),
                    wgpu::IndexFormat::Uint32,
                );
                shadow_pass.draw_indexed(0..draw_call.num_indices, 0, 0..draw_call.num_instances);
            }
        }

        self.queue.submit(std::iter::once(encoder.finish()));
    }

    fn calculate_light_view_projection(&self) -> Mat4
    {
        let light_dir = self.light_direction.normalize();
        let light_position = self.shadow_focus_point - light_dir * self.shadow_distance;

        let light_view = Mat4::look_at_rh(
            light_position,
            self.shadow_focus_point,
            glam::Vec3::Y,
        );

        let far_plane = self.shadow_distance * 2.0 + 50.0;
        let light_projection = Mat4::orthographic_rh(
            -self.shadow_ortho_size,
            self.shadow_ortho_size,
            -self.shadow_ortho_size,
            self.shadow_ortho_size,
            0.1,
            far_plane,
        );

        println!("Shadow Frustum - Size: {:.1}×{:.1}, Coverage: {:.1}×{:.1}, Depth: 0.1-{:.1}, Focus: {:?}, Light: {:?}",
            self.shadow_ortho_size * 2.0, self.shadow_ortho_size * 2.0,
            self.shadow_ortho_size, self.shadow_ortho_size,
            far_plane, self.shadow_focus_point, light_position);

        light_projection * light_view
    }
}

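The frustum placement done by `calculate_light_view_projection` — back the light away from the focus point along the light direction, then size the far plane to cover the slab behind it — can be sketched in isolation. A minimal, hypothetical version with plain arrays standing in for `glam` types:

```rust
// Sketch of the shadow-frustum placement math, with plain arrays
// standing in for glam vectors. Helper names are illustrative only.

fn normalize(v: [f32; 3]) -> [f32; 3] {
    let len = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
    [v[0] / len, v[1] / len, v[2] / len]
}

/// Mirrors `shadow_focus_point - light_dir * shadow_distance`:
/// the light eye sits `distance` units behind the focus point.
fn light_position(focus: [f32; 3], light_dir: [f32; 3], distance: f32) -> [f32; 3] {
    let d = normalize(light_dir);
    [
        focus[0] - d[0] * distance,
        focus[1] - d[1] * distance,
        focus[2] - d[2] * distance,
    ]
}

fn main() {
    // A light pointing straight down ends up directly above the focus.
    let pos = light_position([0.0, 0.0, 0.0], [0.0, -1.0, 0.0], 1000.0);
    assert_eq!(pos, [0.0, 1000.0, 0.0]);

    // Far plane formula from the renderer: distance * 2.0 + 50.0.
    let far_plane = 1000.0_f32 * 2.0 + 50.0;
    assert_eq!(far_plane, 2050.0);
    println!("light at {:?}, far plane {}", pos, far_plane);
}
```

The `* 2.0 + 50.0` margin ensures casters between the light and the focus point, and a band behind it, all land inside the orthographic depth range.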
thread_local! {

@@ -693,12 +1048,12 @@ where
    })
}

-pub fn set_terrain_data(terrain_data: TerrainData)
+pub fn set_terrain_data()
{
    GLOBAL_RENDERER.with(|r| {
        let mut renderer = r.borrow_mut();
        let renderer = renderer.as_mut().expect("Renderer not set");
-       renderer.set_terrain_data(terrain_data);
+       renderer.set_terrain_data();
    });
}

@@ -723,6 +1078,7 @@ pub fn render(camera: &Camera, draw_calls: &[DrawCall], time: f32)
pub fn render_with_matrices(
    view: &glam::Mat4,
    projection: &glam::Mat4,
+   camera_position: glam::Vec3,
    draw_calls: &[DrawCall],
    time: f32,
)
@@ -730,18 +1086,115 @@ pub fn render_with_matrices(
    GLOBAL_RENDERER.with(|r| {
        let mut renderer = r.borrow_mut();
        let renderer = renderer.as_mut().expect("Renderer not set");
-       renderer.render_with_matrices(view, projection, draw_calls, time);
+       renderer.render_with_matrices(view, projection, camera_position, draw_calls, time);
    });
}

pub fn set_shadow_focus_point(focus_point: glam::Vec3)
{
    GLOBAL_RENDERER.with(|r| {
        let mut renderer = r.borrow_mut();
        let renderer = renderer.as_mut().expect("Renderer not set");
        renderer.shadow_focus_point = focus_point;
    });
}

pub fn set_shadow_ortho_size(size: f32)
{
    GLOBAL_RENDERER.with(|r| {
        let mut renderer = r.borrow_mut();
        let renderer = renderer.as_mut().expect("Renderer not set");
        renderer.shadow_ortho_size = size;
    });
}

pub fn set_shadow_distance(distance: f32)
{
    GLOBAL_RENDERER.with(|r| {
        let mut renderer = r.borrow_mut();
        let renderer = renderer.as_mut().expect("Renderer not set");
        renderer.shadow_distance = distance;
    });
}

pub fn set_shadow_bias(bias: f32)
{
    GLOBAL_RENDERER.with(|r| {
        let mut renderer = r.borrow_mut();
        let renderer = renderer.as_mut().expect("Renderer not set");
        renderer.shadow_bias = bias;
    });
}

fn create_shadow_pipeline(
    device: &wgpu::Device,
    bind_group_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline
{
    let shared_source =
        std::fs::read_to_string("shaders/shared.wgsl").expect("Failed to read shared shader");
    let terrain_source =
        std::fs::read_to_string("shaders/terrain.wgsl").expect("Failed to read terrain shader");
    let shader_source = format!("{}\n{}", shared_source, terrain_source);

    let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("Shadow Shader"),
        source: wgpu::ShaderSource::Wgsl(shader_source.into()),
    });

    let render_pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
        label: Some("Shadow Pipeline Layout"),
        bind_group_layouts: &[bind_group_layout],
        push_constant_ranges: &[],
    });

    device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
        label: Some("Shadow Pipeline"),
        layout: Some(&render_pipeline_layout),
        vertex: wgpu::VertexState {
            module: &shader,
            entry_point: Some("vs_main"),
            buffers: &[crate::mesh::Vertex::desc(), crate::mesh::InstanceRaw::desc()],
            compilation_options: Default::default(),
        },
        fragment: None,
        primitive: wgpu::PrimitiveState {
            topology: wgpu::PrimitiveTopology::TriangleList,
            strip_index_format: None,
            front_face: wgpu::FrontFace::Ccw,
            cull_mode: Some(wgpu::Face::Back),
            polygon_mode: wgpu::PolygonMode::Fill,
            unclipped_depth: false,
            conservative: false,
        },
        depth_stencil: Some(wgpu::DepthStencilState {
            format: wgpu::TextureFormat::Depth32Float,
            depth_write_enabled: true,
            depth_compare: wgpu::CompareFunction::Less,
            stencil: wgpu::StencilState::default(),
            bias: wgpu::DepthBiasState::default(),
        }),
        multisample: wgpu::MultisampleState {
            count: 1,
            mask: !0,
            alpha_to_coverage_enabled: false,
        },
        multiview: None,
        cache: None,
    })
}

fn create_wireframe_pipeline(
    device: &wgpu::Device,
    format: wgpu::TextureFormat,
    bind_group_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline
{
-   let shader_source =
+   let shared_source =
+       std::fs::read_to_string("shaders/shared.wgsl").expect("Failed to read shared shader");
+   let standard_source =
        std::fs::read_to_string("shaders/standard.wgsl").expect("Failed to read shader");
+   let shader_source = format!("{}\n{}", shared_source, standard_source);

    let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("Wireframe Shader"),
@@ -760,7 +1213,7 @@ fn create_wireframe_pipeline(
        vertex: wgpu::VertexState {
            module: &shader,
            entry_point: Some("vs_main"),
-           buffers: &[crate::mesh::Vertex::desc()],
+           buffers: &[crate::mesh::Vertex::desc(), crate::mesh::InstanceRaw::desc()],
            compilation_options: Default::default(),
        },
        fragment: Some(wgpu::FragmentState {

@@ -1,4 +1,4 @@
-use crate::mesh::Vertex;
+use crate::mesh::{InstanceRaw, Vertex};

pub fn create_render_pipeline(
    device: &wgpu::Device,
@@ -6,8 +6,11 @@ pub fn create_render_pipeline(
    bind_group_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline
{
-   let shader_source =
+   let shared_source =
+       std::fs::read_to_string("shaders/shared.wgsl").expect("Failed to read shared shader");
+   let standard_source =
        std::fs::read_to_string("shaders/standard.wgsl").expect("Failed to read standard shader");
+   let shader_source = format!("{}\n{}", shared_source, standard_source);

    let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("Shader"),
@@ -26,7 +29,7 @@ pub fn create_render_pipeline(
        vertex: wgpu::VertexState {
            module: &shader,
            entry_point: Some("vs_main"),
-           buffers: &[Vertex::desc()],
+           buffers: &[Vertex::desc(), InstanceRaw::desc()],
            compilation_options: Default::default(),
        },
        fragment: Some(wgpu::FragmentState {

@@ -126,7 +126,7 @@ pub fn camera_noclip_system(world: &mut World, input_state: &InputState, delta:
    }
    if input_state.space
    {
-       input_vec.y += 1.0;
+       input_vec.y += 10.0;
    }

    if input_vec.length_squared() > 0.0
@@ -137,7 +137,7 @@ pub fn camera_noclip_system(world: &mut World, input_state: &InputState, delta:
    let mut speed = 10.0 * delta;
    if input_state.shift
    {
-       speed *= 2.0;
+       speed *= 10.0;
    }

    if let Some(camera_transform) = world.transforms.get_mut(camera_entity)

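The noclip movement above normalizes the accumulated input vector (so diagonal movement is not faster) and then scales it by a frame-rate-independent speed, 10× while shift is held. A minimal std-only sketch of that math, with plain arrays standing in for `glam::Vec3` and an illustrative `step` helper:

```rust
// Sketch of the noclip movement math: normalize the input vector,
// then scale by 10.0 * delta (x10 while shift is held). Plain arrays
// stand in for glam::Vec3; the helper name is hypothetical.

fn step(mut input: [f32; 3], shift: bool, delta: f32) -> [f32; 3] {
    // Normalize so pressing two keys at once is not faster than one.
    let len_sq: f32 = input.iter().map(|c| c * c).sum();
    if len_sq > 0.0 {
        let len = len_sq.sqrt();
        for c in input.iter_mut() {
            *c /= len;
        }
    }

    // Frame-rate-independent speed, boosted while shift is down.
    let mut speed = 10.0 * delta;
    if shift {
        speed *= 10.0;
    }
    input.map(|c| c * speed)
}

fn main() {
    // Forward only, 16 ms frame: moves 10.0 * 0.016 = 0.16 units.
    let v = step([0.0, 0.0, 1.0], false, 0.016);
    assert!((v[2] - 0.16).abs() < 1e-6);

    // Shift multiplies the speed by 10.
    let fast = step([0.0, 0.0, 1.0], true, 0.016);
    assert!((fast[2] - 1.6).abs() < 1e-5);
}
```

Note that with this scheme the diff's `input_vec.y += 10.0` only biases the direction upward before normalization; the actual vertical speed is still capped by the shared `speed` factor.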
@@ -1,5 +1,9 @@
+use crate::mesh::InstanceRaw;
use crate::render::DrawCall;
use crate::world::World;
+use bytemuck::cast_slice;
+use glam::Mat4;
+use wgpu::util::DeviceExt;

pub fn render_system(world: &World) -> Vec<DrawCall>
{
@@ -11,12 +15,38 @@ pub fn render_system(world: &World) -> Vec<DrawCall>
        let transform = world.transforms.get(entity)?;
        let mesh_component = world.meshes.get(entity)?;

+       let model_matrix = transform.to_matrix();
+
+       let (instance_buffer, num_instances) = if let Some(ref buffer) =
+           mesh_component.instance_buffer
+       {
+           (Some(buffer.clone()), mesh_component.num_instances)
+       }
+       else
+       {
+           let instance_data = InstanceRaw {
+               model: model_matrix.to_cols_array_2d(),
+           };
+
+           let buffer = crate::render::with_device(|device| {
+               device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
+                   label: Some("Instance Buffer"),
+                   contents: cast_slice(&[instance_data]),
+                   usage: wgpu::BufferUsages::VERTEX,
+               })
+           });
+
+           (Some(buffer), 1)
+       };
+
        Some(DrawCall {
            vertex_buffer: mesh_component.mesh.vertex_buffer.clone(),
            index_buffer: mesh_component.mesh.index_buffer.clone(),
            num_indices: mesh_component.mesh.num_indices,
-           model: transform.to_matrix(),
+           model: model_matrix,
            pipeline: mesh_component.pipeline,
+           instance_buffer,
+           num_instances,
        })
    })
    .collect()

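The `render_system` change above gives every mesh an instance buffer: if the component already carries one it is reused, otherwise a single-instance buffer is built from the entity's own model matrix, so all pipelines can assume a per-instance vertex buffer at slot 1. The selection logic can be sketched without wgpu, using `[[f32; 4]; 4]` as a stand-in for `InstanceRaw` (types and helper name are illustrative):

```rust
// Sketch of the single-instance fallback: reuse existing instance data,
// otherwise synthesize one instance from the model matrix.
// `[[f32; 4]; 4]` stands in for `InstanceRaw`; names are hypothetical.

type RawMat = [[f32; 4]; 4];

const IDENTITY: RawMat = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
];

/// Returns (instance data, instance count).
fn resolve_instances(existing: Option<Vec<RawMat>>, model: RawMat) -> (Vec<RawMat>, u32) {
    match existing {
        Some(instances) => {
            let n = instances.len() as u32;
            (instances, n)
        }
        // Fallback: one instance placed at the entity's model matrix.
        None => (vec![model], 1),
    }
}

fn main() {
    let (data, n) = resolve_instances(None, IDENTITY);
    assert_eq!(n, 1);
    assert_eq!(data[0], IDENTITY);

    let (_, n) = resolve_instances(Some(vec![IDENTITY; 3]), IDENTITY);
    assert_eq!(n, 3);
}
```

One caveat worth noting about the real code: the fallback allocates a fresh GPU buffer every frame for every non-instanced mesh, which is a plausible target for caching later.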
src/terrain.rs
@@ -1,7 +1,7 @@
use std::rc::Rc;

use exr::prelude::{ReadChannels, ReadLayers};
-use glam::{Vec2, Vec3};
+use glam::Vec2;
use nalgebra::{vector, DMatrix};
use rapier3d::{
    math::Isometry,
@@ -11,75 +11,152 @@
use crate::{
    components::{MeshComponent, PhysicsComponent},
    entity::EntityHandle,
-   mesh::{Mesh, Vertex},
+   mesh::{InstanceRaw, Mesh, Vertex},
    physics::PhysicsManager,
    render,
    world::{Transform, World},
};

+pub struct TerrainConfig
+{
+   pub gltf_path: String,
+   pub heightmap_path: String,
+   pub size: Vec2,
+}
+
+impl TerrainConfig
+{
+   pub fn new(gltf_path: &str, heightmap_path: &str, size: Vec2) -> Self
+   {
+       Self {
+           gltf_path: gltf_path.to_string(),
+           heightmap_path: heightmap_path.to_string(),
+           size,
+       }
+   }
+
+   pub fn default() -> Self
+   {
+       Self {
+           gltf_path: "meshes/terrain.gltf".to_string(),
+           heightmap_path: "textures/terrain.exr".to_string(),
+           size: Vec2::new(1000.0, 1000.0),
+       }
+   }
+}
+
pub struct Terrain;

impl Terrain
{
-   pub fn spawn(
-       world: &mut World,
-       heightmap_path: &str,
-       height_scale: f32,
-   ) -> anyhow::Result<EntityHandle>
+   pub fn spawn(world: &mut World, config: &TerrainConfig) -> anyhow::Result<EntityHandle>
    {
-       let entity = world.spawn();
-
-       let plane_size = Vec2::new(100.0, 100.0);
-
-       let plane_mesh = render::with_device(|device| {
-           Mesh::create_plane_mesh(device, plane_size.x, plane_size.y, 100, 100)
-       });
+       let gltf_data = render::with_device(|device| {
+           Mesh::load_gltf_with_instances(device, &config.gltf_path)
+       })?;

+       let terrain_entity = world.spawn();
        let transform = Transform::IDENTITY;

-       world.transforms.insert(entity, transform);
-       world.meshes.insert(
-           entity,
-           MeshComponent {
-               mesh: Rc::new(plane_mesh),
-               pipeline: render::Pipeline::Terrain,
-           },
-       );
+       let mut terrain_mesh = None;
+       let mut tree_mesh = None;
+       let mut tree_instances = None;

-       let heights = Self::load_heightfield_data(heightmap_path)?;
+       for (mesh, instances) in gltf_data
+       {
+           if instances.is_empty()
+           {
+               if terrain_mesh.is_none()
+               {
+                   terrain_mesh = Some(mesh);
+               }
+           }
+           else
+           {
+               tree_mesh = Some(mesh);
+               tree_instances = Some(instances);
+           }
+       }

-       println!(
-           "Heightmap dimensions: {} rows × {} cols",
-           heights.nrows(),
-           heights.ncols()
-       );
+       if let Some(terrain_mesh) = terrain_mesh
+       {
+           world.transforms.insert(terrain_entity, transform);
+           world.meshes.insert(
+               terrain_entity,
+               MeshComponent {
+                   mesh: Rc::new(terrain_mesh),
+                   pipeline: render::Pipeline::Terrain,
+                   instance_buffer: None,
+                   num_instances: 1,
+               },
+           );

-       let scale = vector![plane_size.x, height_scale, plane_size.y,];
+           let heights = Self::load_heightfield_from_exr(&config.heightmap_path)?;

-       let body = RigidBodyBuilder::fixed()
-           .translation(transform.get_position().into())
-           .build();
+           println!(
+               "Loaded terrain: {} rows × {} cols heightfield from EXR",
+               heights.nrows(),
+               heights.ncols()
+           );

-       let rigidbody_handle = PhysicsManager::add_rigidbody(body);
+           let height_scale = 1.0;
+           let scale = vector![config.size.x, height_scale, config.size.y];

-       let collider = ColliderBuilder::heightfield(heights.clone(), scale).build();
+           let body = RigidBodyBuilder::fixed()
+               .translation(transform.get_position().into())
+               .build();

-       let collider_handle = PhysicsManager::add_collider(collider, Some(rigidbody_handle));
+           let rigidbody_handle = PhysicsManager::add_rigidbody(body);

-       PhysicsManager::set_heightfield_data(heights, scale, transform.get_position().into());
+           let collider = ColliderBuilder::heightfield(heights.clone(), scale).build();

-       world.physics.insert(
-           entity,
-           PhysicsComponent {
-               rigidbody: rigidbody_handle,
-               collider: Some(collider_handle),
-           },
-       );
+           let collider_handle = PhysicsManager::add_collider(collider, Some(rigidbody_handle));

-       Ok(entity)
+           PhysicsManager::set_heightfield_data(heights, scale, transform.get_position().into());
+
+           world.physics.insert(
+               terrain_entity,
+               PhysicsComponent {
+                   rigidbody: rigidbody_handle,
+                   collider: Some(collider_handle),
+               },
+           );
+       }
+
+       if let (Some(tree_mesh), Some(instances)) = (tree_mesh, tree_instances)
+       {
+           let num_instances = instances.len();
+           println!("Loaded {} tree instances", num_instances);
+
+           let tree_entity = world.spawn();
+
+           let instance_raw: Vec<InstanceRaw> = instances.iter().map(|i| i.to_raw()).collect();
+
+           let instance_buffer = render::with_device(|device| {
+               use wgpu::util::DeviceExt;
+               device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
+                   label: Some("Tree Instance Buffer"),
+                   contents: bytemuck::cast_slice(&instance_raw),
+                   usage: wgpu::BufferUsages::VERTEX,
+               })
+           });
+
+           world.transforms.insert(tree_entity, Transform::IDENTITY);
+           world.meshes.insert(
+               tree_entity,
+               MeshComponent {
+                   mesh: Rc::new(tree_mesh),
+                   pipeline: render::Pipeline::Render,
+                   instance_buffer: Some(instance_buffer),
+                   num_instances: num_instances as u32,
+               },
+           );
+       }
+
+       Ok(terrain_entity)
    }

-   fn load_heightfield_data(path: &str) -> anyhow::Result<DMatrix<f32>>
+   fn load_heightfield_from_exr(path: &str) -> anyhow::Result<DMatrix<f32>>
    {
        let image = exr::prelude::read()
            .no_deep_data()
@@ -107,8 +184,11 @@ pub fn create_terrain_render_pipeline(
    bind_group_layout: &wgpu::BindGroupLayout,
) -> wgpu::RenderPipeline
{
-   let shader_source =
+   let shared_source =
+       std::fs::read_to_string("shaders/shared.wgsl").expect("Failed to read shared shader");
+   let terrain_source =
        std::fs::read_to_string("shaders/terrain.wgsl").expect("Failed to read terrain shader");
+   let shader_source = format!("{}\n{}", shared_source, terrain_source);

    let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("Terrain Shader"),
@@ -127,7 +207,7 @@ pub fn create_terrain_render_pipeline(
        vertex: wgpu::VertexState {
            module: &shader,
            entry_point: Some("vs_main"),
-           buffers: &[Vertex::desc()],
+           buffers: &[Vertex::desc(), InstanceRaw::desc()],
            compilation_options: Default::default(),
        },
        fragment: Some(wgpu::FragmentState {

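The heightfield collider above scales the raw EXR samples by `vector![config.size.x, height_scale, config.size.y]`. The mapping from a matrix cell to a world-space point can be sketched under the assumption (rapier3d's documented convention, but worth verifying) that a heightfield is centered at the origin and spans `scale.x × scale.z` in the XZ plane, with heights multiplied by `scale.y`:

```rust
// Sketch of heightfield cell -> world mapping, assuming rapier3d's
// centered-at-origin convention. Hypothetical helper, not project code.

fn cell_to_world(
    row: usize, col: usize,     // cell in the height matrix
    nrows: usize, ncols: usize, // matrix dimensions
    h: f32,                     // raw height sample from the EXR
    scale: [f32; 3],            // [size.x, height_scale, size.y]
) -> [f32; 3] {
    // Columns run along X, rows along Z; both spans are centered on 0.
    let x = (col as f32 / (ncols - 1) as f32 - 0.5) * scale[0];
    let z = (row as f32 / (nrows - 1) as f32 - 0.5) * scale[2];
    [x, h * scale[1], z]
}

fn main() {
    // Corner cell of a 3x3 field over a 1000x1000 terrain, height 0.25.
    let p = cell_to_world(0, 0, 3, 3, 0.25, [1000.0, 1.0, 1000.0]);
    assert_eq!(p, [-500.0, 0.25, -500.0]);

    // The center cell sits at the origin in the XZ plane.
    let c = cell_to_world(1, 1, 3, 3, 0.0, [1000.0, 1.0, 1000.0]);
    assert_eq!(c, [0.0, 0.0, 0.0]);
}
```

With `height_scale = 1.0`, the physics heights match the values Blender baked into the glTF vertices directly, which is what keeps the collider and the rendered mesh aligned.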
src/texture_loader.rs (new file)
@@ -0,0 +1,238 @@
use anyhow::Result;
use exr::prelude::{ReadChannels, ReadLayers};
use half::f16;

pub struct DitherTextures
{
    pub texture_array: wgpu::Texture,
    pub view: wgpu::TextureView,
    pub sampler: wgpu::Sampler,
}

pub struct FlowmapTexture
{
    pub texture: wgpu::Texture,
    pub view: wgpu::TextureView,
    pub sampler: wgpu::Sampler,
}

impl DitherTextures
{
    pub fn load_octaves(device: &wgpu::Device, queue: &wgpu::Queue) -> Result<Self>
    {
        let octave_paths = [
            "textures/dither/octave_0.png",
            "textures/dither/octave_1.png",
            "textures/dither/octave_2.png",
            "textures/dither/octave_3.png",
        ];

        let mut images = Vec::new();
        let mut texture_size = 0;

        for path in &octave_paths
        {
            let img = image::open(path)?.to_luma8();
            let (width, height) = img.dimensions();

            if texture_size == 0
            {
                texture_size = width;
            }
            else if width != texture_size || height != texture_size
            {
                return Err(anyhow::anyhow!(
                    "All dither textures must be the same size. Expected {}x{}, got {}x{}",
                    texture_size,
                    texture_size,
                    width,
                    height
                ));
            }

            if width != height
            {
                return Err(anyhow::anyhow!(
                    "Dither textures must be square. Got {}x{}",
                    width,
                    height
                ));
            }

            images.push(img);
        }

        let texture_array = device.create_texture(&wgpu::TextureDescriptor {
            label: Some("Dither Texture Array"),
            size: wgpu::Extent3d {
                width: texture_size,
                height: texture_size,
                depth_or_array_layers: 4,
            },
            mip_level_count: 1,
            sample_count: 1,
            dimension: wgpu::TextureDimension::D2,
            format: wgpu::TextureFormat::R8Unorm,
            usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
            view_formats: &[],
        });

        for (i, img) in images.iter().enumerate()
        {
            queue.write_texture(
                wgpu::TexelCopyTextureInfo {
                    texture: &texture_array,
                    mip_level: 0,
                    origin: wgpu::Origin3d {
                        x: 0,
                        y: 0,
                        z: i as u32,
                    },
                    aspect: wgpu::TextureAspect::All,
                },
                img.as_raw(),
                wgpu::TexelCopyBufferLayout {
                    offset: 0,
                    bytes_per_row: Some(texture_size),
                    rows_per_image: Some(texture_size),
                },
                wgpu::Extent3d {
                    width: texture_size,
                    height: texture_size,
                    depth_or_array_layers: 1,
                },
            );
        }

        let view = texture_array.create_view(&wgpu::TextureViewDescriptor {
            label: Some("Dither Texture Array View"),
            dimension: Some(wgpu::TextureViewDimension::D2Array),
            ..Default::default()
        });

        let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
            label: Some("Dither Sampler"),
            address_mode_u: wgpu::AddressMode::Repeat,
            address_mode_v: wgpu::AddressMode::Repeat,
            address_mode_w: wgpu::AddressMode::Repeat,
            mag_filter: wgpu::FilterMode::Nearest,
            min_filter: wgpu::FilterMode::Nearest,
            mipmap_filter: wgpu::FilterMode::Nearest,
            ..Default::default()
        });

        Ok(Self {
            texture_array,
            view,
            sampler,
        })
    }
}

impl FlowmapTexture
{
    pub fn load(device: &wgpu::Device, queue: &wgpu::Queue, path: &str) -> Result<Self>
    {
        let image = exr::prelude::read()
            .no_deep_data()
            .largest_resolution_level()
            .all_channels()
            .all_layers()
            .all_attributes()
            .from_file(path)?;

        let layer = &image.layer_data[0];
        let width = layer.size.width();
        let height = layer.size.height();

        if width != height
        {
            return Err(anyhow::anyhow!(
                "Flowmap texture must be square. Got {}x{}",
                width,
                height
            ));
        }

        let mut rgba_data: Vec<f32> = vec![1.0; width * height * 4];

        for channel in &layer.channel_data.list
        {
            let channel_name = channel.name.to_string();
            let values: Vec<f32> = channel.sample_data.values_as_f32().collect();

            let target_channel = match channel_name.as_str()
            {
                "R" => 0,
                "G" => 1,
                "B" => 2,
                "A" => 3,
                _ => continue,
            };

            for (i, &value) in values.iter().enumerate()
            {
                rgba_data[i * 4 + target_channel] = value;
            }
        }

        let texture = device.create_texture(&wgpu::TextureDescriptor {
            label: Some("Flowmap Texture"),
            size: wgpu::Extent3d {
                width: width as u32,
                height: height as u32,
                depth_or_array_layers: 1,
            },
            mip_level_count: 1,
            sample_count: 1,
            dimension: wgpu::TextureDimension::D2,
            format: wgpu::TextureFormat::Rgba16Float,
            usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
            view_formats: &[],
        });

        let rgba_data_f16: Vec<u16> = rgba_data
            .iter()
            .map(|&f| f16::from_f32(f).to_bits())
            .collect();

        queue.write_texture(
            wgpu::TexelCopyTextureInfo {
                texture: &texture,
                mip_level: 0,
                origin: wgpu::Origin3d::ZERO,
                aspect: wgpu::TextureAspect::All,
            },
            bytemuck::cast_slice(&rgba_data_f16),
            wgpu::TexelCopyBufferLayout {
                offset: 0,
                bytes_per_row: Some(8 * width as u32),
                rows_per_image: Some(height as u32),
            },
            wgpu::Extent3d {
                width: width as u32,
                height: height as u32,
                depth_or_array_layers: 1,
            },
        );

        let view = texture.create_view(&wgpu::TextureViewDescriptor::default());

        let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
            label: Some("Flowmap Sampler"),
            address_mode_u: wgpu::AddressMode::Repeat,
            address_mode_v: wgpu::AddressMode::Repeat,
            address_mode_w: wgpu::AddressMode::Repeat,
            mag_filter: wgpu::FilterMode::Linear,
            min_filter: wgpu::FilterMode::Linear,
            mipmap_filter: wgpu::FilterMode::Nearest,
            ..Default::default()
        });

        Ok(Self {
            texture,
            view,
            sampler,
        })
    }
}

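`FlowmapTexture::load` converts EXR's planar layout (one buffer per channel) into the interleaved RGBA texels the GPU texture expects, defaulting missing channels to 1.0. That copy can be sketched in isolation (std-only; the helper name and the `&str` channel tags are illustrative):

```rust
// Sketch of the planar-to-interleaved RGBA copy done by
// FlowmapTexture::load: each named channel plane is scattered into
// its slot of the interleaved buffer; absent channels stay at 1.0.

fn interleave(planes: &[(&str, Vec<f32>)], pixels: usize) -> Vec<f32> {
    // Default every component to 1.0, matching the loader's fill value.
    let mut rgba = vec![1.0_f32; pixels * 4];
    for (name, values) in planes {
        let target = match *name {
            "R" => 0,
            "G" => 1,
            "B" => 2,
            "A" => 3,
            _ => continue, // unknown channels are skipped, as in the loader
        };
        for (i, &v) in values.iter().enumerate() {
            rgba[i * 4 + target] = v;
        }
    }
    rgba
}

fn main() {
    // Two pixels with only R and G planes present: B and A stay 1.0.
    let rgba = interleave(&[("R", vec![0.1, 0.2]), ("G", vec![0.3, 0.4])], 2);
    assert_eq!(rgba, vec![0.1, 0.3, 1.0, 1.0, 0.2, 0.4, 1.0, 1.0]);
}
```

The `bytes_per_row: Some(8 * width)` in the real upload follows from this layout: 4 components per texel at 2 bytes each for `Rgba16Float`.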
BIN textures/dither/octave_0.png (new file, 425 B)
BIN textures/dither/octave_1.png (new file, 751 B)
BIN textures/dither/octave_2.png (new file, 824 B)
BIN textures/dither/octave_3.png (new file, 425 B)
BIN textures/path_direction_debug.png (new file, 23 KiB)
BIN textures/path_distance_debug.png (new file, 392 KiB)
BIN textures/path_hotspot_debug.exr (new file)
BIN textures/path_hotspot_heatmap.png (new file, 46 KiB)
BIN textures/path_segment_debug.png (new file, 184 KiB)
BIN textures/terrain.exr (new file)
BIN textures/terrain_flowmap.exr (new file)
BIN textures/terrain_flowmap.png (new file, 2.6 MiB)
BIN textures/terrain_normals.png (new file, 910 KiB)