No mockups. The actual model.
Most "AI visualizations" online are decoration. Dots pulsing to a fake rhythm. A metaphor with no model behind it.
Neuropulse is the opposite. Every brightness, every line, every motion is a direct readout of a real WebGPU buffer mid-forward-pass. When the model thinks, you watch it think — not a representation.
Strict 1:1. Every pixel a function of a real tensor.
3.8B weights in your GPU. Attention in WGSL. Next-token sampled in your tab. No server. No API key. Close the tab and the inference stops.
Every part of the model, labeled.
The 3D scene is not a metaphor. Every glowing element maps to a specific tensor in Phi-3-mini's compute graph.
The 3,072 points of the residual stream are laid out by PCA of the model's layer-0 qkv_proj weights — so dims read into attention together sit near each other. Each point's brightness is the live value of that residual dim on every step.
Hover an attention head — the brightness you see is that head's output magnitude.
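The value-to-brightness mapping can be sketched in a few lines. This is an illustrative sketch, not the shipped shader code: activations are unbounded, so some squashing into [0, 1] is needed before a raw tensor value can drive a point's glow. The `tanh` squash, the scale constant, and both function names are assumptions.

```typescript
// Illustrative sketch (not Neuropulse's actual code): squash an unbounded
// residual activation into [0, 1] so it can drive a point's brightness.
function residualToBrightness(value: number, scale = 2.0): number {
  // tanh maps to (-1, 1); abs folds sign into intensity
  return Math.abs(Math.tanh(value / scale));
}

// One frame: a readback of the 3,072-dim residual becomes per-point brightness.
function frameBrightness(residual: Float32Array): Float32Array {
  return Float32Array.from(residual, (v) => residualToBrightness(v));
}
```

The key property is that the map is monotone in magnitude and sign-blind: a strongly negative activation glows as brightly as a strongly positive one.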
- 32 layer rings: Each ring is one transformer block. Brightness tracks the post-attention plus post-FFN residual norm for that layer; watch the signal build as the prompt flows upward.
- 32 attention heads per layer: Cyan neurons on the outer ring. Each lights up in proportion to its head's output magnitude. 1,024 heads in total, all live.
- FFN slab: The violet 8,192-neuron expansion, by far the largest compute budget in the model. You can see it pulse as the MLP activates.
- Residual stream (3,072 dims): The highway through the network. 3,072 points, one per dim, placed by PCA of the layer-0 qkv_proj weights so functionally related dims sit near each other. Brightness on each point is the live residual value at that dim.
- KV cache strips: The growing memory of past tokens. Each strip is one position; height equals cache fill for that layer.
- LM head: Final projection to 32,064 vocab logits. Softmax → next token. The live top-k distribution prints to the side panel as the model decodes.
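The side-panel math for the last step above is small enough to write out. A self-contained sketch of softmax over the vocab logits followed by a top-k pick; the function names are illustrative, not the demo's exported API.

```typescript
// Softmax over logits, with the standard max-subtraction for numeric stability.
function softmax(logits: Float32Array): Float32Array {
  let m = -Infinity;
  for (const x of logits) m = Math.max(m, x);
  const out = new Float32Array(logits.length);
  let sum = 0;
  for (let i = 0; i < logits.length; i++) {
    out[i] = Math.exp(logits[i] - m);
    sum += out[i];
  }
  for (let i = 0; i < out.length; i++) out[i] /= sum;
  return out;
}

// Top-k entries by probability, as shown in the side panel.
function topK(probs: Float32Array, k: number): { id: number; p: number }[] {
  return Array.from(probs, (p, id) => ({ id, p }))
    .sort((a, b) => b.p - a.p)
    .slice(0, k);
}
```

Over a 32,064-entry vocab a full sort is wasteful but simple; a partial selection would do the same job.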
Cross-checked against reference Phi-3.
"Strict 1:1" is a strong claim, so it has to be falsifiable. Neuropulse ships with a built-in test suite that diffs the WebGPU implementation against a reference HuggingFace fp16 Phi-3-mini on a fixed set of prompts cached as reference.json. Click the wrench icon inside the demo to run it — the actual numbers from your GPU print to your browser console.
Expect tiny deltas at the hidden-state level — that's the cost of int4, not drift. What matters is the last line: identical top-1 tokens vs fp16 Phi-3 on the test set. Re-run it on your own machine in under a minute.
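The shape of that pass/fail criterion is easy to state in code. A hedged sketch, since the real suite and the reference.json schema may differ: hidden states are compared with a tolerance, while the decoded top-1 token ids must match the fp16 reference exactly.

```typescript
// Exact-match check on decoded top-1 token ids vs the fp16 reference.
function top1Match(got: number[], reference: number[]): boolean {
  return got.length === reference.length && got.every((t, i) => t === reference[i]);
}

// Worst-case elementwise delta between two hidden-state readbacks.
function maxAbsDelta(a: Float32Array, b: Float32Array): number {
  let d = 0;
  for (let i = 0; i < a.length; i++) d = Math.max(d, Math.abs(a[i] - b[i]));
  return d; // expected small but nonzero: int4 quantization, not implementation drift
}
```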
How it's built.
Four pieces. No frameworks for the inference path, no dependency soup, no clever tricks hiding the model from you.
- WebGPU compute & WGSL: 13 pipelines, 22 buffers, 292 dispatches per token. Quantization: q4f16_1. Hand-written attention and FFN kernels.
- MLC Phi-3-mini weights: The same weights as mlc-ai/Phi-3-mini-4k-instruct-q4f16_1-MLC, fetched directly from HuggingFace and cached in the browser's Cache API.
- Three.js scene: Plain WebGLRenderer. No bloom, no particles, no decorative shaders. Every pixel pulls from a real tensor on every frame.
- PCA layout from the model's own weights: Residual points are placed by PCA of layer 0's qkv_proj.weight columns; FFN points by PCA of down_proj.weight. Dims that get read or written together end up near each other, so the geometry is shaped by the model itself, not by hand.
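The weight-driven layout idea compresses to a short sketch. This is not Neuropulse's actual code: take one vector per on-screen point (e.g. a column of qkv_proj.weight), find the top-2 principal directions by power iteration, and use the projections as 2D coordinates. All names here (pcaLayout2D, powerIteration) are illustrative, and a production version would use a proper eigensolver.

```typescript
const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);

// Dominant eigenvector of a small symmetric matrix via power iteration.
function powerIteration(cov: number[][], iters = 200): number[] {
  let v = cov.map((_, i) => Math.cos(i + 1)); // deterministic start vector
  for (let t = 0; t < iters; t++) {
    const w = cov.map((row) => dot(row, v));
    const n = Math.hypot(...w);
    if (n < 1e-12) break; // matrix is numerically zero: keep current v
    v = w.map((x) => x / n);
  }
  return v;
}

// One 2D coordinate per input vector, from the top-2 principal directions.
function pcaLayout2D(columns: number[][]): [number, number][] {
  const n = columns.length, d = columns[0].length;
  const mean = Array.from({ length: d }, (_, j) => columns.reduce((s, c) => s + c[j], 0) / n);
  const centered = columns.map((c) => c.map((x, j) => x - mean[j]));
  // d x d covariance of the centered vectors
  const cov = Array.from({ length: d }, (_, i) =>
    Array.from({ length: d }, (_, j) => centered.reduce((s, c) => s + c[i] * c[j], 0) / n));
  const pc1 = powerIteration(cov);
  const lam1 = dot(pc1, cov.map((row) => dot(row, pc1))); // top eigenvalue (Rayleigh quotient)
  // deflate so the second power iteration finds the next direction
  const defl = cov.map((row, i) => row.map((x, j) => x - lam1 * pc1[i] * pc1[j]));
  let pc2 = powerIteration(defl);
  // re-orthogonalize against pc1 to guard against deflation round-off
  const proj = dot(pc2, pc1);
  pc2 = pc2.map((x, i) => x - proj * pc1[i]);
  const n2 = Math.hypot(...pc2);
  pc2 = n2 > 1e-12 ? pc2.map((x) => x / n2) : pc2.map(() => 0);
  return centered.map((c) => [dot(c, pc1), dot(c, pc2)] as [number, number]);
}
```

The point of doing this with the model's own weight columns is the one stated above: nothing about the geometry is hand-placed.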
See it for yourself.
Open Neuropulse. Feed it a prompt. Watch the model think. First load streams ~2 GB into your GPU; next visit is instant (OPFS-cached to disk).
Launch Neuropulse →