Record: LeakyReLU² + Legal Score-First TTT + Parallel Muon — val_bpb 1.1194 (3-seed mean) #549
Merged
valerio-oai merged 3 commits into openai:main on Mar 24, 2026
Conversation
…ed mean)
LeakyReLU(0.5)² activation (-0.003 vs relu²) + legal score-first TTT (PR openai#461 recipe, 3ep SGD, all blocks unfrozen) + BigramHash(1536) on openai#414 stack with Parameter Banking + Parallel Muon (PR openai#399).
3-seed results:
- Seed 1337: 1.1192 bpb, 410s TTT, 15.98 MB
- Seed 42: 1.1200 bpb, 408s TTT, 15.88 MB
- Seed 2025: 1.1189 bpb, 408s TTT, 15.99 MB
- Mean: 1.1194 (std 0.0006)
All artifacts under 16MB. All eval under 10 min.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from f6a0b0d to 8ff3e0e
ADIITJ added a commit to ADIITJ/parameter-golf that referenced this pull request on Mar 23, 2026
11L, XSA all layers, partial RoPE 16/64, LN scale, VE128 (layers 9,10), LeakyReLU(0.5)² activation, BigramHash(2048), INT6+zstd-22.
Legal score-first TTT: 32K chunks, all blocks, SGD(0.002, mom=0.9), 3ep.
Base: PR openai#503 (EthanYangTW) + LeakyReLU² from openai#518/openai#549 + SGD from openai#549.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
anthony-maio added a commit to anthony-maio/parameter-golf that referenced this pull request on Mar 24, 2026
Multiple top PRs (openai#535, openai#549, openai#569) demonstrate -0.0015 to -0.003 bpb from this change. LeakyReLU preserves gradient flow through negative pre-activations while maintaining the sparsity/gating benefits of squaring. At 22M params, dead neurons from hard ReLU are expensive.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor
Looks legal, clears the 0.005 nats test, so merging into the leaderboard. Well done!
valerio-oai approved these changes on Mar 24, 2026
Contributor (Author)
ayeee
Contributor (Author)
@valerio-oai just noticed there's a wrong username in the leaderboard.
Rajat123456789 added a commit to Rajat123456789/parameter-golf that referenced this pull request on Mar 24, 2026
Four novel improvements over PR openai#549 (1.1194 BPB) base:
- Full GPTQ quantization with Hessian-guided error compensation
- Soft-round QAT with tanh-based temperature annealing
- LoRA-based test-time training (rank-8 adapters on Q/K/V/O)
- Entropy-coded compression (Huffman+LZMA adaptive selection)
Made-with: Cursor
senstar-hsoleimani added a commit to senstar-hsoleimani/parameter-golf that referenced this pull request on Mar 24, 2026
Track: 10min_16mb
Based on: PR openai#549 (LeakyReLU+ParallelMuon), PR openai#606 (Soft-Round+AdamW TTT), PR openai#609 (XSA-all+Full GPTQ)
Changes from SOTA (openai#549):
- XSA on all 11 layers (was 4)
- Soft-Round QAT with tanh-based differentiable rounding (alpha 1->16)
- Full GPTQ with Hessian-aware column-reordered Cholesky error compensation
- MHA 8/8 (was GQA 8/4)
- MLP 3.5x expansion (1792 hidden, was 3.0x/1536)
- BigramHash vocabulary 8192 (was 2048)
- AdamW TTT with grouped LR and cosine schedule (was SGD)
- Early QAT threshold 0.5 (was late 0.15)
- Selective ±1 magnitude pruning to hit size target
Contributor
Whoops, really sorry about the wrong username -- I thought something looked wrong! Fixing it now.
sunnypatneedi added a commit to sunnypatneedi/parameter-golf that referenced this pull request on Mar 24, 2026
Run 0: PR openai#549 UNMODIFIED (merged SOTA 1.1194, verified 3-seed)
Run 1: PR openai#549 + TTT_ENABLED=1 + TTT_LR=0.0005 (2 lines changed)
Both have FA3→FA2→SDPA fallback for non-Hopper GPUs.
Following retro: one change per run, baseline first.
Expected: Run 1 should achieve ~1.094-1.104 (beats 1.1144 target).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sunnypatneedi pushed a commit to sunnypatneedi/parameter-golf that referenced this pull request on Mar 24, 2026
Documents merged SOTA of 1.1194 (PR openai#549, LeakyReLU² + Legal TTT + Parallel Muon), confirmed technique deltas, enforcement ruling on GPTQ calibration, and the path forward to beat 1.1144. https://claude.ai/code/session_01U3LXGzTkedd9ZcHF2qgW7d
RichiiiTV pushed a commit to RichiiiTV/parameter-golf that referenced this pull request on Mar 24, 2026
abaybektursun added a commit to abaybektursun/parameter-golf that referenced this pull request on Mar 24, 2026
Case study: reordering training shards by model difficulty (hardest first) gives -0.0033 BPB improvement over sequential ordering. Zero architecture changes, zero compute cost, ten lines of code.
Key finding: token-level statistics (KL divergence) find 0.0009 range across shards. Model perplexity finds 0.0475 range -- 100x more variation. The two metrics are uncorrelated (r = -0.056).
3-seed validated on PR openai#549 (merged openai#1):
- Seed 1337: 1.1217 -> 1.1183 (-0.0034)
- Seed 42: 1.1222 -> 1.1181 (-0.0041)
- Seed 2025: 1.1221 -> 1.1198 (-0.0023)
- Mean: 1.1220 -> 1.1187 (-0.0033)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
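A minimal sketch of the ranking step described in that commit, under assumptions (shards as batched token tensors, mean next-token loss as the difficulty score; names are illustrative, not this fork's actual code):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def order_shards_hardest_first(model, shards):
    """Score each shard by mean model loss (a perplexity proxy), then
    return shards sorted hardest-first. Per the commit, this model-based
    score spreads ~0.0475 across shards vs ~0.0009 for token-level KL."""
    def difficulty(shard):                       # shard: (B, T) LongTensor
        logits = model(shard[:, :-1])
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            shard[:, 1:].reshape(-1)).item()
    return sorted(shards, key=difficulty, reverse=True)
```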
ADIITJ added a commit to ADIITJ/parameter-golf that referenced this pull request on Mar 27, 2026
Preliminary non-record run: val_bpb=1.1882 (seed 1337, 2002 steps, no torch.compile). Artifact 18.8MB (over 16MB limit) — proper rerun with torch.compile pending.
Additions over PR openai#549 (SOTA 1.1194):
- VRL: Value Residual Learning on all 11 layers via sigmoid gates
- Full GPTQ: Hessian Cholesky int6 with 256-batch calibration
- BigramHash 1536 → 3072
- Tight SWA preferred over EMA when snapshots exist
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Hilo-Hilo added a commit to Hilo-Hilo/parameter-golf that referenced this pull request on Mar 27, 2026
- shadeform_dispatch.sh: direct REST API, picks cuda12.4 OS image, flash-attn installs from pre-built wheel in 10 sec (vs 20 min build)
- shadeform_cleanup.sh: DELETE instances via API
- shadeform_reconcile.sh: reconcile leases via API
- branch_cycle.sh: add shadeform backend case
- supervisor.sh: add shadeform reconcile
- start_swarm.sh: add shadeform preflight check
- train_gpt.py: SOTA PR openai#549 recipe (LeakyReLU + Legal TTT + Parallel Muon) with FA3→FA2 import swap (identical API, ~10% slower)
Usage: DISPATCH_BACKEND=shadeform scripts/start_swarm.sh --pipeline --workers 2 --watchers 2
sunnypatneedi added a commit to sunnypatneedi/parameter-golf that referenced this pull request on Mar 27, 2026
3-seed mean 0.8609 bpb (42→0.8600, 1337→0.8611, 2025→0.8616). All artifacts under 16MB. 11-gram n-gram cache with entropy-adaptive alpha and Hedge Mixer on PR openai#549 base architecture.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
vivekvar-dl pushed a commit to vivekvar-dl/parameter-golf that referenced this pull request on Mar 27, 2026
Built on PR openai#549 stack. Adds document-isolated TTT (reset optimizer at BOS boundaries) and temperature scaling. Pending 8xH100 validation.
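The document-isolated TTT idea is simple to state in code. A minimal sketch under assumptions (a 1D token stream, SGD as the TTT optimizer, a known BOS id; names and chunking are illustrative, not this fork's actual API):

```python
import torch
import torch.nn.functional as F

def document_isolated_ttt(model, tokens, bos_id, lr=5e-4, chunk_len=2048):
    """Sketch of 'reset optimizer at BOS boundaries': a fresh SGD is
    created whenever a chunk contains a document start, so momentum
    from one document never bleeds into the next."""
    def fresh_opt():
        return torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    opt = fresh_opt()
    for chunk in tokens.split(chunk_len):
        if (chunk == bos_id).any():              # a new document begins here
            opt = fresh_opt()                    # drop stale momentum state
        logits = model(chunk[:-1].unsqueeze(0)).squeeze(0)
        loss = F.cross_entropy(logits, chunk[1:])
        opt.zero_grad()
        loss.backward()
        opt.step()
```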
theLightArchitect added a commit to theLightArchitect/parameter-golf that referenced this pull request on Mar 27, 2026
Maps every top entry through BPB = L + Q + T + M:
- openai#700 solved M (mixer) but has worst L (training)
- openai#609 solved Q (quant) but has zero T and M (no eval pipeline)
- openai#549 solved L (training) but has zero M (no mixer)
- Nobody has optimized all four terms simultaneously
- Theoretical optimal = 1.052 (combine best of each)
- Our Track B path to 1.025 via recurrence + FiLM-only TTT + Mixer
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Kevin Francis Tan <kf.tan@lightarchitects.io>
theLightArchitect added a commit to theLightArchitect/parameter-golf that referenced this pull request on Mar 27, 2026
…eframe
Corrections:
- T+M are combined (-0.020), not separate. PR openai#700 gets -0.073 (3.6x better)
- Our Q gap (0.066) is larger than the openai#549-openai#700 total gap — Q is THE bottleneck
- Added "Best Known" column comparing against best per-term, not just merged SOTA
New insights added:
- Kaplan width scaling, hidden ≥ 512 threshold, Goldilocks depth
- MoE viability at small scale (inactive experts compress well)
- Vocab expansion opportunity (mechanical BPB reduction)
- Compression reframe: BPB competition = compression competition, 20 years of literature
- Strategic evolution: feature bloat → simplify → Q bottleneck → compression-first approach
- Theoretical optimal 1.052 = combine best of openai#549 + openai#609 + openai#700 (nobody has done this)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Kevin Francis Tan <kf.tan@lightarchitects.io>
mrbese pushed a commit to mrbese/parameter-golf that referenced this pull request on Mar 27, 2026
- Fork of the openai#1 leaderboard train_gpt.py (LeakyReLU², XSA, EMA, Parallel Muon, TTT, GPTQ) with minimal changes to support the BESE tokenizer
- Dual tokenizer dispatch: .json loads BESE, .model loads SentencePiece (see the sketch below)
- All SOTA architecture preserved, only tokenizer loading changed
- Add --sota flag to runpod_v2.py to select the SOTA train script
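A sketch of that dual-tokenizer dispatch, selecting the backend by file extension. `BESETokenizer` and its import path are assumptions about this fork; SentencePiece is the real library:

```python
from pathlib import Path

def load_tokenizer(path: str):
    """Dispatch on extension: .json loads BESE, anything else (.model)
    loads SentencePiece."""
    if Path(path).suffix == ".json":
        from bese_tokenizer import BESETokenizer   # hypothetical module
        return BESETokenizer.from_file(path)
    import sentencepiece as spm
    return spm.SentencePieceProcessor(model_file=path)
```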
aryanbhosale added a commit to aryanbhosale/parameter-golf that referenced this pull request on Mar 28, 2026
- slope 0.75 + LR 0.027 + warmdown 3700 (PR openai#977)
- No SWA with QAT (PR openai#989)
- QAT from 50% + range fix [-31,31]
- mHC 22-param residual mixing (PR openai#928)
- VE128 + no gated_attn + no value_residual (PR openai#549)
- LZMA preset 7 compression (PR openai#999)
- Muon TTT with NS3 (PR openai#999)
- Entropy-adaptive TTT epochs 2/3/4 (PR openai#999)
- Per-layer TTT LR (PR openai#995)
- TTT momentum 0.95 (PR openai#995)
ADIITJ added a commit to ADIITJ/parameter-golf that referenced this pull request on Mar 28, 2026
…ssion
3-seed results (1337, 42, 45): mean val_bpb=1.1264, artifact ~15.8MB.
Forked from PR openai#549 (1.1194 SOTA). Adds VRL, BigramHash 3072, Tight SWA, zstd-22, sliding window eval fix. Drops Full GPTQ. TTT enabled by default.
eamon831 added a commit to eamon831/parameter-golf that referenced this pull request on Mar 28, 2026
SOTA PR openai#549 code + JEPA (Joint-Embedding Predictive Architecture):
- LatentProjector encoder/predictor/target modules
- Multi-horizon future prediction (1,2,4,8 steps)
- VICReg-style variance/covariance regularization
- Target encoder updated via EMA (decay=0.996)
- Toggled via JEPA_ENABLED env var (default: off)
Waiting on RunPod credits to measure BPB + ms/step impact.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
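For reference, the EMA target-encoder update in that list is a one-liner per parameter. A minimal sketch with generic module names (not the fork's actual classes):

```python
import torch

@torch.no_grad()
def ema_update(target_encoder, online_encoder, decay=0.996):
    """Target weights trail the online encoder as an exponential
    moving average: t <- decay * t + (1 - decay) * o."""
    for t, o in zip(target_encoder.parameters(), online_encoder.parameters()):
        t.mul_(decay).add_(o.detach(), alpha=1.0 - decay)
```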
caum-systems added a commit to caum-systems/parameter-golf that referenced this pull request on Mar 28, 2026
…enizer
Changes from PR openai#549 record (LeakyReLU² + Legal TTT + Parallel Muon):
- vocab_size: 1024 → 16384 (3.92 bytes/token = structural BPB gain)
- data/tokenizer paths: sp1024 → sp16384
- SVDEmbedding(rank=32): factored embedding saves ~3.5MB compressed; U(16384×32) + S(32) + V(32×512) = 540K params vs 8.4M standard
- Artifact estimate: ~15.9 MB (fits 16MB limit)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
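A minimal sketch of that rank-32 factored embedding, using the dimensions given in the commit; the fork's actual module may differ (e.g. SVD-initialized rather than randomly initialized):

```python
import torch
import torch.nn as nn

class SVDEmbedding(nn.Module):
    """The full (vocab x d_model) table is replaced by U @ diag(S) @ V,
    so only U(16384x32), S(32), and V(32x512) are stored: ~540K params
    instead of the 8.4M of a standard embedding."""

    def __init__(self, vocab_size=16384, d_model=512, rank=32):
        super().__init__()
        self.U = nn.Parameter(torch.randn(vocab_size, rank) * 0.02)
        self.S = nn.Parameter(torch.ones(rank))
        self.V = nn.Parameter(torch.randn(rank, d_model) * 0.02)

    def forward(self, token_ids):
        # Gather rows of U first, then project: never materializes
        # the full embedding table.
        return (self.U[token_ids] * self.S) @ self.V
```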
Note: cosine LR decay was introduced in #481. Glad to see it here!
Record: LeakyReLU² + Legal TTT + Parallel Muon — val_bpb 1.1194
val_bpb = 1.1194 (3-seed mean, std 0.0006) | ~15.95 MB | 8×H100 SXM
3-Seed Results (8×H100 80GB SXM, PyTorch 2.9.1+cu128)

| Seed | val_bpb | TTT time | Artifact |
|------|---------|----------|----------|
| 1337 | 1.1192 | 410s | 15.98 MB |
| 42 | 1.1200 | 408s | 15.88 MB |
| 2025 | 1.1189 | 408s | 15.99 MB |
| Mean | 1.1194 (std 0.0006) | ~409s | ~15.95 MB |
Key Innovation: LeakyReLU(0.5)²
One-line activation change delivering -0.003 BPB vs standard relu²:
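A minimal sketch of the change, with the relu² baseline shown for contrast; the PR's actual module in train_gpt.py may differ:

```python
import torch
import torch.nn.functional as F

def relu_squared(x: torch.Tensor) -> torch.Tensor:
    # Baseline relu²: zero gradient for all negative pre-activations,
    # so neurons that go negative stop learning ("die").
    return F.relu(x).square()

def leaky_relu_squared(x: torch.Tensor) -> torch.Tensor:
    # LeakyReLU(0.5)²: negative inputs map to (0.5x)², whose gradient
    # (0.5x) is nonzero, so gradient keeps flowing through them.
    return F.leaky_relu(x, negative_slope=0.5).square()
```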
Preserves negative gradient flow through the MLP. Source: PR #493 by @parinzee (ablated at -0.003), PR #518 by @sofiabod.
Legal TTT (Score-First, PR #461 Framework)
Every token is scored before any weight update, enforced by torch.inference_mode(). Adapted from PR #461 by @Christopher-Lee-McClendon (changed freeze=2 → freeze=0 based on our ablation showing that unfreezing all blocks is optimal at 3 epochs).
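A minimal sketch of the score-first loop under assumed names (1D token chunks, cross-entropy in nats; the per-chunk epoch count is an illustrative reading of "3ep"); the PR's real loop lives in train_gpt.py:

```python
import torch
import torch.nn.functional as F

def score_first_ttt(model, chunks, epochs=3, lr=2e-3):
    """Legal score-first TTT: every chunk is scored under
    torch.inference_mode() BEFORE any weight update sees it, then
    used for adaptation with all blocks unfrozen (freeze=0)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    nats, tokens = 0.0, 0
    for chunk in chunks:                         # chunk: (T,) LongTensor
        with torch.inference_mode():             # score first, no updates yet
            logits = model(chunk[:-1].unsqueeze(0)).squeeze(0)
            nats += F.cross_entropy(logits, chunk[1:], reduction="sum").item()
            tokens += chunk.numel() - 1
        for _ in range(epochs):                  # then adapt on the scored chunk
            logits = model(chunk[:-1].unsqueeze(0)).squeeze(0)
            loss = F.cross_entropy(logits, chunk[1:])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return nats / tokens                         # mean nats/token; bpb downstream
```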
Total eval: ~530s (120s standard + 409s TTT) — within 10 min limit.
Training Architecture
PR #414 stack + Parameter Banking + Parallel Muon (PR #399).
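For reference, the core of Muon is a Newton-Schulz orthogonalization of each 2D update matrix; a standard sketch follows (the "parallel" part of PR #399, sharding these orthogonalizations across data-parallel ranks, is omitted here):

```python
import torch

@torch.no_grad()
def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Quintic Newton-Schulz iteration that approximately orthogonalizes
    a 2D gradient/momentum matrix, the core step of the Muon optimizer.
    Uses the widely adopted (3.4445, -4.7750, 2.0315) coefficients."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.to(torch.bfloat16)
    transposed = G.size(0) > G.size(1)
    if transposed:
        X = X.T
    X = X / (X.norm() + 1e-7)        # scale so the spectral norm is <= 1
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)
```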
Credits
🤖 Generated with Claude Code