
Non-record: TTT-LoRA Base — HumanAI Convention (val_bpb=1.2364) #600

Open

humanaiconvention wants to merge 1 commit into openai:main from humanaiconvention:submission/ttt-lora-base

Conversation

@humanaiconvention

Submission: TTT-LoRA Base

Track: track_non_record_16mb
Author: HumanAI Convention (@humanaiconvention)
val_bpb: 1.23637747
Submission size: 15,669,326 bytes

Novel contribution

This submission introduces per-document Test-Time Training via LoRA as an evaluation-time technique. During the 10-minute evaluation budget, each validation document receives its own fresh rank-128 LoRA adapters trained with Adam on the preceding chunks before predicting the next chunk.

This approach is orthogonal to every current leaderboard entry: it exploits the separate 10-minute evaluation budget, which no existing submission uses.
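For concreteness, here is a minimal sketch of the technique, assuming a PyTorch model whose linear layers are wrapped in LoRA adapters. `LoRALinear`, `ttt_eval_document`, and the chunk fields (`inputs`, `targets`, `num_tokens`) are illustrative names, not the submission's actual code:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear plus a trainable low-rank update: y = Wx + (alpha/r) * B(Ax)."""

    def __init__(self, base: nn.Linear, rank: int = 128, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # production weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no-op at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)


def ttt_eval_document(model, lora_params, chunks, loss_fn, lr=1e-3, steps_per_chunk=1):
    """Score one document chunk by chunk. After scoring chunk t, take Adam steps on it,
    so chunk t+1 is predicted with adapters trained on all preceding chunks. The caller
    re-initialises `lora_params` before each document, giving every document fresh adapters."""
    opt = torch.optim.Adam(lora_params, lr=lr)
    total_loss, total_tokens = 0.0, 0
    for chunk in chunks:
        with torch.no_grad():  # scoring pass: no adaptation on the chunk being scored
            logits = model(chunk.inputs)
            total_loss += loss_fn(logits, chunk.targets).item() * chunk.num_tokens
        total_tokens += chunk.num_tokens
        for _ in range(steps_per_chunk):  # adaptation pass on the chunk just scored
            opt.zero_grad()
            loss_fn(model(chunk.inputs), chunk.targets).backward()
            opt.step()
    return total_loss / total_tokens  # mean per-token loss; normalise to bits per byte as usual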

Base submission note

TTT is disabled (TTT_ENABLED=0) in this submission. Local testing showed a consistent improvement (−0.136 bpb on a 50-document smoke test), but the first production run revealed that the learning rate tuned on the smoke models (0.001) is too high for a well-trained production model. LR calibration is in progress; a follow-up submission with working TTT targeting a competitive val_bpb is planned, pending compute credits.

Architecture changes vs baseline

  • SmearGate residual mixing gate
  • Orthogonal initialisation
  • Bigram hash embeddings (2048 buckets; see the sketch after this list)
  • Grouped-query attention (GQA, 8 query / 4 KV heads)
  • Stochastic weight averaging (SWA) over the final 5065 steps
  • int6 quantisation + zstd-22 compression (5.14× overall)
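
A minimal sketch of the bigram hash embedding idea, assuming PyTorch. Only the 2048-bucket count comes from the submission; `BigramHashEmbedding` and the multiplicative hash are illustrative choices:

```python
import torch
import torch.nn as nn


class BigramHashEmbedding(nn.Module):
    """Adds a hashed-bigram embedding to the usual token embedding.
    Each (previous token, current token) pair is hashed into one of
    `num_buckets` learned rows, giving cheap local-context features."""

    def __init__(self, vocab_size: int, dim: int, num_buckets: int = 2048):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        self.bigram = nn.Embedding(num_buckets, dim)
        self.num_buckets = num_buckets

    def forward(self, ids: torch.Tensor) -> torch.Tensor:  # ids: (batch, seq)
        prev = torch.roll(ids, shifts=1, dims=1)
        prev[:, 0] = 0  # first token has no predecessor; use a fixed sentinel
        # cheap multiplicative hash of the (prev, current) pair into a bucket
        h = (prev * 1000003 + ids) % self.num_buckets
        return self.tok(ids) + self.bigram(h)
```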

Compute credits request

We are applying for OpenAI compute credits via the grant form to complete TTT LR calibration.

Commit message:

Non-record submission introducing per-document Test-Time Training via
rank-128 LoRA adapters. Base score val_bpb=1.23637747 with TTT disabled;
TTT LR calibration in progress for follow-up competitive submission.

Architecture: 11L dim=512 GQA SmearGate OrthoInit SWA int6+zstd22
Novel: TTT LoRA adaptation during 10-min eval budget (orthogonal to all
current leaderboard entries)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>