I build ML systems that are meant to last, from LoRA fine-tuning experiments and agentic pipelines to deployed production automation. Final-year B.E. (AI/ML) student at VTU, CGPA 9.23.
My work spans LLM evaluation, self-supervised learning, representation learning, and practical MLOps. I care about reproducibility, clean architecture, and getting the metrics right before claiming anything.
|
Upload an ML paper PDF, get a runnable scaffold with training script, Docker, configs, and a reproducible ZIP — with anti-hype implementation notes surfaced automatically.
|
50+ LoRA experiments across three tasks and multiple model families to answer one question: fine-tune or prompt? Ships a CLI tool that gives a concrete recommendation based on your task, data size, latency budget, and cost.
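To make the idea concrete, here is a minimal sketch of what such a fine-tune-vs-prompt decision heuristic could look like. The factor names and thresholds below are illustrative assumptions, not the CLI's actual logic.

```python
# Hypothetical sketch of a fine-tune-vs-prompt recommendation heuristic.
# Thresholds and field names are illustrative, not the tool's real rules.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    n_labeled_examples: int   # size of available training data
    latency_budget_ms: int    # per-request latency target
    monthly_queries: int      # expected traffic, drives the cost tradeoff

def recommend(profile: TaskProfile) -> str:
    """Return 'fine-tune' or 'prompt' from simple illustrative rules."""
    # Very little data: fine-tuning tends to overfit, so prompting wins.
    if profile.n_labeled_examples < 500:
        return "prompt"
    # Tight latency budgets favor a small fine-tuned model over a large
    # prompted model carrying long few-shot contexts.
    if profile.latency_budget_ms < 200:
        return "fine-tune"
    # High traffic amortizes one-off training cost across many queries.
    if profile.monthly_queries > 100_000:
        return "fine-tune"
    return "prompt"

print(recommend(TaskProfile(10_000, 150, 500_000)))  # fine-tune
```

The real tool would weigh more factors (task type, base-model quality, per-token pricing), but the shape of the decision is the same.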
|
|
Terminal-native multi-agent reasoning arena where specialized agents (strategist, reasoner, critic, judge) debate in structured rounds and produce a scored verdict with tradeoffs.
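The round structure can be sketched as a small orchestration loop. The agent roles come from the description above, but the turn order and the judge-at-the-end wiring are assumptions about the implementation.

```python
# Sketch of a structured multi-agent debate loop. Roles (strategist,
# reasoner, critic, judge) are from the project; the round logic is assumed.
from typing import Callable

# An agent maps (question, transcript-so-far) to its next message.
Agent = Callable[[str, list[str]], str]

def debate(question: str, agents: dict[str, Agent], rounds: int = 2) -> str:
    """Run debate rounds, then ask the judge for the final verdict."""
    transcript: list[str] = []
    for r in range(rounds):
        # Each debating agent speaks in turn, seeing the full transcript.
        for name in ("strategist", "reasoner", "critic"):
            msg = agents[name](question, transcript)
            transcript.append(f"[round {r + 1}] {name}: {msg}")
    # The judge reads everything and produces the scored verdict.
    return agents["judge"](question, transcript)
```

In the real arena each `Agent` would wrap an LLM call with a role-specific system prompt; stubs are enough to show the control flow.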
|
Full-stack RAG system built around a recursive self-refinement loop — retrieve, answer, critique, refine query, repeat — with ablation mode, calibration tracking, and failure logging.
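The loop named above (retrieve, answer, critique, refine query, repeat) can be sketched as follows. The function signatures are placeholders: in the real system they would wrap a retriever and LLM calls.

```python
# Sketch of a recursive self-refinement RAG loop. The callables are
# placeholders for a retriever and LLM-backed answer/critique/refine steps.
def self_refine(query, retrieve, answer, critique, refine, max_iters=3):
    """Iterate until the critique passes or the iteration budget is spent."""
    for _ in range(max_iters):
        docs = retrieve(query)              # fetch supporting passages
        candidate = answer(query, docs)     # draft an answer from them
        ok, feedback = critique(query, candidate, docs)
        if ok:                              # critique accepts the answer
            return candidate
        query = refine(query, feedback)     # rewrite the query and retry
    return candidate                        # best effort after the budget
```

The ablation mode mentioned above would amount to swapping individual steps (e.g. an identity `refine`) and comparing outcomes, which is why keeping each step behind its own callable pays off.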
|
|
Research prototype for flow-guided video inpainting. Uses RAFT optical flow and cycle-consistent temporal optimization to remove or replace objects without the usual flickering artifacts.
|
Empirical study measuring how LLM answers shift under semantic paraphrasing. Introduces the Answer Invariance Score; a mean score of 0.6968 for Mistral-7B on SQuAD reveals high sensitivity to phrasing.
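One plausible way to compute such an invariance metric, purely as an illustration and not necessarily the study's actual definition, is the mean token-level F1 between the answer to the original question and the answers to its paraphrases:

```python
# Illustrative sketch of a phrasing-invariance metric. This is an assumed
# formulation, not necessarily the study's Answer Invariance Score: mean
# token-level F1 between the original answer and each paraphrase's answer.
from collections import Counter

def token_f1(a: str, b: str) -> float:
    """Standard SQuAD-style token overlap F1 between two answer strings."""
    ta, tb = a.lower().split(), b.lower().split()
    common = sum((Counter(ta) & Counter(tb)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(ta), common / len(tb)
    return 2 * precision * recall / (precision + recall)

def invariance_score(original_answer: str, paraphrase_answers: list[str]) -> float:
    """Average agreement of paraphrase answers with the original answer.

    1.0 means answers never change under paraphrasing; lower values mean
    the model is sensitive to how the question is phrased.
    """
    scores = [token_f1(original_answer, a) for a in paraphrase_answers]
    return sum(scores) / len(scores)
```

Under a definition like this, a mean around 0.70 would indicate that roughly a third of answer content shifts when only the phrasing of the question changes.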
|
