Independent AI Systems Engineer from Lahore, Pakistan, building production-grade, CPU-efficient AI that runs on constrained hardware, not just benchmark rigs. I own the full stack: model optimization, adversarial hardening, FastAPI deployment, and Linux packaging.
- Focus: Agentic orchestration · Adversarial ML governance · Edge TinyML · Symbolic AI (LRLRE)
- Philosophy: If it needs a GPU cluster to run, it isn't finished yet
- Studying: Intermediate (Science), Govt. Islamia Associate College (2024–2026)
- AI & Machine Learning
- Edge & Optimization
- Security & Robustness
- Systems & Deployment
| Project | What It Does | Key Result |
|---|---|---|
| ORCHAT Enterprise | Institutional-grade AI orchestration CLI framework, Debian-packaged | 16 ms cold startup · <10 MB idle |
| Adversarial ML Governance Engine | Multi-model security layer with real-time input validation and autonomous threat adaptation | 98.3% robust accuracy · 3 ms latency · 64% cost reduction |
| Edge Voice Intelligence (TinyML) | INT8-quantized keyword-spotting CNN with a real-time DSP pipeline, fully offline | 75.6 KB model · 2.99 ms inference |
| Federated XAI Healthcare Predictor | Privacy-preserving cardiac risk prediction with SHAP/LIME explainability | 94.1% accuracy · 0.967 ROC-AUC |
| Enterprise AGI Evolutionary Pathway | 82-cell agentic capstone with self-optimizing Meta-Planners and Reflection Solver loops | 97.7% latency reduction (18.09 s → 0.42 s) |
| LRLRE | Fully symbolic multilingual reasoning engine (EN/FR/JA/KO/ZH) with no neural networks | v13.0 production · Robinson Unification + forward/backward chaining |
| STERBEN: Private AI Ecosystem | Local RAG pipeline with ChromaDB, hybrid retrieval, and CPU-only LLM inference | 6.23 tokens/s · 32k context · 480+ indexed chunks |
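The style of reasoning behind LRLRE, Robinson unification combined with chaining over symbolic facts, can be illustrated with a short textbook-style sketch. This is a generic Python unifier of my own (variable convention, function names, and the family example are illustrative assumptions, not LRLRE's actual API):

```python
# Minimal Robinson-style unification over tuple-encoded terms.
# Variables are strings starting with "?"; compound terms are tuples
# like ("parent", "?x", "bob"). Occurs-check is omitted for brevity.

def is_var(term):
    return isinstance(term, str) and term.startswith("?")

def walk(term, subst):
    # Follow variable bindings until we reach a non-bound term.
    while is_var(term) and term in subst:
        term = subst[term]
    return term

def unify(x, y, subst=None):
    """Return a substitution dict unifying x and y, or None on failure."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        # Unify element-wise, threading the substitution through.
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # clash: distinct constants or mismatched arity

bindings = unify(("parent", "?x", "bob"), ("parent", "alice", "?y"))
# bindings == {"?x": "alice", "?y": "bob"}
```

A forward-chaining loop then repeatedly unifies rule premises against the fact base and asserts the instantiated conclusions until a fixpoint is reached.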
- Google / Kaggle
- Deloitte (via Forage)
- TATA Group (via Forage)
- Other
| System | Metric | Result |
|---|---|---|
| Adversarial ML Governance Engine | Robust accuracy (FGSM/PGD) | 98.3% |
| Edge Voice Intelligence | CNN model size (INT8) | 75.6 KB |
| Edge Voice Intelligence | Inference latency | 2.99 ms |
| ORCHAT Enterprise | Cold startup time | 16 ms |
| AGI Evolutionary Pathway | Latency reduction | 97.7% (18.09 s → 0.42 s) |
| Federated Healthcare Predictor | ROC-AUC | 0.967 |
| STERBEN RAG Pipeline | CPU inference speed | 6.23 tokens/s |
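The hybrid retrieval mentioned for the STERBEN RAG pipeline typically means fusing a lexical score with an embedding-similarity score before ranking. A minimal, dependency-free sketch of that fusion (the weighting scheme and function names are my own illustration, not the project's code):

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def keyword_score(query, doc_text):
    # Fraction of document tokens that appear in the query (toy lexical score;
    # a real pipeline would use BM25 or similar).
    query_terms = set(query.lower().split())
    tokens = doc_text.lower().split()
    return sum(1 for t in tokens if t in query_terms) / len(tokens) if tokens else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.5):
    """Rank (text, embedding) pairs by a weighted blend of both scores."""
    scored = [
        (alpha * cosine(query_vec, emb) + (1 - alpha) * keyword_score(query, text), text)
        for text, emb in docs
    ]
    return [text for _, text in sorted(scored, reverse=True)]

docs = [
    ("cpu inference on edge", [1.0, 0.0]),
    ("gpu training cluster", [0.0, 1.0]),
]
ranked = hybrid_rank("cpu edge inference", [1.0, 0.0], docs)
# ranked[0] == "cpu inference on edge"
```

Blending the two signals lets exact-term matches rescue queries where the embedding model is weak, and vice versa; in a ChromaDB-backed setup the embeddings would come from the vector store rather than being supplied inline.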
I'm open to collaborations in the following areas:
- Adversarial robustness research: novel attack/defense evaluations on constrained hardware
- Agentic workflow design: multi-agent orchestration for real-world deployment
- Edge AI: TinyML pipelines for embedded or offline environments
- Low-resource NLP: symbolic or hybrid approaches for non-English languages
- Open-source MLOps tooling: CLI tools, packaging, and CI/CD for ML systems
If you're building something in these spaces, feel free to reach out.


