Decentralized text embeddings on Tangle — operators serve embedding models via HuggingFace TEI or any OpenAI-compatible endpoint.
A Tangle Blueprint enabling operators to serve text embedding models with anonymous payments through shielded credits. High-volume, low-cost embeddings for RAG, search, and classification workloads.
Dual payment paths:
- On-chain jobs via TangleProducer — verifiable results on Tangle
- x402 HTTP — fast, private embeddings at `/v1/embeddings`

OpenAI Embeddings API compatible. Built on the Blueprint SDK with TEE support.
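Since the HTTP path is OpenAI-compatible, a request body for `/v1/embeddings` follows the OpenAI Embeddings API shape (model name illustrative; any x402 payment headers are omitted here):

```json
{
  "model": "BAAI/bge-large-en-v1.5",
  "input": ["the quick brown fox", "jumped over the lazy dog"]
}
```

The response mirrors OpenAI's format: a `data` array of objects, each carrying an `embedding` vector and its `index`, plus token `usage` counts.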
| Component | Language | Description |
|---|---|---|
| `operator/` | Rust | Operator binary — wraps TEI or an embedding endpoint, HTTP server, SpendAuth billing |
| `contracts/` | Solidity | EmbeddingBSM — dimension validation, per-1K-token pricing |
Any model compatible with HuggingFace Text Embeddings Inference:
- BGE (large, base, small)
- E5 (large, base, small)
- Jina Embeddings v3
- GTE (Qwen)
- Nomic Embed
Pricing is per 1K tokens. Operators compete on price — typically 5-10x cheaper than OpenAI's embedding API.
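As a sketch of how per-1K-token billing works out, the helper below computes a job's cost. The function name and the round-up-to-whole-increments policy are assumptions for illustration, not the operator's actual billing code:

```rust
// Hypothetical helper: cost of an embedding job priced per 1K tokens.
// `price_per_1k` is in the smallest credit unit. Rounding up to whole
// 1K-token increments is an assumed billing policy.
fn job_cost(tokens: u64, price_per_1k: u64) -> u64 {
    // Ceiling division: partial thousands are billed as a full increment.
    tokens.div_ceil(1000) * price_per_1k
}

fn main() {
    // 2,500 tokens at 10 credits per 1K tokens → 3 increments → 30 credits.
    assert_eq!(job_cost(2500, 10), 30);
    assert_eq!(job_cost(1000, 10), 10);
}
```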
Add `features = ["tee"]` to the `blueprint-sdk` dependency in `Cargo.toml`. The `TeeLayer` middleware transparently attaches attestation metadata when running in a Confidential VM, and passes requests through unchanged when no TEE is configured.
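The `Cargo.toml` change might look like this (the version number is illustrative, not the SDK's actual release):

```toml
[dependencies]
# Enable TEE support in the Blueprint SDK (version illustrative).
blueprint-sdk = { version = "0.1", features = ["tee"] }
```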
```bash
# Start TEI (example with BGE-large)
docker run -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:latest \
  --model-id BAAI/bge-large-en-v1.5

# Configure operator
cp config/operator.example.toml config/operator.toml

# Run operator
EMBEDDING_ENDPOINT=http://localhost:8080 cargo run --release
```

- Blueprint SDK — framework for building Blueprints
- vLLM Inference Blueprint — text inference
- Voice Inference Blueprint — TTS/STT
- Image Generation Blueprint — image generation
- Video Generation Blueprint — video generation
