Language models for the edge

Tiny edge language models for private, explainable inference on constrained hardware, prioritising institutional control over raw capability.

Purpose

Private, low‑latency, explainable inference on constrained devices. Our LLM family prioritises transparency, energy efficiency, and institutional control over raw capability.

Private

On‑device inference. Your data never leaves your infrastructure.

Low‑latency

Optimised for edge deployment with minimal compute requirements.

Explainable

Clear reasoning paths and decision audit trails.

T‑E Model Family

Tunet Edge models follow a clear naming convention: T-E-{size}/{year.release}

S/M/L denote parameter budgets optimised for different edge deployment scenarios.
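
As an illustration, a model identifier can be split mechanically into its parts. The parser below is a sketch of the convention as stated; it is not part of any shipped SDK.

```python
import re

# Illustrative parser for the T-E-{size}/{year.release} convention.
# Not an official tool; shown only to make the format concrete.
NAME_PATTERN = re.compile(r"^T-E-(?P<size>[SML])/(?P<year>\d{4})\.(?P<release>\d+)$")

def parse_model_name(name: str) -> dict:
    """Split a T-E model identifier into size, year, and release number."""
    match = NAME_PATTERN.match(name)
    if match is None:
        raise ValueError(f"not a valid T-E model name: {name!r}")
    return {
        "size": match["size"],             # S, M, or L
        "year": int(match["year"]),        # e.g. 2025
        "release": int(match["release"]),  # e.g. 1 for the year's first release
    }

print(parse_model_name("T-E-S/2025.1"))
# {'size': 'S', 'year': 2025, 'release': 1}
```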

T‑E‑S/2025.1

Ultra‑compact model for IoT and mobile devices. Optimised for classification and simple reasoning tasks.

Parameters: ~125M
Context window: 2K tokens
Inference speed: ~15 tok/s
Memory req: ~512MB

T‑E‑M/2025.1

Balanced model for edge servers and workstations. Good general‑purpose reasoning with safety guardrails.

Parameters: ~1.3B
Context window: 8K tokens
Inference speed: ~8 tok/s
Memory req: ~3GB

T‑E‑L/2025.1

Full‑capability model for edge clusters. Complex reasoning with institutional safety requirements.

Parameters: ~7B
Context window: 32K tokens
Inference speed: ~3 tok/s
Memory req: ~16GB
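
The memory figures above are broadly consistent with a simple rule of thumb: weights times bytes per parameter, plus runtime overhead. The sketch below assumes fp32 weights for T‑E‑S, fp16 for T‑E‑M and T‑E‑L, and roughly 15% overhead; these precisions are assumptions chosen to match the published figures, not official sizing guidance.

```python
def estimate_memory_gb(params: float, bytes_per_param: float, overhead: float = 0.15) -> float:
    """Rough footprint estimate: weights plus a fractional runtime overhead."""
    return params * bytes_per_param * (1 + overhead) / 1e9

# Assumed precisions: fp32 (4 bytes) for T-E-S, fp16 (2 bytes) for T-E-M and T-E-L.
print(f"T-E-S: ~{estimate_memory_gb(125e6, 4):.2f} GB")  # ~0.57 GB vs ~512MB listed
print(f"T-E-M: ~{estimate_memory_gb(1.3e9, 2):.2f} GB")  # ~3.0 GB vs ~3GB listed
print(f"T-E-L: ~{estimate_memory_gb(7e9, 2):.2f} GB")    # ~16.1 GB vs ~16GB listed
```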

Model Card Standards

Every model comes with complete documentation following our transparency framework.

Summary

  • Intended use cases
  • Explicit non‑goals
  • Safety boundaries

Technical Specs

  • Parameter count
  • Context window
  • Quantisation options
  • Supported runtimes

Training

  • Data policy
  • Filtering methodology
  • Epochs & compute
  • Reproducibility notes

Evaluations

  • Task‑specific benchmarks
  • On‑device latencies
  • Energy profiling
  • Methodology notes

Safety

  • Guardrails & refusal policies
  • Jailbreak handling
  • Known failure modes
  • Bias assessments

Distribution

  • Licensing terms
  • Attribution requirements
  • Downloads & checksums
  • Redistribution policy
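
One way the six sections above could map onto a machine-readable record is sketched below. The field names are hypothetical; they are not a published Tunet schema.

```python
from dataclasses import dataclass, field

# Hypothetical machine-readable mirror of the model card sections above.
# Field names are illustrative, not a published schema.
@dataclass
class ModelCard:
    # Summary
    intended_use: list[str]
    non_goals: list[str]
    safety_boundaries: list[str]
    # Technical specs
    parameters: str           # e.g. "~1.3B"
    context_window: str       # e.g. "8K tokens"
    quantisation: list[str]   # e.g. ["int8", "int4"]
    runtimes: list[str]
    # Training
    data_policy: str
    filtering_methodology: str
    # Evaluations
    benchmarks: dict[str, float] = field(default_factory=dict)
    # Safety
    known_failure_modes: list[str] = field(default_factory=list)
    # Distribution
    license: str = ""
    checksum_sha256: str = ""
```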

Assurance Standards

  • Reproducible on‑device benchmarks with fixed seeds
  • Deterministic builds where feasible
  • Provenance hashes for releases
  • Evaluation scripts with transparency reports
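
As one concrete example, a release's provenance hash can be verified before the weights are loaded. The snippet below is a generic SHA‑256 verification pattern, not a Tunet-specific tool, and the file name in the usage comment is hypothetical.

```python
import hashlib
from pathlib import Path

def verify_release(weights_path: str, published_sha256: str) -> bool:
    """Compare a downloaded weights file against its published provenance hash."""
    digest = hashlib.sha256()
    with Path(weights_path).open("rb") as f:
        # Stream in 1 MiB chunks so large weight files never sit fully in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == published_sha256

# Hypothetical usage:
# assert verify_release("t-e-m-2025.1.safetensors", "<published hash>")
```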

Framework Integration

All models integrate seamlessly with our Frameworks and APIs:

  • SDKs and adapters in our framework stack
  • Direct endpoints in Content/Translate APIs
  • Built‑in evaluation harness compatibility
  • Ready for ecosystem deployment
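
A loading-and-generation call might look like the sketch below. The tunet_edge package, the load_model entry point, and every argument shown are assumptions for illustration, not a documented API.

```python
# Hypothetical SDK usage; names and arguments are illustrative assumptions.
from tunet_edge import load_model  # assumed package and entry point

model = load_model("T-E-M/2025.1", quantisation="int8", device="cpu")
reply = model.generate(
    "Summarise this maintenance log in two sentences.",
    max_tokens=128,
)
print(reply.text)
print(reply.audit_trail)  # assumed field exposing the decision audit trail
```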

Ready for edge deployment?

Browse model cards, download weights, or request a technical briefing on edge ML strategy.