New Jina AI small language models deliver unmatched quality and efficiency on search and semantic tasks
Elastic, the Search AI Company, announced the availability of jina-embeddings-v5-text, a family of two small, Elasticsearch-native multilingual embedding models at 0.2B and 0.6B parameters that deliver state-of-the-art performance across key search and semantic tasks.
Despite their compact size, they outperform significantly larger models with 7B to 14B parameters and achieve best-in-class results on the MMTEB (Multilingual MTEB) benchmark among models of comparable size and purpose. Their small footprint enables high-quality hybrid search at lower infrastructure cost, faster query response, and new deployment scenarios where memory and compute budgets are tight, including edge devices and other resource-constrained environments.
The jina-embeddings-v5-text models are available through multiple channels: as open-weight models on HuggingFace for self-hosted deployment via vLLM, llama.cpp, or MLX, and on Elastic Inference Service (EIS), a GPU-accelerated inference-as-a-service that makes it easy to run fast, high-quality inference without complex setup. By bringing the Jina v5 family to EIS, users get a complete data platform that consolidates state-of-the-art multilingual embedding models, a high-performance vector database, and more into one unified enterprise stack across cloud and on-premises environments.
“Vector search, RAG, and AI agents depend on high-quality retrieval,” said Steve Kearns, general manager, Search, Elastic. “With the addition of the Jina v5’s multilingual embeddings, Elasticsearch continues to be the platform of choice for end-to-end context engineering.”
The family includes two models, jina-embeddings-v5-text-nano (239M parameters) and jina-embeddings-v5-text-small (677M parameters). Both models are optimized for four common tasks in search and agentic applications:
- Retrieval: Allowing users to query with natural language and find the most relevant documents
- Text Matching: Allowing users to find duplicates in their data, and align paraphrases or translations
- Classification: Allowing users to categorize documents, detect sentiment, and find anomalies
- Clustering: Allowing users to group documents by topic, subject, or meaning
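The retrieval task above can be illustrated with a minimal sketch. In a real deployment the vectors would be produced by jina-embeddings-v5-text (loaded from HuggingFace or served via EIS); here toy vectors stand in so the ranking logic is self-contained, and the specific embedding workflow is an assumption, not a confirmed API.

```python
import numpy as np

# Toy stand-ins for embeddings; in practice these vectors would come from
# jina-embeddings-v5-text (e.g. self-hosted via vLLM, llama.cpp, or MLX,
# or served through Elastic Inference Service).
doc_embeddings = np.array([
    [0.9, 0.1, 0.0, 0.0],   # doc 0: about "search"
    [0.0, 0.8, 0.2, 0.0],   # doc 1: about "pricing"
    [0.7, 0.2, 0.1, 0.0],   # doc 2: also about "search"
])
query_embedding = np.array([1.0, 0.0, 0.0, 0.0])  # query: "search"

def cosine_rank(query: np.ndarray, docs: np.ndarray) -> list[int]:
    """Return document indices ordered by cosine similarity to the query."""
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    return [int(i) for i in np.argsort(-sims)]

ranking = cosine_rank(query_embedding, doc_embeddings)
```

In production, Elasticsearch's vector database performs this nearest-neighbor ranking at scale; the snippet only shows the similarity computation that underlies embedding-based retrieval.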