
Meet Lumee Model Family
Our models power conversations, agents, copilots, retrieval, moderation, and multimodal apps — in the wild.
Our Model Library

Lumee-8B-Base
A powerful base model trained from scratch for enterprise-scale language applications. With an extended 128,000-token context window and multilingual support, Lumee-8B-Base is built for teams and platforms that require scalable performance across long documents, complex queries, and advanced retrieval tasks.

Lumee-8B-Chat
Fine-tuned for smooth, natural dialogue and long-form conversations, Lumee-8B-Chat brings the base Lumee model into assistant-ready form. It combines supervised fine-tuning and reinforcement learning from AI feedback (RLAIF) to support trustworthy, helpful, and safe interactions, all within a 128,000-token context window.
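As a hedged sketch of how an assistant model like this is typically driven: the payload below follows the widely used chat-message convention (system and user roles). The endpoint, model id, and request shape are illustrative assumptions, not a documented Lumee API; the code only assembles the request body and sends nothing.

```python
import json

def build_chat_request(system_prompt, user_message, model="lumee-8b-chat"):
    """Assemble a chat-completion request body (hypothetical schema, not sent)."""
    return {
        "model": model,  # placeholder model id, assumed for illustration
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request(
    "You are a helpful assistant.",
    "Summarize this contract in three bullet points.",
)
print(json.dumps(payload, indent=2))
```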

Lumee-8B-Code
Fine-tuned on source code and developer workflows, Lumee-8B-Code adapts the base Lumee model for code generation, completion, and explanation, making it a natural fit for copilots and coding agents.

Lumee-8B-Edge
Optimized for efficient inference, Lumee-8B-Edge brings the Lumee family to resource-constrained environments, targeting on-device and low-latency deployments.

Lumee-8B-Embed
A text-embedding variant of the Lumee family, Lumee-8B-Embed maps documents and queries into dense vectors for semantic search, retrieval-augmented generation, clustering, and deduplication.
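To make the retrieval use case concrete: dense vectors from an embedding model are typically compared with cosine similarity, where a higher score means the texts are semantically closer. A minimal sketch, using toy vectors as stand-ins for real embedding output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors standing in for embeddings of a query
# and two candidate documents (real embeddings have hundreds of dims).
query     = [0.10, 0.90, 0.20]
doc_close = [0.12, 0.85, 0.25]  # semantically similar to the query
doc_far   = [0.90, 0.10, 0.00]  # semantically unrelated

# The related document scores higher, so it would rank first in retrieval.
print(cosine_similarity(query, doc_close) > cosine_similarity(query, doc_far))  # True
```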

Lumee-8B-Instruct
Fine-tuned to follow instructions, Lumee-8B-Instruct turns the base model into a general-purpose workhorse for single-turn tasks such as summarization, extraction, rewriting, and structured output.

Lumee-8B-Moderate
A safety-focused variant, Lumee-8B-Moderate classifies prompts and responses for policy-violating content, supporting moderation pipelines around both user input and model output.
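A minimal sketch of how a moderation classifier's output is commonly consumed downstream. The category names, score format, and threshold here are illustrative assumptions, not a documented Lumee output schema:

```python
def flag_content(scores, threshold=0.5):
    """Return the categories whose score meets or exceeds the threshold.

    `scores` is assumed to map category names to confidences in [0, 1];
    both the categories and the 0.5 cutoff are hypothetical examples.
    """
    return sorted(cat for cat, s in scores.items() if s >= threshold)

# Example scores a moderation model might return for one message.
scores = {"harassment": 0.82, "self-harm": 0.03, "violence": 0.10}
print(flag_content(scores))  # ['harassment']
```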

Lumee-MM-8B
A multimodal member of the family, Lumee-MM-8B extends the Lumee base model beyond text, accepting mixed-modality input for multimodal applications.

Lumee-VL-8B
A vision-language variant, Lumee-VL-8B combines image and text understanding to reason over visual input alongside text, covering tasks such as captioning, visual question answering, and document understanding.