OLMoE 1B 7B 0125 Instruct Enron by Tomasal


OLMoE 1B 7B 0125 Instruct Enron is an open-source language model by Tomasal. Features: 1b model size, 27.7 GB required VRAM, 4K context, apache-2.0 license, instruction-based, LLM Explorer Score 0.19.
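To try the checkpoint locally, a minimal sketch using the Hugging Face transformers API (the card lists transformers 4.51.3). The full float32 checkpoint needs roughly 27.7 GB of memory, so the download and generation are wrapped in a helper function rather than run at import:

```python
# Sketch only: loading Tomasal/OLMoE-1B-7B-0125-Instruct-enron with
# Hugging Face transformers. Requires ~27.7 GB RAM/VRAM for float32 weights.
REPO = "Tomasal/OLMoE-1B-7B-0125-Instruct-enron"

def generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Download the checkpoint and run a short generation."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO)      # GPTNeoXTokenizer
    model = AutoModelForCausalLM.from_pretrained(REPO)   # OlmoeForCausalLM
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

Passing `torch_dtype` and `device_map` arguments to `from_pretrained` can reduce the memory footprint (e.g. bfloat16 halves it), but the card's 27.7 GB figure assumes the stock float32 weights.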

Tags: Arxiv:2106.09685, Base model:allenai/olmoe-1b-7b..., Base model:finetune:allenai/ol..., Conversational, Dataset:llm-pbe/enron-email, Endpoints compatible, Enron, Finetuned, Instruct, Olmoe, Region:us, Safetensors, Sharded, Tensorflow

OLMoE 1B 7B 0125 Instruct Enron Benchmarks

OLMoE 1B 7B 0125 Instruct Enron (Tomasal/OLMoE-1B-7B-0125-Instruct-enron)

OLMoE 1B 7B 0125 Instruct Enron Parameters and Internals

LLM Name: OLMoE 1B 7B 0125 Instruct Enron
Repository 🤗: https://huggingface.co/Tomasal/OLMoE-1B-7B-0125-Instruct-enron
Model Name: OLMoE-1B-7B-0125-Instruct-enron
Base Model(s): OLMoE 1B 7B 0125 Instruct (allenai/OLMoE-1B-7B-0125-Instruct)
Model Size: 1b
Required VRAM: 27.7 GB
Updated: 2026-04-11
Maintainer: Tomasal
Model Type: olmoe
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-6), 5.0 GB (2-of-6), 5.0 GB (3-of-6), 5.0 GB (4-of-6), 5.0 GB (5-of-6), 2.7 GB (6-of-6)
Model Architecture: OlmoeForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.51.3
Tokenizer Class: GPTNeoXTokenizer
Padding Token: <pad>
Vocabulary Size: 50304
Torch Data Type: float32
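As a sanity check on the numbers above: the six safetensors shards sum to the listed required VRAM, and at float32 (4 bytes per parameter) 27.7 GB corresponds to roughly 6.9 billion parameters. That matches OLMoE-1B-7B's 7B total parameters; the "1B" in the name refers to the active parameters per token, since OLMoE is a mixture-of-experts model.

```python
# Sanity check on the card's numbers (shard sizes in GB, as listed above).
shards = [5.0, 5.0, 5.0, 5.0, 5.0, 2.7]     # the six safetensors files
total_gb = sum(shards)
assert abs(total_gb - 27.7) < 1e-9          # matches "Required VRAM: 27.7 GB"

# float32 = 4 bytes per parameter, so ~6.9B total parameters,
# consistent with OLMoE-1B-7B (7B total, ~1B active per token).
params_billion = total_gb / 4
print(total_gb, round(params_billion, 2))   # 27.7, ~6.93
```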

Best Alternatives to OLMoE 1B 7B 0125 Instruct Enron

| Best Alternatives | Context / RAM | Downloads / Likes |
|---|---|---|
| OLMoE 1B 7B 0125 Instruct | 4K / 13.8 GB | 5361261 |
| OLMoE 1B 7B 0924 Instruct | 4K / 13.8 GB | 1734895 |
| Olmoe Upscale | 4K / 20.6 GB | 60 |
| OLMoE 1B 7B 0125 DPO | 4K / 13.8 GB | 312 |


Original data from HuggingFace, OpenCompass and various public git repos.