Starling LM 7B Beta Openvino Int8 by fakezeta


Starling LM 7B Beta Openvino Int8 is an open-source language model by fakezeta. Features: 7b LLM, VRAM: 7.3GB, Context: 8K, License: apache-2.0, LLM Explorer Score: 0.13.

Tags: arxiv:1909.08593, autotrain-compatible, conversational, dataset:berkeley-nest/nectar, en, endpoints-compatible, mistral, openvino, region:us, reward-model, rlaif, rlhf

Starling LM 7B Beta Openvino Int8 Benchmarks

Starling LM 7B Beta Openvino Int8 (fakezeta/Starling-LM-7B-beta-openvino-int8)

Starling LM 7B Beta Openvino Int8 Parameters and Internals

Model Type 
Language Model
Use Cases 
Areas:
Research, RLHF/RLAIF applications
Primary Use Cases:
Language Model usage with custom prompt templates
Limitations:
Performance degrades if not using the provided chat template
Considerations:
Set temperature = 0 to reduce verbosity
Additional Notes 
Available for free test on LMSYS Chatbot Arena
Supported Languages 
English
Training Details 
Data Sources:
berkeley-nest/Nectar
Methodology:
Finetuned with RLHF / RLAIF using PPO
Context Length:
8192
Model Architecture:
Model is a finetuned variant of Openchat-3.5-0106 based on Mistral-7B-v0.1
Input Output 
Input Format:
Same chat template as Openchat-3.5-0106, with separate "GPT4 Correct" and "Code" conversation formats
Accepted Modalities:
Text
Output Format:
Text responses
Performance Tips:
Use the exact prompt template to prevent performance degradation
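The template requirement above can be sketched as a small helper. This is illustrative only: `build_prompt` and `EOT` are hypothetical names, and the exact "GPT4 Correct" role strings are an assumption based on the Openchat-3.5-0106 format this card references.

```python
# Sketch of the OpenChat-style prompt format that Starling-LM-7B-beta
# inherits from Openchat-3.5-0106 (assumption: "GPT4 Correct" role names
# and <|end_of_turn|> turn separators, per the upstream model card).
EOT = "<|end_of_turn|>"

def build_prompt(user_message, history=()):
    """Render a conversation into the single string the model expects.

    history: iterable of (user, assistant) turn pairs.
    """
    parts = []
    for user, assistant in history:
        parts.append(f"GPT4 Correct User: {user}{EOT}")
        parts.append(f"GPT4 Correct Assistant: {assistant}{EOT}")
    parts.append(f"GPT4 Correct User: {user_message}{EOT}")
    # Leave the assistant turn open so the model completes it.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

print(build_prompt("Hello"))
# GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:
```

In practice, `tokenizer.apply_chat_template(...)` from transformers produces this string from the template shipped with the tokenizer, which is the safer route when available.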
LLM Name: Starling LM 7B Beta Openvino Int8
Repository 🤗: https://huggingface.co/fakezeta/Starling-LM-7B-beta-openvino-int8
Model Size: 7b
Required VRAM: 7.3 GB
Updated: 2025-10-28
Maintainer: fakezeta
Model Type: mistral
Model Files: 7.3 GB
Supported Languages: en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Padding Token: <|end_of_turn|>
Vocabulary Size: 32002
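Because the weights are an OpenVINO int8 export, they load through optimum-intel rather than plain transformers. A minimal sketch, assuming `optimum[openvino]` is installed and the ~7.3 GB of weights can be fetched from the Hub; `generation_config` is a hypothetical helper encoding the temperature = 0 advice from the notes above, and the prompt string assumes the Openchat-3.5-0106 template.

```python
def generation_config(max_new_tokens: int = 256) -> dict:
    # temperature = 0 per the model notes: greedy decoding (do_sample=False)
    # to curb verbosity.
    return {"max_new_tokens": max_new_tokens, "do_sample": False}

def main():
    # Imports kept local so the sketch only needs optimum-intel when run.
    from optimum.intel import OVModelForCausalLM  # pip install "optimum[openvino]"
    from transformers import AutoTokenizer

    model_id = "fakezeta/Starling-LM-7B-beta-openvino-int8"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = OVModelForCausalLM.from_pretrained(model_id)  # loads the int8 IR

    prompt = "GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, **generation_config())
    print(tokenizer.decode(output[0], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

The int8 quantization is what brings the VRAM requirement down to the listed 7.3 GB; the same code would load a full-precision OpenVINO export unchanged.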

Best Alternatives to Starling LM 7B Beta Openvino Int8

Best Alternatives | Context / RAM | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 254 | 20
MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 8444 | 53
SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 14 | 1
SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1
...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 3779 | 16
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 8082 | 16
Hebrew Mistral 7B 200K | 256K / 30 GB | 1251 | 15
Astral 256K 7B V2 | 250K / 14.4 GB | 5 | 0
Astral 256K 7B | 250K / 14.4 GB | 5 | 0
Note: green Score (e.g. "73.2") means that the model is better than fakezeta/Starling-LM-7B-beta-openvino-int8.

Original data from HuggingFace, OpenCompass and various public git repos.