MetaMath Cybertron Starling by Q-bert


Tags: Autotrain compatible, Base model:berkeley-nest/starl..., Base model:merge:berkeley-nest..., Base model:merge:q-bert/metama..., Base model:q-bert/metamath-cyb..., Dataset:meta-math/metamathqa, En, Endpoints compatible, Math, Merge, Mistral, Region:us, Safetensors, Sharded, Tensorflow

MetaMath Cybertron Starling Benchmarks

MetaMath Cybertron Starling (Q-bert/MetaMath-Cybertron-Starling)

MetaMath Cybertron Starling Parameters and Internals

Model Type: text-generation
Supported Languages: en (high)
Input Format: ChatML
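The card lists ChatML as the input format. Below is a minimal sketch of assembling such a prompt, assuming the standard ChatML delimiters (`<|im_start|>` / `<|im_end|>`); the chat template shipped with the model on Hugging Face is authoritative.

```python
# Sketch of building a ChatML-style prompt (assumed delimiters; verify
# against the model's own chat template before use).
def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful math assistant."},
    {"role": "user", "content": "What is 12 * 7?"},
])
print(prompt)
```

The trailing open `assistant` turn is what cues the model to generate its reply.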
LLM Name: MetaMath Cybertron Starling
Repository 🤗: https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling
Base Model(s): MetaMath Cybertron (Q-bert/MetaMath-Cybertron), Starling LM 7B Alpha (berkeley-nest/Starling-LM-7B-alpha)
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2025-08-19
Maintainer: Q-bert
Model Type: mistral
Model Files: 9.9 GB (1-of-2), 4.5 GB (2-of-2)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: bfloat16
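The listed 14.4 GB VRAM requirement is consistent with the two safetensors shards (9.9 GB + 4.5 GB) and with a ~7B-parameter model stored in bfloat16 at 2 bytes per parameter. A back-of-envelope check (weights only; activations and the KV cache add more on top):

```python
def approx_weight_memory_gb(n_params, bytes_per_param):
    """Approximate memory footprint of the model weights alone, in GB."""
    return n_params * bytes_per_param / 1e9

# ~7.2B parameters (approximate Mistral-7B count) at 2 bytes each (bfloat16)
weights_gb = approx_weight_memory_gb(7.2e9, 2)
print(f"{weights_gb:.1f} GB")  # -> 14.4 GB, matching the listed shard total
```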

Quantized Models of the MetaMath Cybertron Starling

Model | Likes | Downloads | VRAM
...taMath Cybertron Starling GGUF | 0 | 45 | 2 GB
...taMath Cybertron Starling GGUF | 12 | 151 | 3 GB
...taMath Cybertron Starling GPTQ | 1 | 6 | 4 GB
...etaMath Cybertron Starling AWQ | 1 | 8 | 4 GB
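The roughly 4 GB figures listed for the GPTQ and AWQ variants line up with 4-bit weight quantization of the same ~7B parameters: about bits-per-weight / 8 bytes per parameter, plus some overhead for quantization scales and zero-points. A rough sketch of that arithmetic:

```python
def approx_quantized_gb(n_params, bits_per_weight):
    """Rough size of quantized weights in GB (ignores scale/zero-point overhead)."""
    return n_params * bits_per_weight / 8 / 1e9

q4_gb = approx_quantized_gb(7.2e9, 4)
print(f"{q4_gb:.1f} GB")  # -> 3.6 GB, close to the ~4 GB listed once overhead is added
```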

Best Alternatives to MetaMath Cybertron Starling

Best Alternatives | Context / RAM | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 201 | 18
MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 8683 | 50
SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 94 | 1
SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1
...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 3779 | 16
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 7681 | 16
Hebrew Mistral 7B 200K | 256K / 30 GB | 1211 | 15
Astral 256K 7B V2 | 250K / 14.4 GB | 5 | 0
Astral 256K 7B | 250K / 14.4 GB | 5 | 0
Note: a green score (e.g. "73.2") in the source listing marks a model that outperforms Q-bert/MetaMath-Cybertron-Starling.

Rank the MetaMath Cybertron Starling Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124