MAmmoTH 7B AWQ by TheBloke


Tags: arXiv:2309.05653 · 4-bit · AutoTrain compatible · AWQ · Base model (quantized): tiger-lab... · Base model: tiger-lab/mammoth-7... · Dataset: tiger-lab/mathinstruct · en · llama · quantized · region:us · safetensors
Model Card on HF 🤗: https://huggingface.co/TheBloke/MAmmoTH-7B-AWQ

MAmmoTH 7B AWQ Benchmarks

Benchmark scores are shown as percentages indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

MAmmoTH 7B AWQ Parameters and Internals

Model Type 
llama
Use Cases 
Areas:
research, educational software, tutoring systems
Primary Use Cases:
solve general math problems
Limitations:
Performance varies with the complexity and specifics of math problems. Not all mathematical fields are comprehensively covered.
Training Details 
Data Sources:
https://huggingface.co/datasets/TIGER-Lab/MathInstruct
Methodology:
Hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, with extensive coverage of diverse mathematical fields, using Llama-2 as the base model (a brief PoT sketch follows this section).
Context Length:
4096
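
For context on the methodology above: a program-of-thought (PoT) rationale expresses the solution as a short program whose execution yields the answer, in contrast to a free-text chain-of-thought. The sketch below is a hypothetical illustration; the problem and exact formatting are invented and may differ from the actual MathInstruct data.

```python
# Illustrative program-of-thought (PoT) rationale: instead of a free-text
# chain of thought, the solution is a small program that computes the answer.
# The problem and layout here are hypothetical, not taken from MathInstruct.

def solve():
    # Problem: "A train travels 120 km in 1.5 hours. What is its average speed?"
    distance_km = 120
    time_hours = 1.5
    return distance_km / time_hours  # average speed = distance / time

print(solve())  # 80.0 (km/h)
```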
Input Output 
Input Format:
Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response:
Accepted Modalities:
text
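
A minimal sketch of filling the prompt template listed under Input Format; the helper name and exact newline placement are assumptions rather than details from the model card.

```python
# Hypothetical helper that fills the Alpaca-style template shown under
# "Input Format"; exact whitespace/newlines are an assumption.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the MAmmoTH prompt format."""
    return PROMPT_TEMPLATE.format(prompt=instruction)

print(build_prompt("What is the sum of the first 10 positive integers?"))
```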
LLM Name: MAmmoTH 7B AWQ
Repository 🤗: https://huggingface.co/TheBloke/MAmmoTH-7B-AWQ
Model Name: MAmmoTH 7B
Model Creator: TIGER-Lab
Base Model(s): MAmmoTH 7B (TIGER-Lab/MAmmoTH-7B)
Model Size: 7b
Required VRAM: 3.9 GB
Updated: 2025-09-11
Maintainer: TheBloke
Model Type: llama
Model Files: 3.9 GB
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: mit
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.29.1
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32001
Torch Data Type: float32
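
A minimal sketch of loading this 4-bit AWQ checkpoint with the AutoAWQ library and generating with the prompt format above; it assumes the autoawq and transformers packages and a CUDA GPU, and the sampling parameters are illustrative rather than values from the model card.

```python
# Minimal sketch, assuming the `autoawq` and `transformers` packages and a CUDA GPU.
# Sampling parameters are illustrative defaults, not values from the model card.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "TheBloke/MAmmoTH-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True, safetensors=True)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nA rectangle is 3 cm by 5 cm. What is its area?\n\n"
    "### Response:"
)
tokens = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(tokens, do_sample=True, temperature=0.7, top_p=0.95, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```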

Best Alternatives to MAmmoTH 7B AWQ

Best Alternatives | Context / RAM | Downloads / Likes
Smaugv0.1 AWQ | 195K / 19.3 GB | 41
Yarn Llama 2 7B 64K AWQ | 64K / 3.9 GB | 350
Calm2 7B Chat AWQ | 32K / 4.4 GB | 91
Llama 2 7B 32K Instruct AWQ | 32K / 3.9 GB | 132
... SWE Llama 7B Updated 4bit AWQ | 16K / 3.9 GB | 60
CodeLlama 7B Instruct AWQ | 16K / 3.9 GB | 15644
...Llama 7B Python Hf W4 G128 AWQ | 16K / 3.9 GB | 12050
Pandalyst 7B V1.2 AWQ | 16K / 3.9 GB | 41
Tora Code 7B V1.0 AWQ | 16K / 3.9 GB | 60
...eechless Tora Code 7B V1.0 AWQ | 16K / 3.9 GB | 41

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124