AmberChat GPTQ by TheBloke


Tags: 4-bit · Base model: llm360/amberchat · Base model (quantized): llm360/am... · Dataset: icybee/share_gpt_90k_v... · Dataset: wizardlm/wizardlm_evol... · En · Gptq · Instruct · Llama · Quantized · Region: us · Safetensors
Model Card on HF 🤗: https://huggingface.co/TheBloke/AmberChat-GPTQ


AmberChat GPTQ Parameters and Internals

Model Type:
Language model with the same architecture as LLaMA-7B
Additional Notes:
Includes the GPTQ quantization options provided by TheBloke.
Supported Languages:
English
Training Details:
Data sources: WizardLM/WizardLM_evol_instruct_V2_196k, icybee/share_gpt_90k_v1
Data volume: 233k rows in total
Context length: 2048
Input/Output:
Input format: a chat prompt of the form 'USER: {prompt} ASSISTANT: '
Accepted modalities: text
Output format: text generation
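The prompt template above can be applied with a small helper. This is only a sketch based on the template quoted in the card; the function name is illustrative:

```python
def build_amberchat_prompt(user_message: str) -> str:
    """Wrap a user message in AmberChat's single-turn chat template.

    The model card specifies the format 'USER: {prompt} ASSISTANT: '
    (note the trailing space after 'ASSISTANT:', where generation begins).
    """
    return f"USER: {user_message} ASSISTANT: "
```

The string returned by this helper is what you would tokenize and pass to the model; the completion after "ASSISTANT: " is the chat reply.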
LLM Name: AmberChat GPTQ
Repository 🤗: https://huggingface.co/TheBloke/AmberChat-GPTQ
Model Name: AmberChat
Model Creator: LLM360
Base Model(s): AmberChat (LLM360/AmberChat)
Model Size: 6.7b
Required VRAM: 3.9 GB
Updated: 2025-12-13
Maintainer: TheBloke
Model Type: amber
Instruction-Based: Yes
Model Files: 3.9 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.36.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16

Best Alternatives to AmberChat GPTQ

Best Alternatives | Context / RAM | Downloads/Likes
...pseek Coder 6.7B Instruct GPTQ | 16K / 3.9 GB | 36525
Magicoder S DS 6.7B GPTQ | 16K / 3.9 GB | 107
Finance Chat GPTQ | 4K / 3.9 GB | 283
Law Chat GPTQ | 4K / 3.9 GB | 174
...LI LlumiX 32K Instruct F16 0.2 | 32K / 13.5 GB | 21
...LI LlumiX 32K Instruct F16 0.1 | 32K / 13.5 GB | 50
...rpreter DS 6.7B 6.0bpw H6 EXL2 | 16K / 5.2 GB | 92
...rpreter DS 6.7B 4.0bpw H6 EXL2 | 16K / 3.6 GB | 81
...rpreter DS 6.7B 8.0bpw H8 EXL2 | 16K / 6.9 GB | 12
...coder S DS 6.7B 4.0bpw H6 EXL2 | 16K / 3.6 GB | 05


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124