C4ai Command R Plus GPTQ by alpindale


Tags: 4-bit, Arabic (ar), AutoTrain compatible, Cohere, Conversational, German (de), English (en), Endpoints compatible, Spanish (es), French (fr), GPTQ, Italian (it), Japanese (ja), Korean (ko), Portuguese (pt), Quantized, Region: us, Safetensors, Sharded, TensorFlow, Chinese (zh)

C4ai Command R Plus GPTQ Benchmarks

Benchmark scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
C4ai Command R Plus GPTQ (alpindale/c4ai-command-r-plus-GPTQ)

C4ai Command R Plus GPTQ Parameters and Internals

Model Type: Multilingual, autoregressive language model
Use Cases:
- Areas: Research, commercial applications
- Applications: Reasoning, summarization, question answering, tool-use automation
- Primary Use Cases: Reasoning tasks, summarization tasks, question answering
- Limitations: May not perform well with unrecognized input structures; deviations from the prompt templates may reduce performance.
- Considerations: Adhere to the recommended prompt structures for optimal performance.
Additional Notes: Command R+ is optimized for multilingual use and for automating complex tasks through multi-step tool use.
Supported Languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic (advanced); Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian (included in pre-training).
Training Details:
- Data Sources: Datasets sourced from the open internet
- Methodology: A mixture of supervised fine-tuning and preference training to align the model with human preferences
- Context Length: 128,000 tokens
- Model Architecture: Optimized transformer architecture
Responsible AI Considerations:
- Mitigation Strategies: Monitoring for harmful outputs and optimization for safe responses through preference fine-tuning
Input/Output:
- Input Format: Text only
- Accepted Modalities: Text
- Output Format: Text generation only
- Performance Tips: Use the specified prompt templates to maintain performance (a minimal loading sketch follows this section).
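
Because performance depends on following the model's prompt template, the sketch below shows one way to load this checkpoint with Hugging Face Transformers and format a conversation through the tokenizer's built-in chat template. This is a minimal, hedged example rather than the maintainer's documented usage: it assumes a GPTQ-capable backend is installed (e.g. auto-gptq with optimum, plus accelerate for device_map="auto"), that the repo ships a chat template, and that enough GPU memory is available (the listing below reports 58.6 GB of required VRAM). The prompt and sampling parameters are illustrative.

```python
# Minimal sketch: load the GPTQ checkpoint and prompt it via the chat template.
# Assumes a GPTQ backend (e.g. auto-gptq + optimum) and accelerate are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alpindale/c4ai-command-r-plus-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Use the repo's chat template rather than hand-rolled prompt strings,
# per the "Performance Tips" above.
messages = [
    {"role": "user", "content": "Summarize the GPTQ quantization idea in two sentences."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.3)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```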
LLM Name: C4ai Command R Plus GPTQ
Repository 🤗: https://huggingface.co/alpindale/c4ai-command-r-plus-GPTQ
Model Size: 16.6b
Required VRAM: 58.6 GB
Updated: 2025-08-18
Maintainer: alpindale
Model Type: cohere
Model Files: 12 safetensors shards — 6.3 GB (1-of-12), 5.0 GB (2-of-12), 5.0 GB (3-of-12), 4.9 GB each (4-of-12 through 11-of-12), 3.1 GB (12-of-12)
Supported Languages: en, fr, de, es, it, pt, ja, ko, zh, ar
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: CohereForCausalLM
License: cc-by-nc-4.0
Context Length: 8192 (as set in this repo's config; the Training Details above list the base model's 128,000-token context)
Model Max Length: 8192
Transformers Version: 4.40.0.dev0
Tokenizer Class: CohereTokenizer
Padding Token: <PAD>
Vocabulary Size: 256000
Torch Data Type: float16
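
As a quick sanity check, the "Required VRAM" figure above can be reproduced by summing the shard sizes listed under Model Files. The snippet below is a back-of-the-envelope sketch only: it recovers the 58.6 GB weight footprint, while actual memory use at inference time will be higher once the KV cache and activations are included.

```python
# Sum the safetensors shard sizes listed above (in GB) to reproduce the
# "Required VRAM: 58.6 GB" figure. Runtime memory will exceed this once
# the KV cache and activations are added.
shard_sizes_gb = [6.3, 5.0, 5.0] + [4.9] * 8 + [3.1]  # shards 1-of-12 .. 12-of-12
total_gb = sum(shard_sizes_gb)
print(f"{len(shard_sizes_gb)} shards, {total_gb:.1f} GB total")  # 12 shards, 58.6 GB total
```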

Original data from HuggingFace, OpenCompass, and various public git repositories.
Release v20241124