Llama 3.1 Minitron 4B Width Base Chatml by IntervitensInc


Llama 3.1 Minitron 4B Width Base Chatml is an open-source language model by IntervitensInc. Features: 4B parameters, 9 GB VRAM, 128K context, license: other, LLM Explorer Score: 0.15.

  Arxiv:2009.03300   Arxiv:2407.14679   Llama   Region:us   Safetensors   Sharded   Tensorflow


Llama 3.1 Minitron 4B Width Base Chatml Parameters and Internals

Model Type: Base, Text-to-Text

Use Cases:
- Areas: Research, commercial applications
- Applications: Language generation
- Primary Use Cases: Natural language generation tasks
- Limitations: May amplify biases contained in the training data; potential for inaccuracies in generated responses
- Considerations: Developers should address unforeseen misuse and ensure compliance with industry standards.

Additional Notes: Continued support will be added to the `transformers` library.

Supported Languages: English (fluent), multilingual (basic)

Training Details:
- Data Sources: English and multilingual text, including code from webpages, dialogue, and articles
- Data Volume: 94 billion tokens
- Methodology: Pruning with continued training via distillation
- Context Length: 8000
- Training Period: July 29, 2024 - Aug 3, 2024
- Hardware Used: NVIDIA hardware (specific microarchitectures mentioned)
- Model Architecture: Transformer decoder

Safety Evaluation:
- Methodologies: 5-shot and zero-shot performance evaluation; code generation performance
- Findings: Shows potential bias and possibility of generating undesirable text
- Risk Categories: Bias, toxicity, inaccuracy
- Ethical Considerations: NVIDIA promotes Trustworthy AI and requires adherence to its terms of service.

Responsible AI Considerations:
- Fairness: Developers must ensure fair and unbiased results in their applications.
- Transparency: Models should be used with an understanding of their potential biases.
- Accountability: Users must take responsibility for deploying the model appropriately.
- Mitigation Strategies: NVIDIA encourages use with internal model teams and reporting of vulnerabilities.

Input/Output:
- Input Format: String
- Accepted Modalities: Text
- Output Format: String
- Performance Tips: Optimal performance with prompt lengths within 8K characters.
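The "-chatml" suffix indicates the model has been adapted to the ChatML conversation format. A minimal sketch of what that prompt layout looks like, assuming the standard `<|im_start|>`/`<|im_end|>` delimiters (confirm the exact special tokens against the repository's `tokenizer_config.json` before relying on them):

```python
# Hypothetical helper illustrating the standard ChatML prompt layout.
# The <|im_start|>/<|im_end|> markers are the conventional ChatML delimiters;
# verify them against the repo's tokenizer configuration.

def format_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML string."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Leave the assistant turn open so the model continues from here.
        prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize pruning and distillation in one sentence."},
]
print(format_chatml(messages))
```

In practice, `tokenizer.apply_chat_template(...)` from `transformers` applies whatever template ships with the checkpoint, so prefer it over hand-rolled formatting.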
LLM Name: Llama 3.1 Minitron 4B Width Base Chatml
Repository 🤗: https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
Model Size: 4B
Required VRAM: 9 GB
Updated: 2026-01-08
Maintainer: IntervitensInc
Model Type: llama
Model Files: 5.0 GB (1-of-2), 4.0 GB (2-of-2)
Model Architecture: LlamaForCausalLM
License: other
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.45.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
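The listed shard sizes, data type, and VRAM figure are mutually consistent, which is worth a quick sanity check: a bfloat16 checkpoint stores 2 bytes per parameter, so the two safetensors shards above imply the parameter count behind the nominal "4B" label.

```python
# Sanity-check the listing: bfloat16 stores 2 bytes per parameter, so the
# shard sizes imply the real parameter count behind the nominal "4B" label.

BYTES_PER_PARAM = 2              # bfloat16
shard_gb = [5.0, 4.0]            # the two safetensors shards listed above

total_gb = sum(shard_gb)
implied_params_b = total_gb * 1e9 / BYTES_PER_PARAM / 1e9

print(f"checkpoint: {total_gb:.1f} GB -> ~{implied_params_b:.1f}B params")
# -> checkpoint: 9.0 GB -> ~4.5B params
```

The 9 GB total matches the "Required VRAM: 9 GB" row; note that figure covers the weights alone, before KV-cache and activation overhead, so generation at long context lengths needs additional memory.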

Best Alternatives to Llama 3.1 Minitron 4B Width Base Chatml

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| 4Bcpt | 256K / 8.8 GB | 5 | 0 |
| HoldMy4BKTO | 256K / 8.8 GB | 5 | 0 |
| Xgen Small 4B Instruct R | 256K / 17.7 GB | 18 | 3 |
| Xgen Small 4B Base R | 256K / 17.7 GB | 14 | 2 |
| SJT 4B | 146K / 7.6 GB | 5 | 0 |
| ...lama 3.1 Nemotron Nano 4B V1.1 | 128K / 9 GB | 20991 | 113 |
| Impish LLAMA 4B | 128K / 9 GB | 1133 | 42 |
| Nemotron W 4b MagLight 0.1 | 128K / 9.2 GB | 13 | 3 |
| Loxa 4B | 128K / 16 GB | 26 | 0 |
| Nemotron W 4b Halo 0.1 | 128K / 9.2 GB | 3 | 3 |



Original data from HuggingFace, OpenCompass and various public git repos.