| Field | Value |
|---|---|
| LLM Name | Ahxt Llama2 Xs 460M Experimental Ptbr Instruct |
| Repository 🤗 | https://huggingface.co/cnmoro/ahxt_llama2_xs_460M_experimental_ptbr_instruct |
| Model Size | 460M |
| Required VRAM | 0.9 GB |
| Updated | 2025-10-18 |
| Maintainer | cnmoro |
| Model Type | llama |
| Instruction-Based | Yes |
| Model Files |  |
| Supported Languages | en, pt |
| Model Architecture | LlamaForCausalLM |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.35.2 |
| Tokenizer Class | GPT2Tokenizer |
| Padding Token | " |
| Vocabulary Size | 50304 |
| Torch Data Type | bfloat16 |
| Tokenizer Errors Handling | replace |
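
The fields above are enough to load the model with the standard `transformers` Auto classes. The sketch below is a minimal illustration rather than the maintainer's recommended usage: the `torch_dtype`, the generation settings, and the Portuguese prompt are assumptions, and the card does not document a prompt template.

```python
# Minimal loading sketch based on the card above:
# LlamaForCausalLM architecture, GPT2Tokenizer, bfloat16 weights, 2048-token context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "cnmoro/ahxt_llama2_xs_460M_experimental_ptbr_instruct"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Portuguese prompt chosen because the card lists en/pt support; wording is an example only.
prompt = "Explique o que é aprendizado de máquina."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```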
| Best Alternatives | Context / Required VRAM | Downloads | Likes |
|---|---|---|---|
| TeenyTinyLlama 460M | 2K / 1.9 GB | 1098 | 11 | 
| TeenyTinyLlama 460M Chat | 2K / 0 GB | 49 | 3 | 
| TeenyTinyLlama 460M AWQ | 2K / 0.3 GB | 12 | 1 | 
| TeenyTinyLlama 460M Chat AWQ | 2K / 0.3 GB | 10 | 1 | 