LLM Name | Opt 350m Pubmed Qa Llama 2 7B Chat Hf Uld Loss |
Repository 🤗 | https://huggingface.co/Nicolas-BZRD/opt-350m_pubmed_qa_Llama-2-7b-chat-hf_uld_loss |
Model Size | 350m |
Required VRAM | 1.3 GB |
Updated | 2025-09-12 |
Maintainer | Nicolas-BZRD |
Model Type | opt |
Model Files | |
Model Architecture | OPTForCausalLM |
Context Length | 2048 |
Model Max Length | 2048 |
Transformers Version | 4.36.2 |
Vocabulary Size | 50272 |
Torch Data Type | float32 |
Activation Function | relu |
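For reference, a minimal sketch of loading this checkpoint with the Hugging Face Transformers library, based on the configuration above (OPTForCausalLM, float32, 2048-token context). The example prompt and generation settings are illustrative assumptions, not taken from the model card:

```python
# Minimal loading sketch; model_id comes from the repository link above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nicolas-BZRD/opt-350m_pubmed_qa_Llama-2-7b-chat-hf_uld_loss"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# float32 weights, roughly matching the ~1.3 GB VRAM figure listed above.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

# Hypothetical PubMed-QA-style prompt, purely for illustration.
prompt = "Question: Does metformin reduce cardiovascular risk? Answer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # stay well within the 2048-token context
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```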
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
...lama 2 7B Chat Hf Text Teacher | 2K / 1.3 GB | 5 | 0 |
...tral 7B Instruct V0.2 Uld Loss | 2K / 1.3 GB | 5 | 0 |
... 7B Instruct V0.2 Text Teacher | 2K / 1.3 GB | 5 | 0 |
... 7B Instruct V0.2 Text Teacher | 2K / 1.3 GB | 5 | 0 |
Chopt 2 7b | 2K / 5.3 GB | 1715 | 0 |