WiNGPT2 Llama 3 8B Chat by winninghealth


Tags: Autotrain compatible, Conversational, En, Endpoints compatible, Llama, Medical, Region: us, Safetensors, Sharded, Tensorflow, Zh

WiNGPT2 Llama 3 8B Chat Benchmarks

WiNGPT2 Llama 3 8B Chat (winninghealth/WiNGPT2-Llama-3-8B-Chat)

WiNGPT2 Llama 3 8B Chat Parameters and Internals

Model Type 
medical, vertical domain
Use Cases 
Areas:
medical, healthcare
Applications:
AI doctor consultations, medical knowledge Q&A
Primary Use Cases:
general medical question answering, diagnostic support, medication and health advice
Limitations:
Not a substitute for consultation with a medical professional
Considerations:
Evaluate the information independently.
Additional Notes 
WiNGPT serves as an AI medical assistant with translation capabilities.
Supported Languages 
English (en) and Chinese (zh)
Training Details 
Data Sources and Volume:
~20 GB of pre-training data and ~500,000 fine-tuning/alignment samples
Methodology:
Continued Pre-training and Fine-tuning/Alignment
Context Length:
8192
Hardware Used:
8 × NVIDIA A100 GPUs
Model Architecture:
WiNGPT-Llama-3-8B
Release Notes 
Version:
WiNGPT2-Llama-3-8B-Base and WiNGPT2-Llama-3-8B-Chat
Date:
2024-04-23
Notes:
Released Chinese-enhanced, multilingual Llama 3 8B Base and Chat weights, with evaluation results.
Version:
WiNGPT2-7B-Chat-4bit
Date:
2024-03-05
Notes:
Open-sourced 7B/14B-Chat-4bit model weights.
Version:
WiNGPT2-14B-Base and WiNGPT2-14B-Chat
Date:
2023-12-12
Notes:
Open-sourced 14B model weights.
Version:
WiNGPT2-7B-Base and WiNGPT2-7B-Chat
Date:
2023-09-26
Notes:
Open-sourced 7B model weights.
LLM Name: WiNGPT2 Llama 3 8B Chat
Repository: https://huggingface.co/winninghealth/WiNGPT2-Llama-3-8B-Chat
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-09-14
Maintainer: winninghealth
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: en, zh
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
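
Given the configuration above (LlamaForCausalLM, bfloat16 weights, an 8192-token context, Transformers 4.40.0), the model loads with the standard Hugging Face transformers API. The snippet below is a minimal sketch: the "User:/Assistant:" prompt is an assumption and should be replaced with the prompt or chat template documented in the official winninghealth repository.

```python
# Minimal loading/generation sketch based on the card values above
# (bfloat16, 8192-token context, PreTrainedTokenizerFast, <|end_of_text|> pad token).
# The prompt format below is an assumption -- check the official model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "winninghealth/WiNGPT2-Llama-3-8B-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # ~16.1 GB VRAM at bf16; requires `accelerate`
)

# Assumed chat-style prompt; replace with the template winninghealth documents.
prompt = "User: What are common causes of a persistent dry cough?\nAssistant: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    pad_token_id=tokenizer.pad_token_id or tokenizer.eos_token_id,
)
# Decode only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```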

Quantized Models of the WiNGPT2 Llama 3 8B Chat

Model | Likes | Downloads | VRAM
WiNGPT2 Llama 3 8B Chat AWQ | 0 | 5 | 5 GB
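
For the AWQ build above (roughly 5 GB of VRAM), transformers can load the quantized weights directly when the autoawq package is installed. This is a sketch under assumptions: the repository id below is inferred from the table entry and should be verified on Hugging Face, and the prompt format is again a placeholder.

```python
# Sketch for the AWQ-quantized variant (~5 GB VRAM).
# The repo id is assumed from the table entry above; verify it before use.
# Requires `autoawq` in addition to transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

awq_id = "winninghealth/WiNGPT2-Llama-3-8B-Chat-AWQ"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(awq_id)
model = AutoModelForCausalLM.from_pretrained(
    awq_id,
    torch_dtype=torch.float16,  # AWQ kernels run in fp16
    device_map="auto",
)

# Assumed prompt format; zh prompt: "Briefly describe common symptoms of hypertension."
prompt = "User: 请简要介绍高血压的常见症状。\nAssistant: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```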

Best Alternatives to WiNGPT2 Llama 3 8B Chat

Best Alternatives | Context / RAM | Downloads / Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 3667120
UltraLong Thinking | 4192K / 16.1 GB | 1123
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 17624
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 116515
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 8759
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 643452
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 138729
Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 2121
...xis Bookwriter Llama3.1 8B Sft | 1048K / 16.1 GB | 744
...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 173



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124