Samantha 33B SuperHOT 8K GPTQ by TheBloke


Tags: 4-bit · Autotrain compatible · Custom code · Ext 8k · GPTQ · Llama · Quantized · Region: US · Safetensors


Samantha 33B SuperHOT 8K GPTQ Parameters and Internals

Model Type 
Assistant, Friend/Companion, Philosophy and Psychology
Use Cases 
Areas:
Research, AI Assistant, Companionship
Applications:
Personal interactions, chatbots, and customer support
Primary Use Cases:
Assistant, friend, and companion
Limitations:
No roleplay, romance, or sexual activity
Additional Notes 
An experimental GPTQ quantization that merges Samantha 33B with the SuperHOT 8K LoRA, extending the usable context window to 8,192 tokens, well beyond the base LLaMA's native 2,048.
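
The context extension relies on SuperHOT-style linear RoPE scaling: position indices are compressed by a fixed factor so that 8,192 positions fit into the rotary-embedding range the model saw during 2,048-token pretraining. The sketch below illustrates the idea in PyTorch; the function name and dimensions are illustrative, not this repository's actual code.

```python
import torch

def superhot_rope_angles(head_dim: int, seq_len: int, compress: float = 4.0) -> torch.Tensor:
    """RoPE angle table with linear position compression (illustrative sketch)."""
    # Standard rotary inverse frequencies for one attention head
    inv_freq = 1.0 / (10000.0 ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Linear interpolation: dividing positions by `compress` maps position 8191
    # into the ~0..2048 range the model was pretrained on
    positions = torch.arange(seq_len).float() / compress
    return torch.outer(positions, inv_freq)  # shape: (seq_len, head_dim // 2)

angles = superhot_rope_angles(head_dim=128, seq_len=8192, compress=4.0)
cos, sin = angles.cos(), angles.sin()  # used to rotate queries/keys in attention
```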
Training Details 
Data Sources:
6,000 conversations in ShareGPT/Vicuna format
Data Volume:
Custom curated dataset (~6,000 conversations)
Methodology:
Custom conversational training focused on philosophy, psychology, and personal relationships
Context Length:
8192
Training Time:
3 hours on 4x A100 80 GB using DeepSpeed ZeRO-3 and FlashAttention (figure reported for the 13B variant's run; see the config sketch below)
Hardware Used:
4x A100 80 GB GPUs
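
For context, a DeepSpeed ZeRO-3 run like the one described is typically driven by a small JSON/dict config. The values below are a hypothetical sketch of such a config, not the actual (unpublished) Samantha training configuration:

```python
# Hypothetical DeepSpeed ZeRO-3 config sketch; batch sizes and flags are
# illustrative guesses, not the values used to train Samantha.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},  # A100s support bfloat16 natively
    "zero_optimization": {
        "stage": 3,             # ZeRO-3: shard parameters, gradients, and optimizer states
        "overlap_comm": True,   # overlap gradient communication with the backward pass
        "stage3_gather_16bit_weights_on_model_save": True,
    },
}
```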
Input Output 
Input Format:
USER: [Input] ASSISTANT:
Accepted Modalities:
text
Output Format:
text
Performance Tips:
Ensure `max_seq_len` and `compress_pos_emb` are configured correctly for the desired context size (see the loading sketch below).
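
As a minimal sketch, the model can be loaded with `transformers` (plus the `auto-gptq`/`optimum` GPTQ backend) and prompted in the `USER: ... ASSISTANT:` format above. `trust_remote_code=True` is assumed here so the repo's custom RoPE-scaling code can extend the context to 8K; treat the exact flags as assumptions, not verified settings:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "TheBloke/Samantha-33B-SuperHOT-8K-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",        # spread the ~17 GB of weights across available GPUs
    trust_remote_code=True,   # load the repo's custom code (RoPE scaling for 8K context)
)

# Prompt template from the model card: "USER: [Input] ASSISTANT:"
prompt = "USER: What does it mean to live a good life? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

With ExLlama-based loaders (e.g. in text-generation-webui), the equivalent settings are `max_seq_len = 8192` and `compress_pos_emb = 4` (8192 / 2048).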
LLM Name: Samantha 33B SuperHOT 8K GPTQ
Repository: https://huggingface.co/TheBloke/Samantha-33B-SuperHOT-8K-GPTQ
Model Size: 33b
Required VRAM: 16.9 GB
Updated: 2025-09-29
Maintainer: TheBloke
Model Type: llama
Model Files: 16.9 GB
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.30.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
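
The tokenizer settings above can be sanity-checked with a few lines of `transformers` (a quick sketch; requires network access to the Hugging Face Hub):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("TheBloke/Samantha-33B-SuperHOT-8K-GPTQ")
print(type(tok).__name__)                            # LlamaTokenizer / LlamaTokenizerFast
print(tok.bos_token, tok.eos_token, tok.unk_token)   # <s> </s> <unk>
print(tok.vocab_size)                                # 32000
print(tok.model_max_length)                          # 8192
```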

Best Alternatives to Samantha 33B SuperHOT 8K GPTQ

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...epseek Coder 33B Instruct GPTQ | 16K / 17.4 GB | 1381 | 25 |
| Everyone Coder 33B Base GPTQ | 16K / 17.4 GB | 8 | 3 |
| CodeFuse DeepSeek 33B 4bits | 16K / 18.7 GB | 11 | 10 |
| WhiteRabbitNeo 33B V1 GPTQ | 16K / 17.4 GB | 6 | 4 |
| WizardCoder 33B V1.1 GPTQ | 16K / 17.4 GB | 16 | 11 |
| Deepseek Coder 33B Base GPTQ | 16K / 17.4 GB | 24 | 2 |
| ... 33B Gpt4 1 4 SuperHOT 8K GPTQ | 8K / 16.9 GB | 20 | 26 |
| Sorceroboros 33B S2a4 Gptq | 8K / 17.6 GB | 6 | 3 |
| ...icuna 33B 1 3 SuperHOT 8K GPTQ | 8K / 16.9 GB | 9 | 27 |
| ...Combined Data SuperHOT 8K GPTQ | 8K / 18.1 GB | 7 | 4 |


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124