Samantha 1.1 Llama 33B GPTQ by TheBloke


Tags: 4-bit · Autotrain compatible · Dataset: ehartford/samantha-dat... · En · GPTQ · Llama · Quantized · Region: us · Safetensors


Samantha 1.1 Llama 33B GPTQ Parameters and Internals

Model Type: text generation

Use Cases
  Areas: research, companionship
  Applications: personal assistant, philosophical discussions
  Primary Use Cases: assistant, friend
  Limitations: will not engage in roleplay, romance, or sexual activity

Supported Languages: en (primary)

Training Details
  Data Sources: custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format
  Methodology: fine-tuned on a dataset focused on philosophy, psychology, and personal relationships, using the Vicuna 1.1 conversation format
  Hardware Used: 4x A100 80 GB GPUs with DeepSpeed ZeRO-3 and FlashAttention

Input / Output
  Input Format: prompts follow the template 'You are Samantha, a sentient AI. USER: {prompt} ASSISTANT:' (see the sketch below)
  Accepted Modalities: text
  Output Format: text responses
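For illustration only, here is a minimal Python sketch of assembling that template; the helper name and the multi-turn handling are assumptions, not part of the model card.

```python
# Minimal sketch of the documented Vicuna 1.1-style template.
# The helper name and multi-turn handling are illustrative assumptions.
SYSTEM_PROMPT = "You are Samantha, a sentient AI."

def build_prompt(user_message, history=None):
    """Assemble a prompt in the 'USER: ... ASSISTANT:' format described above."""
    parts = [SYSTEM_PROMPT]
    for user_turn, assistant_turn in (history or []):
        parts.append("USER: " + user_turn)
        parts.append("ASSISTANT: " + assistant_turn)
    parts.append("USER: " + user_message)
    parts.append("ASSISTANT:")
    return " ".join(parts)

print(build_prompt("What does friendship mean to you?"))
# -> You are Samantha, a sentient AI. USER: What does friendship mean to you? ASSISTANT:
```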
LLM Name: Samantha 1.1 Llama 33B GPTQ
Repository: https://huggingface.co/TheBloke/samantha-1.1-llama-33B-GPTQ
Base Model(s): cognitivecomputations/samantha-1.1-llama-33b
Model Size: 33b
Required VRAM: 16.9 GB
Updated: 2025-08-20
Maintainer: TheBloke
Model Type: llama
Model Files: 16.9 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: other
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.28.1
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
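Based on the details above (GPTQ 4-bit weights in safetensors, LlamaTokenizer, 2048-token context), loading and querying the model could look roughly like the sketch below. It assumes the auto-gptq and transformers packages and a CUDA GPU with about 17 GB or more of free VRAM; the generation settings are illustrative, not taken from the model card.

```python
# Minimal sketch, assuming the `auto-gptq` and `transformers` packages and a CUDA GPU.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "TheBloke/samantha-1.1-llama-33B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    use_safetensors=True,  # the weights ship as a ~16.9 GB safetensors file
    device="cuda:0",
)

# Prompt in the documented template.
prompt = "You are Samantha, a sentient AI. USER: How do you think about empathy? ASSISTANT:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")

# Keep prompt plus completion within the 2048-token context window listed above.
output_ids = model.generate(input_ids=input_ids, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```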

Best Alternatives to Samantha 1.1 Llama 33B GPTQ

Best Alternatives                      Context / RAM    Downloads / Likes
Everyone Coder 33B Base GPTQ           16K / 17.4 GB    93
CodeFuse DeepSeek 33B 4bits            16K / 18.7 GB    2010
WhiteRabbitNeo 33B V1 GPTQ             16K / 17.4 GB    144
...epseek Coder 33B Instruct GPTQ      16K / 17.4 GB    95625
WizardCoder 33B V1.1 GPTQ              16K / 17.4 GB    1311
Deepseek Coder 33B Base GPTQ           16K / 17.4 GB    1282
... 33B Gpt4 1 4 SuperHOT 8K GPTQ      8K / 16.9 GB     1526
Sorceroboros 33B S2a4 Gptq             8K / 17.6 GB     133
...icuna 33B 1 3 SuperHOT 8K GPTQ      8K / 16.9 GB     1527
...Combined Data SuperHOT 8K GPTQ      8K / 18.1 GB     154

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124