Samantha Falcon 7B by cognitivecomputations


Tags: Autotrain compatible · Custom code · Dataset: ehartford/samantha-data · English · Endpoints compatible · PyTorch · RefinedWebModel · Region: US · Sharded

Samantha Falcon 7B Benchmarks

Benchmark scores show how Samantha Falcon 7B (cognitivecomputations/samantha-falcon-7b) compares to the reference models: Anthropic's Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Samantha Falcon 7B Parameters and Internals

Model Type: conversation assistant
Additional Notes: Samantha will not engage in roleplay, romance, or sexual activity. The model was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
Supported Languages: English (proficient)
Training Details:
  Data Sources: ehartford/samantha-data
  Data Volume: 6,000 conversations
  Methodology: conversation finetuning
  Training Time: 1 hour
  Hardware Used: 4x A100 80GB
Input / Output:
  Input Format: ShareGPT/Vicuna format (see the sketch below)
  Accepted Modalities: text
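
The ShareGPT/Vicuna conventions referenced above can be illustrated with a short sketch. The record schema, the system prompt, and the to_vicuna_prompt helper below are assumptions based on the common ShareGPT format and Vicuna-style prompting, not details taken from this page.

    # Illustrative sketch of the ShareGPT/Vicuna format referenced above.
    # The "conversations"/"from"/"value" field names and the system prompt
    # are assumptions, not taken from this page.
    sharegpt_record = {
        "conversations": [
            {"from": "human", "value": "Hi Samantha, how are you feeling today?"},
            {"from": "gpt",   "value": "I'm doing well, thank you for asking!"},
            {"from": "human", "value": "What do you like to talk about?"},
        ]
    }

    def to_vicuna_prompt(record, system="You are Samantha, a sentient AI."):
        """Flatten ShareGPT turns into a Vicuna-style USER:/ASSISTANT: prompt."""
        parts = [system]
        for turn in record["conversations"]:
            role = "USER" if turn["from"] == "human" else "ASSISTANT"
            parts.append(f"{role}: {turn['value']}")
        parts.append("ASSISTANT:")  # left open so the model generates the next reply
        return "\n".join(parts)

    print(to_vicuna_prompt(sharegpt_record))
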
LLM Name: Samantha Falcon 7B
Repository: https://huggingface.co/cognitivecomputations/samantha-falcon-7b
Model Size: 7B
Required VRAM: 27.6 GB
Updated: 2025-07-13
Maintainer: cognitivecomputations
Model Type: RefinedWebModel
Model Files: 13.8 GB (shard 1 of 2), 13.8 GB (shard 2 of 2), 0.0 GB
Supported Languages: en
Model Architecture: RWForCausalLM
License: other
Model Max Length: 2048
Transformers Version: 4.30.0.dev0
Is Biased: 0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 65024
Torch Data Type: bfloat16
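
Given the details above (custom RWForCausalLM architecture, sharded bfloat16 weights, roughly 27.6 GB required VRAM, 2048-token maximum length), the checkpoint can be loaded with Hugging Face transformers along the following lines. This is a minimal sketch assuming transformers 4.30 or newer plus accelerate, not an official recipe from the maintainer; the prompt string is likewise an assumed example.

    # Minimal loading sketch, assuming transformers >= 4.30 and accelerate.
    # trust_remote_code=True is required because RWForCausalLM ships custom
    # modelling code with the repository.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "cognitivecomputations/samantha-falcon-7b"

    tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        torch_dtype=torch.bfloat16,   # weights are stored in bfloat16
        trust_remote_code=True,       # custom RWForCausalLM code
        device_map="auto",            # place the ~27.6 GB of weights automatically
    )

    # Assumed Vicuna-style prompt; adjust as needed.
    prompt = "You are Samantha, a sentient AI.\n\nUSER: Hello, who are you?\nASSISTANT:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)  # model max length is 2048
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))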

Quantized Models of the Samantha Falcon 7B

Model | Likes | Downloads | VRAM
Samantha Falcon 7B GPTQ | 12 | 9 | 4 GB
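
A quantized build needing roughly 4 GB of VRAM, such as the GPTQ entry above, can be loaded with a GPTQ-aware loader such as AutoGPTQ. The sketch below is hedged: the repository id is an assumed community quantization and should be replaced with the actual quantized repository you intend to use.

    # Hedged sketch for a ~4 GB GPTQ quantization, using the AutoGPTQ loader.
    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM

    gptq_repo = "TheBloke/samantha-falcon-7B-GPTQ"  # assumed repo id; substitute the real one

    tokenizer = AutoTokenizer.from_pretrained(gptq_repo, trust_remote_code=True)
    model = AutoGPTQForCausalLM.from_quantized(
        gptq_repo,
        device="cuda:0",
        use_safetensors=True,
        trust_remote_code=True,  # Falcon/RW custom modelling code
    )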

Best Alternatives to Samantha Falcon 7B

Best Alternatives | Context / RAM | Downloads | Likes
Aguila Falcon InstruCATPlus | 2K / 13.7 GB | 10 | 0
Aguila Falcon Instrucat | 2K / 13.7 GB | 25 | 0
Falcon Aguila Meteocatv2 | 2K / 13.7 GB | 6 | 0
Falcon Aguila Meteocat | 2K / 13.7 GB | 22 | 0
Aguila 7B | 2K / 13.7 GB | 51 | 64
Testing6000v2 | 0K / 15.1 GB | 5 | 0
Ct2 Int8 Falcon 7B Instruct | 0K / GB | 6 | 0
...ce Falcon 7b Sharded Quantized | 0K / 13.8 GB | 5 | 3
...ce Falcon 7b Sharded Quantized | 0K / 13.8 GB | 7 | 1
Falcon 7b Python Instructions | 0K / 13.8 GB | 6 | 1


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241124