Todd Proxy LoRA 7B by autobots

Tags: Adapter · Finetuned · LoRA · Region: us

Todd Proxy LoRA 7b Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Todd Proxy LoRA 7B (autobots/Todd_Proxy_LoRA_7b)

Todd Proxy LoRA 7B Parameters and Internals

Additional Notes 
Interaction behavior changed noticeably between versions. V1 showed signs of overfitting, including increased reliance on 'master' terminology; V2 showed stronger model performance thanks to a dedicated training set and adjusted training settings.
Training Details 
Data Sources:
~100k messages dumped from the 'chan' Todd proxy
Methodology:
Trained for 3 epochs in 4-bit precision; overfitting set in after the third epoch, so 2 epochs would likely have been sufficient. The V2 version was trained at a higher LoRA rank and a longer context length (512) on deduplicated data with content warnings removed (see the configuration sketch below).
Context Length:
512
Model Architecture:
LLaMA-7B architecture, tested in 4-bit and FP16 precision.
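Based on the adapter hyperparameters listed further down this page (LoRA rank 32, alpha 64, dropout 0.05, q_proj/v_proj targets), a minimal sketch of the kind of training setup described above could look as follows with Hugging Face PEFT. The bitsandbytes loading path and the base checkpoint name "huggyllama/llama-7b" are assumptions, not details stated on this page.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load a LLaMA-7B base in 4-bit, matching the "4-bit precision" note above.
# "huggyllama/llama-7b" is an assumed base checkpoint.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    quantization_config=bnb,
    device_map="auto",
)

# LoRA hyperparameters as listed on this card: rank 32, alpha 64, dropout 0.05,
# applied to the attention q_proj and v_proj modules.
lora = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights (~0.1 GB) are trainable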
Release Notes 
Version:
V1
Notes:
Trained on a non-deduplicated dataset, which caused overfitting and greater reliance on 'master' terminology.
Version:
V2
Notes:
Trained at a higher rank and a longer context on deduplicated data with content warnings removed, resulting in stronger model performance.
LLM Name: Todd Proxy LoRA 7b
Repository: https://huggingface.co/autobots/Todd_Proxy_LoRA_7b
Model Size: 7b
Required VRAM: 0.1 GB
Updated: 2025-09-23
Maintainer: autobots
Model Files: 0.1 GB
Model Architecture: Adapter
Is Biased: none
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: q_proj|v_proj
LoRA Alpha: 64
LoRA Dropout: 0.05
R Param: 32
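Given the adapter parameters above, attaching the LoRA to a LLaMA-7B base with Hugging Face PEFT could look like the minimal sketch below; the base checkpoint name "huggyllama/llama-7b" is an assumption, since this page does not name the exact base repository.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-7b"  # assumed LLaMA-7B base; substitute your own
adapter_id = "autobots/Todd_Proxy_LoRA_7b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the LoRA adapter (r=32, alpha=64, dropout=0.05, q_proj/v_proj targets per the table above).
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))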

Quantized Models of the Todd Proxy LoRA 7B

Model                 Likes   Downloads   VRAM
LLaMA 7B 2bit         5       14          2 GB
Llama 7B 8bit         0       10          7 GB
LLaMA 7B 4bit 32g     1       15          4 GB
Llama 7B 4bit Act     2       2           3 GB
Llama 7B 4bit         13      15          3 GB
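As a rough sanity check on the VRAM column above, dense-weight memory scales approximately as parameter count times bits per weight divided by 8. The short sketch below works this out for 7B parameters; it ignores activations, KV cache, and quantization-group overhead.

def approx_weight_gb(params_billion: float, bits_per_weight: int) -> float:
    # 1e9 params * (bits/8) bytes per param ≈ params_billion * bits / 8 gigabytes
    return params_billion * bits_per_weight / 8

for bits in (2, 4, 8):
    print(f"{bits}-bit 7B weights ≈ {approx_weight_gb(7.0, bits):.1f} GB")
# ~1.8 GB, 3.5 GB, and 7.0 GB, roughly in line with the 2 GB / 3-4 GB / 7 GB figures above.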

Best Alternatives to Todd Proxy LoRA 7B

Best Alternatives                   Context / RAM   Downloads   Likes
Qwen Megumin                        0K / 0.1 GB     5           1
Uk Fraud Chatbot Llama2             0K / 0.4 GB     5           0
...s 25 Mistral 7B Irca DPO Pairs   0K / 0.1 GB     5           0
Qwen1.5 7B Chat Sa V0.1             0K / 0 GB       5           0
Zephyr 7B Ipo 0K 15K I1             0K / 0.7 GB     6           0
Hr Other 7B Lora                    0K / 0.2 GB     30          0
Deepseek Llm 7B Chat Sa V0.1        0K / 0 GB       5           0
Deepthink Reasoning Adapter         0K / 0.2 GB     4           3
... Days Of Sodom LoRA Mistral 7B   0K / 0.2 GB     5           0
Mistral 7B Instruct Sa V0.1         0K / 0 GB       5           0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124