Rogue Rose 103B V0.2 GPTQ is an open-source language model by TheBloke. Features: 103B-parameter LLM, VRAM: 52.5 GB, Context: 4K, License: llama2, Quantized, LLM Explorer Score: 0.11.
Rogue Rose 103B V0.2 GPTQ Parameters and Internals
Model Type
llama
Use Cases
Limitations:
May struggle to maintain consistent scene logic
Additional Notes
Recommended sampler settings come from a Reddit guide; experiment with the prompt and system prompt for better results.
Supported Languages
en (English)
Training Details
Context Length:
4096
Hardware Used:
Massed Compute
Model Architecture:
Frankenmerge of two custom 70B merges, totaling 120 layers.
Input Output
Input Format:
Vicuna-Short: 'You are a helpful AI assistant. USER: {prompt} ASSISTANT: '
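The Vicuna-Short template above can be applied programmatically. A minimal sketch, assuming you build the prompt string yourself before passing it to your inference backend (the helper name and default system message are illustrative):

```python
# Minimal sketch of the Vicuna-Short template described above.
# The model is expected to continue generating after "ASSISTANT: ".

def format_vicuna_short(prompt: str,
                        system: str = "You are a helpful AI assistant.") -> str:
    """Build a Vicuna-Short prompt string from a system message and user prompt."""
    return f"{system} USER: {prompt} ASSISTANT: "

text = format_vicuna_short("Describe a rose in one sentence.")
print(text)
# → You are a helpful AI assistant. USER: Describe a rose in one sentence. ASSISTANT:
```

Note the trailing space after "ASSISTANT: " — templates are usually sensitive to exact whitespace, so reproduce it verbatim.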
Accepted Modalities:
text
Output Format:
Varies depending on usage and template
Performance Tips:
Try the new Min-P sampling method. Recommended settings: high temperature, with repetition penalty and presence penalty set higher than usual.
Release Notes
Version:
3.2 bpw
Notes:
Fits within 48 GB of VRAM at 8192 context.
Version:
3.5 bpw (PENDING)
Notes:
Barely fits within 48 GB of VRAM at ~4096 context using the 8-bit cache setting.
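A quick back-of-the-envelope check of the release notes above: weight memory scales with parameter count times bits per weight (bpw). The sketch below estimates weights-only memory for a 103B model; real usage is higher because of the KV cache, activations, and runtime overhead, which is why the 3.5 bpw build only "barely fits" in 48 GB.

```python
# Approximate weights-only memory for a 103B-parameter model at a given bpw.
# KV cache and overhead are NOT included, so actual VRAM use is higher.

def weight_gb(params_billion: float, bpw: float) -> float:
    """Weight memory in GiB: params * bits / 8 bits-per-byte, converted to GiB."""
    bytes_total = params_billion * 1e9 * bpw / 8
    return bytes_total / 1024**3

print(f"3.2 bpw: {weight_gb(103, 3.2):.1f} GiB")  # well under 48 GB, room for 8192 context
print(f"3.5 bpw: {weight_gb(103, 3.5):.1f} GiB")  # closer to the limit; 8-bit cache needed
```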