Rwkv Raven 14B by RWKV


Tags: Autotrain compatible | Dataset: eleutherai/pile | Endpoints compatible | PyTorch | Region: us | RWKV | Sharded
Model Card on HF 🤗: https://huggingface.co/RWKV/rwkv-raven-14b


Rwkv Raven 14B Parameters and Internals

Model Type: RNN, Transformer
Additional Notes: RWKV is an RNN with transformer-level LLM performance. It is designed for fast inference, low VRAM use, and effectively unlimited ("infinite") context length. It also provides free sentence embeddings and fast training.
Input/Output
Input Format: text
Accepted Modalities: text
Output Format: text
Performance Tips: The 'Raven' models must be prompted in a specific way; see the integration blog post for details.
LLM Name: Rwkv Raven 14B
Repository 🤗: https://huggingface.co/RWKV/rwkv-raven-14b
Model Size: 14b
Required VRAM: 56.9 GB
Updated: 2025-07-25
Maintainer: RWKV
Model Type: rwkv
Model Files (30 shards): 2.0 GB (1-of-30), 2.0 GB (2-of-30), 2.0 GB (3-of-30), 2.0 GB (4-of-30), 1.7 GB (5-of-30), 1.9 GB (6-of-30), 2.0 GB (7-of-30), 2.0 GB (8-of-30), 2.0 GB (9-of-30), 1.7 GB (10-of-30), 1.9 GB (11-of-30), 2.0 GB (12-of-30), 2.0 GB (13-of-30), 2.0 GB (14-of-30), 1.7 GB (15-of-30), 1.9 GB (16-of-30), 2.0 GB (17-of-30), 2.0 GB (18-of-30), 2.0 GB (19-of-30), 1.7 GB (20-of-30), 1.9 GB (21-of-30), 2.0 GB (22-of-30), 2.0 GB (23-of-30), 2.0 GB (24-of-30), 1.7 GB (25-of-30), 1.9 GB (26-of-30), 2.0 GB (27-of-30), 2.0 GB (28-of-30), 1.9 GB (29-of-30), 1.0 GB (30-of-30)
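The 30 shard sizes listed above add up to exactly the 56.9 GB reported as Required VRAM; a quick sanity check:

```python
# Shard sizes (GB) exactly as listed in the model-files entry above.
shard_sizes_gb = (
    [2.0, 2.0, 2.0, 2.0, 1.7]    # shards 1-5
    + [1.9, 2.0, 2.0, 2.0, 1.7]  # shards 6-10
    + [1.9, 2.0, 2.0, 2.0, 1.7]  # shards 11-15
    + [1.9, 2.0, 2.0, 2.0, 1.7]  # shards 16-20
    + [1.9, 2.0, 2.0, 2.0, 1.7]  # shards 21-25
    + [1.9, 2.0, 2.0, 1.9, 1.0]  # shards 26-30
)
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 56.9 -- matches the Required VRAM entry
```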
Model Architecture: RwkvForCausalLM
Transformers Version: 4.29.0.dev0
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50277
Torch Data Type: float32
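The float32 data type implies 4 bytes per parameter, so the 56.9 GB footprint is consistent with a model of roughly 14 billion parameters (loading in float16 would halve this figure):

```python
BYTES_PER_PARAM_FP32 = 4        # torch.float32 uses 4 bytes per parameter
total_bytes = 56.9e9            # Required VRAM from the table above
approx_params = total_bytes / BYTES_PER_PARAM_FP32
print(round(approx_params / 1e9, 1))  # ~14.2 billion parameters
```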

Best Alternatives to Rwkv Raven 14B

Best Alternatives | Context / RAM | Downloads | Likes
Rwkv 4 14B Pile | 0K / 56.9 GB | 1030 | 3
Rwkv 4 Raven 14B | 0K / 56.9 GB | 7 | 2
Rwkv 14B Wizardlm | 0K / 28.3 GB | 5 | 9

Rank the Rwkv Raven 14B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124