EVA Qwen2.5 14B V0.1 is an open-source language model by EVA-UNIT-01. Features: 14b LLM, VRAM: 29.7GB, Context: 128K, License: apache-2.0, Instruction-Based, LLM Explorer Score: 0.16.
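For intuition, the listed VRAM figure matches the weights alone at 16-bit precision. A back-of-the-envelope check (my own arithmetic, not taken from the listing; the parameter count is approximate):

```python
# Rough VRAM estimate: parameter count x bytes per parameter.
# Qwen2.5-14B has roughly 14.8B parameters; bf16/fp16 stores 2 bytes each.
params = 14.8e9
bytes_per_param = 2  # bf16/fp16
print(f"~{params * bytes_per_param / 1e9:.1f} GB")  # ~29.6 GB, close to the listed 29.7GB
```

Activations, KV cache, and framework overhead come on top of this, so real usage at long contexts will be higher.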
Base model: qwen/qwen2.5-14b (this model is a finetune of it)
Datasets: allura-org/celeste-1.x..., allura-org/shortstorie..., anthracite-org/kalo-op..., epiculous/synthrp-gens..., epiculous/synthstruct-..., gryphe/chatgpt-4o-writ..., gryphe/sonnet3.5-charc..., gryphe/sonnet3.5-slimo..., nopm/opus writingstruc..., nothingiisreal/reddit-...
Tags: Instruct, Qwen2, Region: us, Safetensors, Sharded, Tensorflow
Full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.
Input Format:
ChatML
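For reference, a ChatML prompt follows this layout (standard ChatML markup, shown for illustration rather than copied from the card):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```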
Performance Tips:
Using a quantized KV cache with Qwen2.5 is not recommended. Recommended sampler values: Temperature 0.87, Top-P 0.81, Min-P 0.0025, Repetition Penalty 1.03. A temperature below 1 is recommended, but the model can perform acceptably at higher temperatures when combined with Min-P sampling.
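A minimal sketch of how these values could be wired up with Hugging Face transformers (assuming a recent version that supports min_p sampling; the prompt content is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The tokenizer's chat template produces ChatML-formatted input.
messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Recommended sampler values from the card; the default (non-quantized)
# KV cache is kept, per the advice above.
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.87,
    top_p=0.81,
    min_p=0.0025,
    repetition_penalty=1.03,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same values map directly onto most OpenAI-compatible inference servers, though min_p support varies by backend.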
Release Notes
Version: 0.1
Notes:
The dataset was deduplicated and cleaned relative to version 0.0, and the training sequence length was increased. The model is more stable, and the earlier problems with handling short inputs and with min_p sampling appear to be resolved. This is the epoch 2.7 checkpoint. Known issue: a quantized KV cache causes problems. Recommended sampler values have been added.