Stella En 1.5B V5 is an open-source embedding model by dunzhang. Features: 1.5B parameters, VRAM: 6.2GB, Context: 128K, License: MIT, LLM Explorer Score: 0.28.
The model ships with a series of `2_Dense_{dims}` folders that provide different output dimensions, with only minor performance differences between them. For best results on specific tasks, inputs should be prefixed with a task prompt.
Training Details
Methodology:
The model is trained on top of `Alibaba-NLP/gte-large-en-v1.5` and `Alibaba-NLP/gte-Qwen2-1.5B-instruct` using the [MRL](https://arxiv.org/abs/2205.13147) methodology.
Context Length:
512
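Because the model is MRL-trained, a full-dimensional embedding can be truncated to a shorter prefix and re-normalized with little quality loss. A minimal numpy-only sketch with a random stand-in vector (the 8192 full dimension is an assumption inferred from the `2_Dense_{dims}` folders, not stated on this card):

```python
import numpy as np

def truncate_embedding(vec: np.ndarray, dims: int) -> np.ndarray:
    """Keep the first `dims` components of an MRL-trained embedding
    and re-normalize to unit length for cosine similarity."""
    out = vec[:dims]
    return out / np.linalg.norm(out)

# Random stand-in for a real embedding (8192 dims assumed).
rng = np.random.default_rng(0)
full = rng.normal(size=8192)
small = truncate_embedding(full, 1024)
```

The re-normalization step matters: after truncation the prefix is no longer unit-length, so cosine scores would otherwise be skewed.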
Input / Output
Input Format:
Given a web search query, retrieve relevant passages that answer the query: {query}
Output Format:
Vectors of the specified dimension, suitable for similarity and retrieval tasks.
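The input format above is applied to queries only, while passages are encoded as-is. A minimal sketch of the prompt construction plus a plain cosine-similarity helper (dummy vectors, no model download; the asymmetric query/passage convention is the common retrieval setup and is assumed here):

```python
import numpy as np

# Retrieval prompt from this model card; prepended to queries only.
S2P_PROMPT = ("Given a web search query, retrieve relevant passages "
              "that answer the query: ")

def format_query(query: str) -> str:
    """Prefix a search query with the task prompt; passages stay bare."""
    return S2P_PROMPT + query

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In practice the formatted query string and the raw passage strings would each be encoded by the model, and `cosine_similarity` applied to the resulting vectors.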
Performance Tips:
Use 1024 dimensions for a good balance between retrieval quality and computational cost.
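Choosing a dimension amounts to choosing the matching `2_Dense_{dims}` folder. A hypothetical helper (the list of supported dimensions is an assumption for illustration, not taken from this card):

```python
# Assumed set of output heads shipped as 2_Dense_{dims} folders.
SUPPORTED_DIMS = [512, 768, 1024, 2048, 4096, 6144, 8192]

def dense_folder(dims: int) -> str:
    """Map a requested output dimension to its 2_Dense_{dims} folder name."""
    if dims not in SUPPORTED_DIMS:
        raise ValueError(f"unsupported dimension {dims}; pick one of {SUPPORTED_DIMS}")
    return f"2_Dense_{dims}"
```

For example, the recommended 1024-dimensional head would be loaded from `dense_folder(1024)`.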