Alpaca Lora 7B Onnx Fp32 With Past by nenkoru


Tags: Autotrain compatible · Endpoints compatible · Llama · LoRA · ONNX · Region: us

Alpaca Lora 7B Onnx Fp32 With Past Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Alpaca Lora 7B Onnx Fp32 With Past Parameters and Internals

Model Type: Auto-regressive, Transformer
Use Cases:
Primary Use Cases: Research on large language models; question answering; natural language understanding; reading comprehension (see the prompt sketch after this list)
Limitations: Not trained with human feedback; should not be used in downstream applications without prior risk evaluation
Considerations: In-depth evaluation and mitigation of risks are necessary before deploying the model in applications.
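Since this is an Alpaca-style instruction-tuned model, prompts typically follow the Alpaca template. Below is a minimal sketch of the widely used alpaca-lora prompt format; verify against the repository's own examples before relying on it:

```python
# Minimal sketch of the standard Alpaca instruction prompt. This is the
# widely used alpaca-lora template; check the repository for the exact
# format this export expects.
def build_prompt(instruction: str, input_text: str | None = None) -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(build_prompt("Summarize the plot of Hamlet."))
```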
Supported Languages: en (high proficiency); de, es, fr (medium proficiency); bg, ca, cs, da, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk (low to medium proficiency)
Training Details:
Data Sources: CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange
Data Volume: See the paper for details about the training set and its preprocessing.
Model Architecture: Transformer-based, available in various parameter sizes
Responsible AI Considerations:
Fairness: The model is evaluated on RAI datasets to measure biases across aspects such as gender, religion, and race.
Transparency: The model's architecture and training datasets are made public.
Accountability: Intended to help researchers understand and improve model biases and limitations.
Mitigation Strategies: Training data was filtered using fastText classifiers and proximity scoring to Wikipedia (see the sketch below).
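As an illustration of this kind of quality filtering (not the authors' actual pipeline), here is a minimal sketch using the `fasttext` Python package; the model file and label names are hypothetical stand-ins for a classifier trained to separate Wikipedia-reference-like pages from random web pages:

```python
# Illustrative sketch of fastText-based quality filtering, in the spirit of
# the Wikipedia-proximity scoring described above. NOT the authors' actual
# pipeline; "quality_classifier.bin" and "__label__wiki" are hypothetical.
import fasttext

model = fasttext.load_model("quality_classifier.bin")  # hypothetical model file

def keep_page(text: str, threshold: float = 0.5) -> bool:
    """Keep a page only if the classifier scores it as Wikipedia-like."""
    # fastText's predict() expects a single line, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__wiki" and probs[0] >= threshold

corpus = ["Paris is the capital of France.", "BUY CHEAP PILLS NOW!!!"]
filtered = [doc for doc in corpus if keep_page(doc)]
```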
LLM Name: Alpaca Lora 7B Onnx Fp32 With Past
Repository 🤗: https://huggingface.co/nenkoru/alpaca-lora-7b-onnx-fp32-with-past
Model Size: 7b
Updated: 2025-09-20
Maintainer: nenkoru
Model Type: llama
Model Architecture: LlamaForCausalLM
Transformers Version: 4.28.0.dev0
Vocabulary Size: 32000
LoRA Model: Yes
Torch Data Type: float32
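As the "with past" name suggests, this export includes past key/value (KV-cache) inputs, so cached attention states can be reused across decoding steps instead of being recomputed. A minimal loading sketch using Hugging Face Optimum's ONNX Runtime backend, assuming `optimum[onnxruntime]` is installed and the repository ships tokenizer files:

```python
# Minimal sketch: loading the ONNX export with Optimum's ONNX Runtime
# backend. Argument names follow the Optimum API around the matching
# transformers release (4.28.x); verify against your installed version.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

repo_id = "nenkoru/alpaca-lora-7b-onnx-fp32-with-past"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# use_cache=True selects the with-past export so generation reuses the
# cached key/value states from previous steps.
model = ORTModelForCausalLM.from_pretrained(repo_id, use_cache=True)

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```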

Quantized Models of the Alpaca Lora 7B Onnx Fp32 With Past

Model | Likes | Downloads | VRAM
...ca Lora 7B Onnx Fp16 With Past | 3 | 7 | 0 GB
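The fp16 variant above halves storage relative to this fp32 export. As a rough sketch of one common route for producing such a conversion with `onnxconverter-common` (not necessarily how the maintainer did it; file names are hypothetical):

```python
# Rough sketch of converting an fp32 ONNX model to fp16 with
# onnxconverter-common. One common route; not necessarily the maintainer's.
import onnx
from onnxconverter_common import float16

model_fp32 = onnx.load("decoder_with_past_model.onnx")  # hypothetical file name
# keep_io_types preserves fp32 inputs/outputs so callers need not change dtypes.
model_fp16 = float16.convert_float_to_float16(model_fp32, keep_io_types=True)
onnx.save(model_fp16, "decoder_with_past_model_fp16.onnx")
```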

Best Alternatives to Alpaca Lora 7B Onnx Fp32 With Past

Best Alternatives | Context / RAM | Downloads | Likes
A6 L | 1024K / 16.1 GB | 201 | 0
A3.4 | 1024K / 16.1 GB | 13 | 0
A5.4 | 1024K / 16.1 GB | 12 | 0
A2.4 | 1024K / 16.1 GB | 12 | 0
M | 1024K / 16.1 GB | 127 | 0
157 | 1024K / 16.1 GB | 101 | 0
124 | 1024K / 16.1 GB | 93 | 0
162 | 1024K / 16.1 GB | 60 | 0
2 Very Sci Fi | 1024K / 16.1 GB | 317 | 0
118 | 1024K / 16.1 GB | 15 | 0
Note: a green score (e.g. "73.2") means the model is better than nenkoru/alpaca-lora-7b-onnx-fp32-with-past.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124