XFinder Llama38it by IAAR-Shanghai


Tags: Autotrain compatible · Conversational · Dataset: iaar-shanghai/kaf-data... · En · Instruct · Llama · Pytorch · Region: us · Sharded

XFinder Llama38it Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
XFinder Llama38it (IAAR-Shanghai/xFinder-llama38it)

XFinder Llama38it Parameters and Internals

Model Type: key answer extraction
Use Cases:
- Areas: evaluation of LLMs
- Applications: LLM evaluation
- Primary Use Cases: key answer extraction from LLM outputs
Additional Notes: The model enhances the reliability of LLM evaluations and surpasses traditional RegEx-based extraction methods.
Training Details:
- Data Sources: Key Answer Finder (KAF) dataset
- Data Volume: 26.9K samples
- Methodology: Fine-tuning
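As a usage sketch for the primary use case above (extracting a key answer from another LLM's output), the snippet below loads the model with Hugging Face transformers. The prompt template and the `build_prompt` / `extract_key_answer` helpers are illustrative assumptions, not the official xFinder template or API; consult the IAAR-Shanghai repository for the actual prompt format.

```python
# Sketch: key answer extraction with xFinder-llama38it via transformers.
# NOTE: the prompt format below is a hypothetical placeholder, NOT the
# official xFinder template; only the model id comes from the card above.

MODEL_ID = "IAAR-Shanghai/xFinder-llama38it"

def build_prompt(question: str, llm_output: str, answer_range: str) -> str:
    """Assemble an illustrative extraction prompt (hypothetical format)."""
    return (
        "Extract the key answer from the LLM output below.\n"
        f"Question: {question}\n"
        f"LLM output: {llm_output}\n"
        f"Answer range: {answer_range}\n"
        "Key answer:"
    )

def extract_key_answer(question: str, llm_output: str, answer_range: str) -> str:
    # Heavy imports kept local so build_prompt stays usable without torch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="float16",   # the card lists float16 weights, ~16.1 GB VRAM
        device_map="auto",
    )
    inputs = tokenizer(
        build_prompt(question, llm_output, answer_range),
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=16, do_sample=False)
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```

Greedy decoding (`do_sample=False`) is used because answer extraction is a deterministic task; the short `max_new_tokens` budget reflects that key answers are typically a few tokens.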
LLM Name: XFinder Llama38it
Repository: 🤗 https://huggingface.co/IAAR-Shanghai/xFinder-llama38it
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-07-19
Maintainer: IAAR-Shanghai
Model Type: llama
Instruction-Based: Yes
Model Files: 2.0 GB (1-of-9), 1.9 GB (2-of-9), 2.0 GB (3-of-9), 1.9 GB (4-of-9), 2.0 GB (5-of-9), 1.9 GB (6-of-9), 2.0 GB (7-of-9), 1.3 GB (8-of-9), 1.1 GB (9-of-9)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: cc-by-nc-nd-4.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: float16

Best Alternatives to XFinder Llama38it

Best Alternatives | Context / RAM | Downloads / Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 29090115
UltraLong Thinking | 4192K / 16.1 GB | 9302
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 17624
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 112215
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 8759
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 186247
Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 8751
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 138729
...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 43
....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 72
Note: in the source table, a green score (e.g. "73.2") indicates a model that outperforms IAAR-Shanghai/xFinder-llama38it.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124