Bloom 560M Finetuned Fraud by jslin09


Bloom 560M Finetuned Fraud is an open-source language model by jslin09. Key specs: 560M parameters, 2.2 GB VRAM required, bigscience-bloom-rail-1.0 license, fine-tuned, LLM Explorer score 0.07.

Tags: Bloom, Dataset: jslin09/fraud case ver..., Endpoints compatible, Finetuned, Legal, PyTorch, Region: us, Safetensors, Zh

Bloom 560M Finetuned Fraud Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
Legal research, Drafting legal arguments, Training AI on Chinese-language legal corpus
Applications:
Generating legal document drafts in Chinese, Fraud case text augmentation
Primary Use Cases:
Automated drafting of criminal fact paragraphs in fraud and theft cases
Limitations:
Limited applicability outside the specified legal context; performance on broader legal text tasks is not guaranteed
Considerations:
Use draft text responsibly and in conjunction with human legal expertise; careful validation is required.
Additional Notes 
Designed for legal texts written in Chinese and grounded in the corresponding legal system.
Supported Languages 
zh (high)
Training Details 
Data Sources:
jslin09/Fraud_Case_Verdicts
Data Volume:
74,823 judgments and rulings (判決、裁定)
Methodology:
Fine-tuned the BLOOM 560m model using public fraud case verdicts
Model Architecture:
BLOOM-based architecture, fine-tuned for specific legal text generation tasks
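The card does not spell out the fine-tuning recipe beyond "fine-tuned on public fraud case verdicts." A typical causal-LM setup concatenates the tokenized judgments and splits them into fixed-length training blocks. A minimal sketch of that chunking step (the block size and token IDs below are illustrative toy values, not from the model card):

```python
def group_into_blocks(token_ids, block_size):
    """Concatenate tokenized documents and split into fixed-length
    training blocks, dropping the ragged tail (standard causal-LM prep)."""
    flat = [tid for doc in token_ids for tid in doc]
    usable = (len(flat) // block_size) * block_size
    return [flat[i:i + block_size] for i in range(0, usable, block_size)]

# Toy example: three "judgments" already tokenized to IDs, block size 4.
docs = [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10]]
print(group_into_blocks(docs, 4))  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

In practice the real pipeline would tokenize the 74,823 verdicts with the model's BloomTokenizer and feed these blocks to a standard language-modeling trainer.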
Responsible AI Considerations 
Fairness:
The model is trained on unbiased public legal document datasets.
Transparency:
Open model with source and research details provided.
Accountability:
The developers provide a disclaimer regarding the legal consequences of any generated text.
Mitigation Strategies:
The model is documented with usage guidance, but no specific risk-mitigation strategies are implemented.
Input Output 
Input Format:
Legal incident summary or case fact text in Mandarin
Accepted Modalities:
text
Output Format:
Continued legal case drafts and crime facts
Performance Tips:
Use clear prompts phrased in the relevant legal terminology to improve output quality.
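Given the input format above, inference follows the standard transformers causal-LM pattern. A hedged sketch (the "犯罪事實：" / "crime facts:" prompt cue, the sample text, and the generation parameters are illustrative assumptions, not prescribed by the model card; the heavy model download only happens when run as a script):

```python
REPO = "jslin09/bloom-560m-finetuned-fraud"

def build_prompt(case_facts: str) -> str:
    """Prepend a drafting cue to the Mandarin case-fact summary.
    The cue wording is an assumption, not from the model card."""
    return f"犯罪事實：{case_facts}"

def generate_draft(case_facts: str, max_new_tokens: int = 200) -> str:
    # Deferred import: loading transformers and ~2.2 GB of float32
    # weights is only needed at actual generation time.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO)
    model = AutoModelForCausalLM.from_pretrained(REPO)
    inputs = tokenizer(build_prompt(case_facts), return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.pad_token_id,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Sample Mandarin case-fact fragment (illustrative only).
    print(generate_draft("被告意圖為自己不法之所有，向被害人佯稱投資可獲利"))
```

Output should be treated as a draft for human legal review, per the considerations above.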
LLM Name: Bloom 560M Finetuned Fraud
Repository: https://huggingface.co/jslin09/bloom-560m-finetuned-fraud
Model Size: 560m
Required VRAM: 2.2 GB
Updated: 2026-03-29
Maintainer: jslin09
Model Type: bloom
Model Files: 2.2 GB, 2.2 GB, 0.0 GB
Supported Languages: zh
Model Architecture: BloomForCausalLM
License: bigscience-bloom-rail-1.0
Transformers Version: 4.26.1
Tokenizer Class: BloomTokenizer
Padding Token: <pad>
Vocabulary Size: 250880
Torch Data Type: float32
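The 2.2 GB VRAM figure is consistent with the spec sheet above: 560M parameters stored in float32 take 4 bytes each. Rough arithmetic (using the rounded 560M parameter count from the listing, not the exact BLOOM weight count):

```python
params = 560_000_000      # rounded parameter count from the spec sheet
bytes_per_param = 4       # torch dtype float32 = 4 bytes per parameter
weight_bytes = params * bytes_per_param
print(f"{weight_bytes / 1e9:.2f} GB")  # → 2.24 GB, matching the ~2.2 GB listed
```

Loading in float16 would roughly halve this footprint, at some risk to output quality.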

Best Alternatives to Bloom 560M Finetuned Fraud

Best Alternatives | Context / RAM | Downloads | Likes
Bloomz 560M | 0K / 1.1 GB | 1100330 | 137
Train Test Bloom560 | 0K / 2.2 GB | 5 | 0
Bloom 560M | 0K / 1.1 GB | 219022 | 371
Bloomz 560M Sft Chat | 0K / 1.1 GB | 945 | 10
Promt Generator | 0K / 2.2 GB | 14094 | 2
Bloom 560M RLHF V2 | 0K / 1.1 GB | 1045 | 3
Bloom 560M RLHF | 0K / 1.1 GB | 1057 | 1
Train Test | 0K / 2.2 GB | 30 | 0
Guitester | 0K / 2.2 GB | 5 | 0
Product Description Fr | 0K / 2.2 GB | 5 | 0


Original data from HuggingFace, OpenCompass and various public git repos.