Medit Xxl by grammarly


Tags: arxiv:2402.16472, dataset:facebook/asset, dataset:matejklemen/falko merl..., dataset:paws, dataset:paws-x, dataset:wi locness, endpoints compatible, lora, region:us
Languages: ar, de, en, es, ja, ko, zh
Model Card on HF 🤗: https://huggingface.co/grammarly/medit-xxl

Medit Xxl Benchmarks

nn.n%: how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Medit Xxl (grammarly/medit-xxl)

Medit Xxl Parameters and Internals

Model Type 
text-generation
Use Cases 
Areas:
research
Applications:
multilingual text editing, instruction tuning
Additional Notes 
Supports both multilingual and cross-lingual text revision.
Supported Languages 
Arabic (NLP), Chinese (NLP), English (NLP), German (NLP), Japanese (NLP), Korean (NLP), Spanish (NLP)
Training Details 
Data Sources:
mEdIT dataset
Methodology:
fine-tuning the MBZUAI/bactrian-x-llama-13b-lora model
Model Architecture:
Not specified
Input Output 
Input Format:
Adherence to the specified instruction format is required
Accepted Modalities:
text
Output Format:
Edited version of the input text following the instruction
Performance Tips:
The instruction and task description should follow the specified format
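The input/output flow above can be sketched in Python with `transformers` and `peft`, since the model is published as a LoRA adapter on the base model named in Training Details. The Alpaca-style prompt template and the helper names below are assumptions for illustration only; the exact instruction format the model requires is documented on its Hugging Face model card.

```python
# Hedged sketch: building an instruction-style editing prompt and applying the
# grammarly/medit-xxl LoRA adapter with PEFT. The template in build_prompt()
# is an ASSUMPTION for illustration; consult the HF model card for the
# required format.

def build_prompt(instruction: str, text: str) -> str:
    """Combine an editing instruction and the input text into one prompt."""
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{text}\n\n"
        f"### Response:\n"
    )

def run_edit(instruction: str, text: str) -> str:
    """Load the base model plus adapter and generate an edited version of `text`."""
    # Imports kept local so build_prompt() works without these libraries.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "MBZUAI/bactrian-x-llama-13b-lora"  # base named in Training Details
    adapter_id = "grammarly/medit-xxl"

    tokenizer = AutoTokenizer.from_pretrained(adapter_id)
    model = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(model, adapter_id)  # attach LoRA weights

    inputs = tokenizer(build_prompt(instruction, text), return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

prompt = build_prompt("Fix grammatical errors in this sentence.",
                      "She go to school yesterday.")
print(prompt)
```

Generation itself needs the 13B base weights downloaded, so `run_edit()` is shown but not executed here.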
LLM Name: Medit Xxl
Repository 🤗: https://huggingface.co/grammarly/medit-xxl
Required VRAM: 0 GB
Updated: 2025-09-08
Maintainer: grammarly
Model Files: 0.0 GB
Supported Languages: en de es ar ja ko zh
Model Architecture: AutoModel
License: cc-by-nc-sa-4.0
Is Biased: none
Tokenizer Class: LlamaTokenizer
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: v_proj|q_proj|o_proj|k_proj
LoRA Alpha: 16
LoRA Dropout: 0.05
R Param: 8
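The LoRA hyperparameters listed above map directly onto a `peft` `LoraConfig`. A minimal sketch, assuming the `peft` library; the `task_type` value is an assumption (the card does not state it), chosen to match a text-generation model:

```python
# Minimal sketch: a peft LoraConfig mirroring the adapter settings listed above.
from peft import LoraConfig

config = LoraConfig(
    r=8,                      # R Param
    lora_alpha=16,            # LoRA Alpha
    lora_dropout=0.05,        # LoRA Dropout
    target_modules=["v_proj", "q_proj", "o_proj", "k_proj"],  # PEFT Target Modules
    bias="none",              # Is Biased: none
    task_type="CAUSAL_LM",    # ASSUMPTION: not stated on the card
)
```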

Best Alternatives to Medit Xxl

Best Alternatives                Context / RAM    Downloads   Likes
Distil Longformer Base 4096      4K / 0.4 GB      6           0
Daedalus 1                       1K / -- GB       4           1
Tiny Random Detr                 1K / 0.2 GB      12          0
Opengpt2 Pytorch Backward        1K / 6 GB        14          1
Opengpt2 Pytorch Forward         1K / 6 GB        6           1
Finsent Transformer              0.5K / 0.4 GB    2           1
Bert Chinese L 12 H 768 A 12     0.5K / 0.4 GB    4           1
Simbert Chinese Tiny             0.5K / 0 GB      6           0
Simbert Chinese Base             0.5K / 0.4 GB    6           0
Bert Tiny                        0.5K / 0 GB      10907259    126
Note: a green score (e.g. "73.2") means that the model is better than grammarly/medit-xxl.

Rank the Medit Xxl Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 51,187 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124