Ct2fast Codet5p 770M Py by michaelfeil


  Arxiv:2305.07922   Ctranslate2   Endpoints compatible   Float16   Int8   Region:us

Ct2fast Codet5p 770M Py Benchmarks

Ct2fast Codet5p 770M Py (michaelfeil/ct2fast-codet5p-770m-py)

Ct2fast Codet5p 770M Py Parameters and Internals

Model Type 
encoder-decoder
Use Cases 
Areas:
code understanding, code generation
Applications:
text-to-code retrieval, code completion, retrieval-augmented generation
Primary Use Cases:
Python code generation
Limitations:
performance may degrade in zero-shot settings
Supported Languages 
python (high)
Training Details 
Data Sources:
github-code dataset
Data Volume:
permissively licensed subset of the deduplicated github-code dataset
Methodology:
span denoising, causal language modeling, contrastive learning, and text-code matching
Model Architecture:
shallow encoder and deep decoder
Input Output 
Accepted Modalities:
text
Output Format:
text
Performance Tips:
Specify compute_type 'int8_float16' for CUDA devices.
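The tip above can be sketched with a small helper; the commented loading call uses CTranslate2's real `Translator` API, but the local model path is an assumption:

```python
def pick_compute_type(device: str) -> str:
    """Choose a CTranslate2 compute_type: int8 weights with float16
    activations on CUDA, plain int8 on CPU (a common heuristic, not
    the only valid choice)."""
    return "int8_float16" if device == "cuda" else "int8"

# Loading sketch (assumes the converted model files are available
# locally in "ct2fast-codet5p-770m-py"):
# import ctranslate2
# translator = ctranslate2.Translator(
#     "ct2fast-codet5p-770m-py",
#     device="cuda",
#     compute_type=pick_compute_type("cuda"),
# )
```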
Release Notes 
Version:
unknown
Date:
2023-05-20
Notes:
Converted using ct2-transformers-converter
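The conversion note above refers to the `ct2-transformers-converter` CLI that ships with CTranslate2; a sketch of the likely invocation (the output directory name and quantization choice are assumptions based on this page's metadata):

```shell
pip install ctranslate2 transformers
ct2-transformers-converter \
  --model Salesforce/codet5p-770m-py \
  --output_dir ct2fast-codet5p-770m-py \
  --quantization int8_float16 \
  --trust_remote_code
```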
LLM Name: Ct2fast Codet5p 770M Py
Repository: 🤗 https://huggingface.co/michaelfeil/ct2fast-codet5p-770m-py
Model Size: 770m
Required VRAM: 1.5 GB
Updated: 2025-09-23
Maintainer: michaelfeil
Model Files: 1.5 GB
Model Architecture: AutoModel
License: bsd-3-clause
Model Max Length: 512
Tokenizer Class: RobertaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Errors: replace

Best Alternatives to Ct2fast Codet5p 770M Py

Best Alternatives            Context / RAM    Downloads    Likes
Codet5p 770M Py Ct2 Int8     0K / 0.7 GB      8            0
Ct2fast Codet5p 770M         0K / 1.5 GB      7            4
Note: a green score (e.g. "73.2") indicates that the model outperforms michaelfeil/ct2fast-codet5p-770m-py.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124