Azma Deepseek Coder 1.3B Instruct Structured Output Peft Merge by AswanthCManoj


Azma Deepseek Coder 1.3B Instruct Structured Output Peft Merge is an open-source language model by AswanthCManoj. Key features: 1.3B parameters, 2.7 GB required VRAM, 16K context length, instruction-based, code-generating, LLM Explorer Score 0.11.
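
The VRAM figure follows almost entirely from the parameter count and dtype. Below is a quick back-of-the-envelope check; the bytes-per-parameter constant is our assumption, not a number from the listing.

```python
# Sanity check of the listed 2.7 GB VRAM figure: weight memory for a
# 1.3B-parameter model stored in float16.
n_params = 1.3e9        # 1.3B parameters, per the listing
bytes_per_param = 2     # float16 = 2 bytes/parameter (our assumption)
print(f"~{n_params * bytes_per_param / 1e9:.1f} GB for weights")  # ~2.6 GB
# The remainder of the listed 2.7 GB is consistent with embedding and buffer
# overhead; the KV cache at the full 16K context comes on top of this.
```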

Tags: Autotrain compatible, Codegen, Conversational, Endpoints compatible, Instruct, Llama, Region: us, Safetensors

Azma Deepseek Coder 1.3B Instruct Structured Output Peft Merge Benchmarks

Scores (shown as nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Azma Deepseek Coder 1.3B Instruct Structured Output Peft Merge Parameters and Internals

LLM Name: Azma Deepseek Coder 1.3B Instruct Structured Output Peft Merge
Repository: 🤗 https://huggingface.co/AswanthCManoj/azma-deepseek-coder-1.3b-instruct-structured-output-peft-merge
Model Size: 1.3b
Required VRAM: 2.7 GB
Updated: 2025-09-23
Maintainer: AswanthCManoj
Model Type: llama
Instruction-Based: Yes
Model Files: 2.7 GB
Generates Code: Yes
Model Architecture: LlamaForCausalLM
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: stic
Vocabulary Size: 32256
Torch Data Type: float16
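
The repository id, architecture, and dtype above are enough for a minimal loading sketch. This is a sketch only: it assumes transformers ≥ 4.37 (per the listed version) plus accelerate are installed and roughly 3 GB of GPU memory is free, and the instruction template shown is our guess; check the model card for the format the fine-tune was actually trained with.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from the listing above.
repo = "AswanthCManoj/azma-deepseek-coder-1.3b-instruct-structured-output-peft-merge"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the listed Torch Data Type
    device_map="auto",          # requires the accelerate package
)

# ASSUMPTION: a generic instruction template; the fine-tune may expect the
# DeepSeek Coder chat format instead -- consult the model card.
prompt = (
    "### Instruction:\n"
    "Return a JSON object with fields 'language' and 'year' for Python.\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```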

Best Alternatives to Azma Deepseek Coder 1.3B Instruct Structured Output Peft Merge

Best Alternatives | Context / RAM | Downloads | Likes
Deepseek Coder 1.3B Instruct | 16K / 2.7 GB | 59571 | 160
Speechless Coder Ds 1.3B | 16K / 2.7 GB | 926 | 0
...c Deepseek Coder 1.3B Instruct | 16K / 5.4 GB | 5 | 0
Hpc Coder V2.1.3B | 16K / 2.7 GB | 43 | 5
... 1.3B Instruct Trt Int4 G64 Hf | 16K / 0.9 GB | 5 | 0
Datascience Coder 1.3B | 16K / 2.7 GB | 16 | 1
...pseek Coder 1.3B Instruct GPTQ | 16K / 0.9 GB | 123 | 7
...epseek Coder 1.3B Instruct AWQ | 8K / 0.9 GB | 138 | 5
...Coder 1.3B Function Calling V1 | 16K / 2.7 GB | 946 | 2
Note: a green score (e.g. "73.2") means the model is better than AswanthCManoj/azma-deepseek-coder-1.3b-instruct-structured-output-peft-merge.
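
Several rows above are pre-quantized variants weighing in at roughly 0.9 GB. As an illustration of how that footprint arises, here is a hedged sketch that loads the first alternative in 4-bit via bitsandbytes; the deepseek-ai/deepseek-coder-1.3b-instruct repo id is our assumption for that row, and the truncated quantized repo names in the table are left as shown.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# ASSUMED HF id for the "Deepseek Coder 1.3B Instruct" row above.
repo = "deepseek-ai/deepseek-coder-1.3b-instruct"

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb,
    device_map="auto",
)
# 1.3e9 params * ~0.5 bytes/param ≈ 0.7 GB of weights, in line with the
# ~0.9 GB shown for the pre-quantized GPTQ/AWQ rows (which also carry
# quantization metadata and keep some layers in higher precision).
```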


Original data from HuggingFace, OpenCompass and various public git repos.