Ana V1 M7 AWQ by TheBloke


Tags: 4-bit · AWQ · base model (quantized): sao10k/an... · base model: Sao10K/Ana-v1-m7 · en · mistral · quantized · region: us · safetensors
Model Card on HF 🤗: https://huggingface.co/TheBloke/Ana-v1-m7-AWQ
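The AWQ checkpoint can be loaded through the Transformers AWQ integration (the card lists Transformers 4.35.2). Below is a minimal sketch, not the card's official usage snippet; the Alpaca-style prompt template in `build_prompt` is an assumption, so check the base model card (Sao10K/Ana-v1-m7) for the template the model was actually trained with.

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style template -- an assumption; verify against the
    # Sao10K/Ana-v1-m7 base model card before relying on it.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

if __name__ == "__main__":
    # Requires `pip install transformers autoawq` and a CUDA GPU
    # with roughly 5 GB of free VRAM for the 4.2 GB weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "TheBloke/Ana-v1-m7-AWQ"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

    inputs = tokenizer(build_prompt("Introduce yourself."),
                       return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128,
                         do_sample=True, temperature=0.8)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                           skip_special_tokens=True))
```

The model download and generation run only under `__main__`, so the prompt helper can be reused without pulling the weights.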

Ana V1 M7 AWQ Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4"). (Score not yet available for this model.)
Ana V1 M7 AWQ (TheBloke/Ana-v1-m7-AWQ)

Ana V1 M7 AWQ Parameters and Internals

Model Type 
mistral
Additional Notes 
Ana: a model solely focused on the RP / ERP experience.
- Little to no censorship *during* roleplay. It may be somewhat censored at 0 context, but this does not affect roleplays.
- Little to no positivity bias *during* roleplay, from my tests at least.
- Verbose, kinda smart, slightly horny by default.
- Purely experimental: I do not know what I am doing; this is for fun.
- A merge + train. The final qLoRA train took 3 hours on a 4090.
- No 70Bs, because the 4090 broke down. Falling back to a 3060 12 GB and training them on RunPod for now.
Supported Languages 
en (Proficient)
LLM Name: Ana V1 M7 AWQ
Repository 🤗: https://huggingface.co/TheBloke/Ana-v1-m7-AWQ
Model Name: Ana v1 m7
Model Creator: Sao10K
Base Model(s): Ana V1 M7 (Sao10K/Ana-v1-m7)
Model Size: 7.2B
Required VRAM: 4.2 GB
Updated: 2026-02-06
Maintainer: TheBloke
Model Type: mistral
Model Files: 4.2 GB
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MistralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
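The 4.2 GB figure is consistent with back-of-the-envelope arithmetic for 4-bit AWQ weights. A sketch, assuming the common AWQ configuration of group size 128 with an fp16 scale and zero-point per group (the exact packing is an assumption, not read from this checkpoint):

```python
def awq_weight_bytes(n_params: float, bits: int = 4, group_size: int = 128) -> float:
    """Rough size of AWQ-packed weights, ignoring unquantized layers."""
    packed = n_params * bits / 8             # packed 4-bit weight data
    overhead = (n_params / group_size) * 4   # ~2-byte scale + ~2-byte zero per group
    return packed + overhead

est_gb = awq_weight_bytes(7.2e9) / 1e9
print(f"~{est_gb:.1f} GB of packed weights")
```

This gives roughly 3.8 GB; with the embedding and LM-head matrices typically left in fp16 (32000 vocab × ~4096 hidden × 2 bytes × 2 matrices ≈ 0.5 GB), the total lands close to the 4.2 GB file size listed above.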

Best Alternatives to Ana V1 M7 AWQ

Best Alternatives                       Context / RAM    Downloads  Likes
Hc Mistral Alpaca Merged AWQ            32K / 4.2 GB     9          0
Phoenix AWQ                             32K / 4.2 GB     13         1
Mistral Ft Optimized 1227 AWQ           32K / 4.2 GB     5          1
Metis 0.5 AWQ                           32K / 4.2 GB     2          1
XDAN L1 Chat RL V1 AWQ                  32K / 4.2 GB     6          1
Apricot Wildflower 20 AWQ               32K / 4.2 GB     3          1
OpenZephyrChat V0.2 AWQ                 32K / 4.2 GB     8          3
OpenZephyrChat AWQ                      32K / 4.2 GB     4          2
...2.5 Neural Chat V3.3 Slerp AWQ       32K / 4.2 GB     2          1
Tess XS V1.1 AWQ                        32K / 4.2 GB     5          1
Note: on the source site, a green score (e.g. "73.2") means the model is better than TheBloke/Ana-v1-m7-AWQ.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124