MemGPT DPO MoE Test by starsnatched


MemGPT DPO MoE Test is an open-source language model by starsnatched. Features: 12.9B-parameter LLM, VRAM: 25.8 GB, Context: 32K, License: apache-2.0, MoE, Instruction-Based, LLM Explorer Score: 0.12.

Tags: autotrain-compatible, en, endpoints-compatible, function, function-calling, instruct, memgpt, mixtral, moe, region:us, safetensors, sharded, tensorflow

MemGPT DPO MoE Test Benchmarks

How the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). No score is currently listed for this model.

MemGPT DPO MoE Test Parameters and Internals

Model Type: language model, transformer decoder
Primary Use Cases: base model for MemGPT agents
Limitations: may exhibit unreliable, unsafe, or biased behavior
Considerations: double-check the results it produces
Supported Languages: en (primary)
Training Methodology: Mixture of Experts (MoE) with 2 experts active per token; a toy routing sketch follows below
Training Context Length: 8192
Training Hardware: 2x A100 80GB GPUs
Model Architecture: transformer decoder
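To make the "2 experts per token" idea concrete, here is a minimal, hypothetical sketch of Mixtral-style top-2 expert routing in plain PyTorch. It illustrates the technique only; the function and variable names are invented for this example and this is not the model's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def top2_moe_layer(x, gate, experts):
    """Route each token to its top-2 experts and mix their outputs.

    x:       (num_tokens, d_model) token representations
    gate:    nn.Linear(d_model, num_experts) router
    experts: list of per-expert feed-forward modules
    """
    logits = gate(x)                          # (num_tokens, num_experts)
    weights, idx = logits.topk(2, dim=-1)     # pick the 2 best experts per token
    weights = F.softmax(weights, dim=-1)      # renormalize over the chosen pair
    out = torch.zeros_like(x)
    for slot in range(2):                     # two active experts per token
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e          # tokens whose slot-th choice is expert e
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
    return out

# Toy usage with made-up sizes:
d_model, n_experts = 16, 4
gate = nn.Linear(d_model, n_experts)
experts = [nn.Sequential(nn.Linear(d_model, 32), nn.SiLU(), nn.Linear(32, d_model))
           for _ in range(n_experts)]
mixed = top2_moe_layer(torch.randn(10, d_model), gate, experts)  # (10, 16)
```

In the real MixtralForCausalLM architecture this routing lives inside each decoder layer's sparse MoE block rather than as a standalone function.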
Input Format: ChatML (a sketch of the prompt layout follows below)
Accepted Modalities: text
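ChatML wraps each turn in <|im_start|>/<|im_end|> role markers. A minimal sketch of assembling such a prompt follows; the system message here is an invented placeholder for illustration, not text from the model card.

```python
# Minimal ChatML prompt assembly; the system text is a hypothetical placeholder.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a MemGPT agent.",        # assumed system message, illustration only
    "What is in your core memory?",
)
```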
LLM Name: MemGPT DPO MoE Test
Repository: https://huggingface.co/starsnatched/MemGPT-DPO-MoE-test
Model Size: 12.9b
Required VRAM: 25.8 GB
Updated: 2024-10-25
Maintainer: starsnatched
Model Type: mixtral
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-6), 4.9 GB (2-of-6), 5.0 GB (3-of-6), 5.0 GB (4-of-6), 4.9 GB (5-of-6), 1.0 GB (6-of-6)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.2
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Torch Data Type: float16
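Given these specs (MixtralForCausalLM, float16 weights, roughly 25.8 GB of VRAM), loading the checkpoint with Hugging Face transformers might look like the sketch below. The generation settings are illustrative assumptions, not values from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "starsnatched/MemGPT-DPO-MoE-test"

tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizer per the specs above
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the listed torch data type
    device_map="auto",          # expects ~25.8 GB of GPU memory in float16
)

# A ChatML-style prompt, as described in the input format section above.
prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # illustrative setting
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```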

Best Alternatives to MemGPT DPO MoE Test

Model | Context / RAM | Downloads | Likes
Inf Silent Kunoichi V0.1 2x7B | 32K / 25.6 GB | 5 | 0
Inf Silent Kunoichi V0.2 2x7B | 32K / 25.6 GB | 10 | 1
NearalMistral 2x7B | 32K / 25.8 GB | 52 | 1
Megatron V3 2x7B | 32K / 25.8 GB | 108 | 3
MergedExpert 2x8b | 32K / 25.8 GB | 5 | 0
MergedExperts 2x8b | 32K / 25.8 GB | 5 | 0
MistarlingMaid 2x7B Base | 32K / 25.8 GB | 5 | 0
...afted Hermetic Platypus C 2x7B | 32K / 25.8 GB | 97 | 0
...tral 7B Instruct V0.2 2x7B MoE | 32K / 25.8 GB | 1131 | 4
...tral Instruct MoE Experimental | 32K / 25.8 GB | 8 | 2


Original data from HuggingFace, OpenCompass and various public git repos.