ProofGPT V0.1 by hoskinson-center


ProofGPT V0.1 is an open-source language model by hoskinson-center. Key figures: LLM, VRAM 2.9 GB, context 2K, MIT license, HF Score 29.9, LLM Explorer Score 0.1. Benchmarks: ARC 22.9, HellaSwag 28.7, MMLU 26, TruthfulQA 51.6, WinoGrande 50.4, GSM8K 0.1.

Tags: Dataset: hoskinson-center/proof... · En · Endpoints compatible · GPT-NeoX · PyTorch · Region: US

ProofGPT V0.1 Benchmarks

ProofGPT V0.1 (hoskinson-center/proofGPT-v0.1)

ProofGPT V0.1 Parameters and Internals

Model Type: text generation, causal-lm
Additional Notes: Training was initialized from the pythia-1.3b weights. Previous commits were trained on proof-pile v1.0.
Training Details
Data Sources: hoskinson-center/proof-pile (see the loading sketch below)
Model Architecture: GPT-NeoX
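The training corpus is published on the Hugging Face Hub. A minimal sketch of inspecting it with the datasets library follows; the "train" split name and streaming access are assumptions about the dataset layout (newer datasets versions may also require trust_remote_code=True for script-based datasets).

```python
# Sketch: peek at the hoskinson-center/proof-pile corpus listed under Data Sources.
# Assumption: a "train" split exists; field names are inspected rather than assumed.
from datasets import load_dataset

proof_pile = load_dataset(
    "hoskinson-center/proof-pile",
    split="train",
    streaming=True,  # stream instead of downloading the full corpus
)

first_record = next(iter(proof_pile))
print(first_record.keys())       # see which fields a record actually carries
print(str(first_record)[:500])   # first few hundred characters of the raw record
```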
Release Notes
Version 3bcdc4e: Replaced the weights with a model trained on proof-pile v1.1.
Version 9695b51: Updated the tokenizer to have bos, eos, and unk tokens.
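Because the release notes identify releases by commit hash, a specific release can be pinned with the revision argument of from_pretrained. A minimal sketch, assuming the short hashes above are still reachable on the Hub:

```python
# Sketch: pin ProofGPT V0.1 to one of the commits named in the release notes.
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "hoskinson-center/proofGPT-v0.1"
# 9695b51 = tokenizer update (bos/eos/unk); 3bcdc4e = weights trained on proof-pile v1.1.
REVISION = "9695b51"

tokenizer = AutoTokenizer.from_pretrained(REPO, revision=REVISION)
model = AutoModelForCausalLM.from_pretrained(REPO, revision=REVISION)
```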
LLM Name: ProofGPT V0.1
Repository: https://huggingface.co/hoskinson-center/proofGPT-v0.1
Required VRAM: 2.9 GB
Updated: 2026-03-30
Maintainer: hoskinson-center
Model Type: gpt_neox
Model Files: 2.9 GB
Supported Languages: en
Model Architecture: GPTNeoXForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.24.0
Tokenizer Class: GPTNeoXTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 50304
Torch Data Type: float16
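The settings above map directly onto a standard Transformers load (GPTNeoXForCausalLM via AutoModelForCausalLM, fp16 weights, 2048-token context, <|endoftext|> as padding token). A minimal generation sketch, assuming a CUDA device with roughly 3 GB of free VRAM; the prompt is purely illustrative.

```python
# Sketch: load ProofGPT V0.1 with the listed settings and generate a short continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "hoskinson-center/proofGPT-v0.1"

tokenizer = AutoTokenizer.from_pretrained(REPO)  # GPTNeoXTokenizer per the listing
model = AutoModelForCausalLM.from_pretrained(
    REPO,
    torch_dtype=torch.float16,  # matches the listed fp16 weights (~2.9 GB)
).to("cuda").eval()

# Illustrative prompt; the model was trained on mathematical text (proof-pile).
prompt = r"\begin{theorem} There are infinitely many primes. \end{theorem} \begin{proof}"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,                   # keep prompt + output within the 2048-token context
        do_sample=True,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,  # padding token is <|endoftext|>
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```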

Best Alternatives to ProofGPT V0.1

Best Alternatives                        Context / RAM    Downloads  Likes
Catlm                                    8K / 7.8 GB      183        4
...Prover 14final Checkpoint 5830        4K / 14.9 GB     5          0
Neox Musenet Untrained                   4K / 7.3 GB      5          0
Stabillm Instruct De                     4K / 31.8 GB     7          0
Open Calm Large                          2K / 1.8 GB      840        10
MonoCoder OMP                            2K / 3.6 GB      27         0
Open Calm Small                          2K / 0.4 GB      4580       20
KULLM RLHF                               2K / 25.8 GB     3          3
Step3 Mk7                                2K / 25.8 GB     6          0
Ppo Model                                2K / 2.7 GB      6          0
Note: a green score (e.g. "73.2") means that model performs better than hoskinson-center/proofGPT-v0.1.


Original data from HuggingFace, OpenCompass and various public git repos.