- Enhances STEM learning across different age groups
- Provides meaningful and empathetic dialogues
- Facilitates imaginative exploration in scientific contexts
Limitations:
- Responses are fixed and do not adapt or learn
- Empathetic responses are algorithmic, not a substitute for human empathy
Additional Notes
Core influences include Star Trek's 'Data', Lewis Carroll, and educational resources.
Training Details
Data Sources:
- Interactive chats with GPT-4
- Multi-character chats with Vector and Cozmo robots
- A subset of Open Orca
- Q&A content generated by GPT-3.5 Turbo
- OpenAssistant
Training Time:
Approximately one month
Model Architecture:
TinyLlama 1.1B (based on the 3T checkpoint)
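Because the base model is TinyLlama 1.1B, the model can presumably be prompted the same way as the TinyLlama chat checkpoints. The sketch below builds a Zephyr-style chat prompt; the `<|user|>`/`<|assistant|>` template is an assumption carried over from TinyLlama-1.1B-Chat, not something this card documents, so verify it against the model's tokenizer configuration.

```python
# Sketch: build a Zephyr-style prompt as used by TinyLlama chat checkpoints.
# Assumption: Cinder follows the same "<|user|>/<|assistant|>" tag format as
# TinyLlama-1.1B-Chat; confirm against the model's tokenizer_config.json.

def build_prompt(turns):
    """Join (role, text) pairs into one prompt string, leaving an open
    assistant turn for the model to complete."""
    parts = [f"<|{role}|>\n{text}</s>" for role, text in turns]
    parts.append("<|assistant|>\n")  # model generates from here
    return "\n".join(parts)

prompt = build_prompt([
    ("system", "You are Cinder, a STEM tutoring assistant."),
    ("user", "Why is the sky blue?"),
])
print(prompt)
```

In practice this string would be tokenized and passed to a causal LM loaded with `AutoModelForCausalLM.from_pretrained("Josephgflowers/TinyLlama-3T-Cinder-v1.2")`; recent `transformers` releases can also apply the model's own template automatically via `tokenizer.apply_chat_template`.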
Responsible AI Considerations
Fairness:
Development includes measures against misuse to ensure respectful, secure interactions.
Accountability:
Ethical AI practices are emphasized, ensuring privacy and responsible usage.
Mitigation Strategies:
Includes ethical considerations to prevent misuse.