| Model Type | text-to-text, decoder-only, large language model |
|
| Use Cases |
| Areas: | Content Creation and Communication, Research and Education |

| Applications: | Chatbots and Conversational AI, Text Summarization |

| Primary Use Cases: | Text Generation, Natural Language Processing (NLP) Research |

| Limitations: | Training Data Bias, Complex Task Handling, Factual Accuracy |
|
|
| Supported Languages | English (high proficiency) |
|
| Training Details |
| Data Sources: | Web Documents, Code, Mathematics |

| Data Volume: | 6 trillion tokens |
| Methodology: | Trained using JAX and ML Pathways |
| Hardware Used: | TPUv5e |
|
| Safety Evaluation |
| Methodologies: | Structured evaluations, internal red-teaming |

| Risk Categories: | Text-to-Text Content Safety, Representational Harms, Memorization, Large-scale Harms |
|
|
| Responsible AI Considerations |
| Fairness: | Input data was pre-processed, and fairness evaluations are reported. |

| Transparency: | Model details are summarized in this card. |

| Mitigation Strategies: | Guidelines and tools for responsible use have been published. |
|
|
| Input / Output |
| Input Format: | Text string, such as a question, a prompt, or a document to be summarized. |

| Accepted Modalities: | Text |
| Output Format: | Generated English-language text in response, such as an answer or a summary. |
|
| Performance Tips: | Developers should tune runtime configuration, such as numeric precision (for example, bfloat16 versus float32) and quantization, for optimal performance; see the sketch below. |
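
The snippet below is a minimal sketch of text-in/text-out usage via the Hugging Face `transformers` library. It assumes the checkpoint is available on the Hub as `google/gemma-1.1-7b-it` and that a bfloat16-capable accelerator is present; the model id, dtype, prompt, and generation settings are illustrative, not prescriptive.

```python
# Minimal usage sketch (assumptions: Hub id "google/gemma-1.1-7b-it",
# a bfloat16-capable GPU; adapt to your environment).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-1.1-7b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # precision setting, per the Performance Tips row
    device_map="auto",
)

# The instruction-tuned variant expects chat-style turns; the tokenizer's
# chat template formats them correctly.
messages = [
    {"role": "user", "content": "Summarize in one sentence: Gemma is a family of lightweight, open large language models."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, dropping the echoed prompt.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

Loading in bfloat16 roughly halves memory relative to float32 while preserving dynamic range; further savings are possible with 8-bit or 4-bit quantized inference.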
|
|
| Release Notes |
| Version: | 1.1 |
| Notes: | This is Gemma 1.1 7B (IT), an update over the original instruction-tuned Gemma release. Gemma 1.1 was trained using a novel RLHF method and fixes some response bugs from the initial release. |
|
|
|