| Model Type | text generation, summarization, question answering, reasoning |

| Use Cases |
| Areas: | content creation, communication, research, education |
| Applications: | text generation, chatbots, conversational AI, text summarization |
| Primary Use Cases: | question answering, summarization, reasoning |
| Limitations: | Outputs can reflect biases in the training data; open-ended or highly complex tasks remain challenging. |
| Considerations: | Performance depends on prompt clarity and context length. |

| Supported Languages | Not specified; generated output is English-language text. |
| Training Details |
| Data Sources: | data sources shared with the Gemma model family |
| Methodology: | recurrent architecture with pre-training and instruction tuning |
| Hardware Used: | Not specified. |
| Model Architecture: | Recurrent; see Methodology above. |

| Safety Evaluation |
| Methodologies: | internal red-teaming, structured evaluations |
| Risk Categories: | text-to-text content safety, representational harms, memorization, large-scale harm |
| Ethical Considerations: | The model adheres to Google's internal safety policies. |

| Responsible AI Considerations |
| Fairness: | Evaluated for representational harms against benchmarks such as WinoBias and the BBQ dataset. |
| Transparency: | Details of the model and its evaluation processes are provided in the model card. |
| Accountability: | Not explicitly addressed. |
| Mitigation Strategies: | Content safety mechanisms and usage guidelines are provided. |

| Input Output |
| Input Format: | text string (a question, prompt, or document) |
| Accepted Modalities: | text |
| Output Format: | generated English-language text |
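Since the model accepts a plain text string and returns English-language text, a minimal sketch of assembling such an input is shown below. The `<start_of_turn>`/`<end_of_turn>` markers follow the published Gemma chat-template convention for instruction-tuned variants; the function name and the commented-out generation call are illustrative assumptions, not details taken from this model card.

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma-style turn markers so the model
    can tell where the user turn ends and its own turn begins.
    (Assumes the Gemma chat-template convention.)"""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt("Summarize the benefits of recurrent architectures.")
print(prompt)

# With Hugging Face transformers (assuming the model is published there),
# generation would look roughly like:
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="<model-id>")
#   print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```

For raw (non-instruction-tuned) use, the bare text string can be passed without turn markers.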