# Pre-installed Models
Docket comes with 7 AI models ready to use out of the box.
## Model Overview
| Model | Parameters | Best For | RAM Required |
|---|---|---|---|
| Llama 3.2 | 1B | Quick tasks, low resources | 4 GB |
| Gemma 3N | 3B | General conversation | 6 GB |
| Qwen 2.5 Uncensored | 7B | Unrestricted responses | 6 GB |
| Qwen 2.5 VL Vision | 7B | Image understanding | 8 GB |
| Qwen 2.5 Coder | 7B | Programming | 6 GB |
| Gemma 2 | 9B | Balanced performance | 8 GB |
| Qwen 2.5 | 14B | Complex reasoning | 12+ GB |
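If you want to check in advance whether a machine can hold one of these models, a quick script can compare the table's figures against free memory. The sketch below is illustrative only and not part of Docket; the requirement values are copied from the table above, and it relies on the third-party `psutil` package (`pip install psutil`).

```python
# Illustrative sketch (not part of Docket): check whether the machine has
# enough free RAM for a model before loading it.
import psutil

# Approximate RAM requirements in GB, copied from the table above.
RAM_REQUIRED_GB = {
    "Llama 3.2 (1B)": 4,
    "Gemma 3N (3B)": 6,
    "Qwen 2.5 Uncensored (7B)": 6,
    "Qwen 2.5 VL Vision (7B)": 8,
    "Qwen 2.5 Coder (7B)": 6,
    "Gemma 2 (9B)": 8,
    "Qwen 2.5 (14B)": 12,
}

def has_enough_ram(model_name: str) -> bool:
    """Return True if available system memory covers the model's estimate."""
    available_gb = psutil.virtual_memory().available / (1024 ** 3)
    return available_gb >= RAM_REQUIRED_GB[model_name]

if __name__ == "__main__":
    for name in RAM_REQUIRED_GB:
        status = "OK" if has_enough_ram(name) else "not enough free RAM"
        print(f"{name}: {status}")
```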
## Model Details
### Llama 3.2 (1B)
Best for: Quick responses and low-resource environments
- Fastest response times
- Lowest memory requirements
- Good for simple questions and tasks
- May struggle with complex reasoning
### Gemma 3N (3B)
Best for: General purpose conversations
- Balanced speed and quality
- Good context understanding
- Suitable for most everyday tasks
- Moderate resource usage
### Qwen 2.5 Uncensored (7B)
Best for: Unrestricted responses
- Fewer content restrictions
- Honest and direct responses
- Good for creative writing
- Otherwise the same capabilities as a standard Qwen 2.5 model of the same size
### Qwen 2.5 VL Vision (7B)
Best for: Understanding images
- Analyzes images you upload
- Describes what's in photos
- Reads text from images
- Answers questions about visual content
Usage:
- Load the model
- Click the attachment icon in chat
- Upload an image
- Ask questions about the image
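Docket's documented workflow for vision is the chat flow above. Purely for reference, the sketch below shows the general shape of a multimodal "image plus question" message as used by many OpenAI-compatible local servers; the model identifier and field names here are hypothetical and are not a confirmed Docket API.

```python
# Illustrative sketch only: Docket's documented workflow is the chat UI above.
# This shows the general shape of a multimodal request as used by many
# OpenAI-compatible local servers; the model identifier and field names are
# hypothetical, not Docket's confirmed API.
import base64
import json

def build_image_question(image_path: str, question: str) -> dict:
    """Package an image and a question into a chat-style message payload."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": "qwen2.5-vl",  # hypothetical model identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
    }

if __name__ == "__main__":
    payload = build_image_question("receipt.png", "What is the total on this receipt?")
    print(json.dumps(payload, indent=2)[:400])  # preview the request body
```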
### Qwen 2.5 Coder (7B)
Best for: Programming assistance
- Optimized for code generation
- Understands many programming languages
- Good at debugging and explaining code
- Can write tests and documentation
Supported languages:
- Python, JavaScript, TypeScript
- Rust, Go, C/C++
- HTML, CSS, SQL
- And many more
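As an illustration of the test-writing capability, a typical exchange is to paste a small function into chat and ask for tests. The function and pytest tests below are illustrative examples only; actual model output will vary.

```python
# Illustrative only: a small function you might paste into chat, plus the
# kind of pytest tests a coding model can generate for it.
def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated slug."""
    words = "".join(c if c.isalnum() or c.isspace() else " " for c in title).split()
    return "-".join(w.lower() for w in words)

def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  Multiple   spaces  ") == "multiple-spaces"
```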
### Gemma 2 (9B)
Best for: Balanced performance
- High quality responses
- Good reasoning capabilities
- Versatile for different tasks
- Moderate memory requirements
### Qwen 2.5 (14B)
Best for: Complex reasoning tasks
- Largest pre-installed model
- Best quality responses
- Handles complex instructions
- Requires more RAM and time
:::tip
Start with smaller models and move up if you need more capability. Larger models don't always produce better results for simple tasks.
:::
## Loading Models
- Open the Models section in the sidebar
- Click on a model to select it
- Click "Load" to load it into memory
- Wait for the status to show "Ready"
## Unloading Models
Models stay in memory until you:
- Load a different model
- Close the application
- Manually unload via the Models page
Only one local model can be loaded at a time.
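To make the single-slot behavior concrete, here is a minimal sketch of the pattern: loading a new model first releases whatever is already in memory. The class and method names are hypothetical and do not reflect Docket's internals.

```python
# Minimal sketch of single-slot model loading: loading a new model first
# unloads whatever is currently in memory. Names are hypothetical and do
# not reflect Docket's internal implementation.
from typing import Optional

class SingleSlotModelManager:
    def __init__(self) -> None:
        self.loaded: Optional[str] = None

    def load(self, model_name: str) -> None:
        if self.loaded is not None and self.loaded != model_name:
            self.unload()                   # only one local model at a time
        self.loaded = model_name
        print(f"{model_name}: Ready")       # mirrors the "Ready" status in the UI

    def unload(self) -> None:
        if self.loaded is not None:
            print(f"Unloading {self.loaded}")
            self.loaded = None

if __name__ == "__main__":
    manager = SingleSlotModelManager()
    manager.load("Llama 3.2 (1B)")
    manager.load("Qwen 2.5 Coder (7B)")     # implicitly unloads Llama 3.2 first
```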