Google Vertex AI#
Enterprise AI platform on Google Cloud offering Gemini and PaLM models alongside partner and open models such as Claude, Llama, Mistral, and Gemma.
Provider Information#
| Field | Value |
|---|---|
| Provider ID | google-vertex |
| Total Models | 284 |
| Authentication | Google Cloud credentials (OAuth 2.0); no separate API key |
| Status Page | https://status.cloud.google.com |
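Requests to the endpoints listed below are authorized with standard Google Cloud credentials rather than a provider API key. A minimal sketch using the gcloud CLI, assuming it is installed and a project is already configured:

```bash
# Authenticate with Application Default Credentials (opens a browser).
gcloud auth application-default login

# Print a short-lived OAuth 2.0 access token; pass it to the API
# in an "Authorization: Bearer" header.
gcloud auth print-access-token
```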
🔗 API Endpoints#
Documentation: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference
Models API: https://us-central1-aiplatform.googleapis.com/v1/projects/{project}/locations/{location}/models
Chat Completions: https://us-central1-aiplatform.googleapis.com/v1/projects
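As a hedged example of the Models API shown above, the following call lists models in a project; PROJECT_ID is a placeholder and the region is fixed to us-central1 to match the URL:

```bash
# List models registered in PROJECT_ID (region us-central1, matching the URL above).
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models"
```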
🔒 Privacy & Data Handling#
Privacy Policy: https://cloud.google.com/privacy
Terms of Service: https://cloud.google.com/terms
Retains User Data: Yes
Trains on User Data: No
⏱️ Data Retention Policy#
Policy Type: Fixed Duration
Retention Duration: 2 days (estimated)
Details: The privacy policy does not state a retention period; 48 hours is an estimate based on typical Google Cloud practices, and enterprise customers may be able to configure longer retention.
🛡️ Content Moderation#
Requires Moderation: No
Content Moderated: Yes
Moderated by: Google Vertex AI
🏢 Headquarters#
Mountain View, CA, USA
Available Models#
BERT#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Bert Base | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Bert Base Uncased | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Roberta Large | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Xlm Roberta Large | 128k | N/A | N/A | 📝 🔧 ⚡ |
| bert-base | N/A | N/A | N/A | 📝 |
| bert-base-uncased | N/A | N/A | N/A | 📝 |
Claude#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Claude 3 5 Haiku | 200k | $0.80 | $4.00 | 📝 👁️ 🔧 ⚡ |
| Claude 3 7 Sonnet | 200k | $3.00 | $15.00 | 📝 👁️ 🔧 ⚡ |
| Claude 3 Haiku | 200k | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Claude Opus 4 | 200k | $15.00 | $75.00 | 📝 👁️ 🔧 ⚡ |
| Claude Opus 4 1 | 200k | $15.00 | $75.00 | 📝 👁️ 🔧 ⚡ |
| Claude Sonnet 4 | 200k | $3.00 | $15.00 | 📝 👁️ 🔧 ⚡ |
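Claude models on Vertex AI are invoked through Anthropic's publisher path rather than the Google publisher. A rough sketch; the model version suffix, the rawPredict method, and the anthropic_version value are assumptions based on Anthropic's Vertex integration and should be checked against the inference documentation linked above:

```bash
# Hypothetical request to Claude 3.5 Haiku via the Anthropic publisher path.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/anthropic/models/claude-3-5-haiku@20241022:rawPredict" \
  -d '{
    "anthropic_version": "vertex-2023-10-16",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```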
DeepSeek#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Deepseek R1 | 64k | N/A | N/A | 📝 🔧 ⚡ |
| Deepseek R1 0528 Maas | 64k | N/A | N/A | 📝 🔧 ⚡ |
| Deepseek V3 | 64k | N/A | N/A | 📝 🔧 ⚡ |
| Deepseek V3 1 | 64k | N/A | N/A | 📝 🔧 ⚡ |
| Deepseek V3.1 Maas | 64k | N/A | N/A | 📝 🔧 ⚡ |
Embeddings#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Multimodalembedding | 1.0M | N/A | N/A | 📝 ⚡ |
| Text Embedding Large Exp 03 07 | 1.0M | N/A | N/A | 📝 ⚡ |
| Textembedding Gecko | 1.0M | N/A | N/A | 📝 ⚡ |
| embeddinggemma | N/A | N/A | N/A | 📝 |
| multimodalembedding | N/A | N/A | N/A | 📝 |
| text-embedding-large-exp-03-07 | N/A | N/A | N/A | 📝 |
| textembedding-gecko | N/A | N/A | N/A | 📝 |
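Embedding models such as textembedding-gecko are generally called with the :predict method. A minimal sketch, assuming the standard instances/predictions request shape for Google-published text embedding models:

```bash
# Hypothetical embedding request; the response carries
# predictions[].embeddings.values for each instance.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/textembedding-gecko:predict" \
  -d '{"instances": [{"content": "What is Vertex AI?"}]}'
```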
GPT#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Gpt Oss | N/A | N/A | N/A | 📝 ⚡ |
| Gpt Oss 120b Maas | N/A | N/A | N/A | 📝 ⚡ |
Gemini#
Gemma#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Codegemma | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Embeddinggemma | 1.0M | N/A | N/A | 📝 ⚡ |
| Gemma | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Gemma2 | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Gemma3 | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Gemma3n | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Medgemma | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Paligemma | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Shieldgemma2 | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| T5gemma | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Txgemma | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| codegemma | N/A | N/A | N/A | 📝 |
| gemma | N/A | N/A | N/A | 📝 |
| gemma2 | N/A | N/A | N/A | 📝 |
| gemma3 | N/A | N/A | N/A | 📝 |
| gemma3n | N/A | N/A | N/A | 📝 |
| medgemma | N/A | N/A | N/A | 📝 |
| paligemma | N/A | N/A | N/A | 📝 |
| shieldgemma2 | N/A | N/A | N/A | 📝 |
| t5gemma | N/A | N/A | N/A | 📝 |
| txgemma | N/A | N/A | N/A | 📝 |
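Unlike the managed model families above, open models such as Gemma are typically deployed from Model Garden onto a dedicated Vertex AI endpoint before they can be queried. A hedged sketch, where ENDPOINT_ID is a placeholder for an endpoint you have deployed and the instance payload shape depends on the serving container:

```bash
# Hypothetical prediction against a self-deployed Gemma endpoint.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/endpoints/ENDPOINT_ID:predict" \
  -d '{"instances": [{"prompt": "Write a haiku about the ocean."}]}'
```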
Jamba#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Jamba Large 1.6 | 256k | N/A | N/A | 📝 🔧 ⚡ |
Llama#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Codellama 7b Hf | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama 2 Quantized | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama 3.1 405b Instruct Maas | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama 3.2 90b Vision Instruct Maas | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama 3.3 70b Instruct Maas | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama 4 Maverick 17b 128e Instruct Maas | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama Guard | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama2 | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama3 | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama3 1 | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama3 2 | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama3 3 | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Llama4 | 128k | N/A | N/A | 📝 🔧 ⚡ |
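Models carrying a Maas suffix are served as managed model-as-a-service offerings and are commonly reached through Vertex AI's OpenAI-compatible chat completions surface. A rough sketch; the endpoints/openapi path, the v1beta1 API version, and the meta/ model prefix are assumptions to verify against the inference documentation linked above:

```bash
# Hypothetical chat completion against Llama 3.3 70B Instruct (MaaS).
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/us-central1/endpoints/openapi/chat/completions" \
  -d '{
    "model": "meta/llama-3.3-70b-instruct-maas",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```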
Mistral#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Mistral Large 2411 | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Mistral Ocr 2505 | 128k | N/A | N/A | 📝 🔧 ⚡ |
| Mistral Small 2503 | 128k | N/A | N/A | 📝 🔧 ⚡ |
Other#
PaLM#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Chat Bison | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Code Bison | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Codechat Bison | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| Text Bison | 1.0M | N/A | N/A | 📝 👁️ 🔧 ⚡ |
| chat-bison | N/A | N/A | N/A | 📝 |
| code-bison | N/A | N/A | N/A | 📝 |
| codechat-bison | N/A | N/A | N/A | 📝 |
| text-bison | N/A | N/A | N/A | 📝 |
Qwen#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Qwen Image | N/A | N/A | N/A | 📝 ⚡ |
| Qwen2 | N/A | N/A | N/A | 📝 ⚡ |
| Qwen3 | N/A | N/A | N/A | 📝 ⚡ |
| Qwen3 235b A22b Instruct 2507 Maas | N/A | N/A | N/A | 📝 ⚡ |
| Qwen3 Coder | N/A | N/A | N/A | 📝 ⚡ |
| Qwen3 Coder 480b A35b Instruct Maas | N/A | N/A | N/A | 📝 ⚡ |
T5#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| t5-1.1 | N/A | N/A | N/A | 📝 |
| t5-flan | N/A | N/A | N/A | 📝 |
Whisper#
| Model | Context | Input | Output | Features |
|---|---|---|---|---|
| Whisper Large | N/A | N/A | N/A | 📝 ⚡ |
Configuration#
Using with Model.Wiki#
# List all models from this provider
starmap list models --provider google-vertex

# Fetch latest models from provider API
starmap fetch --provider google-vertex

# Sync provider data
starmap sync --provider google-vertex
See Also#
← Back to Providers | ← Back to Home