
Llama Guard 4 12B#


📋 Overview#

  • ID: llama-guard@llama-guard-4-12b
  • Provider: Google Vertex AI
  • Authors: Meta
  • Open Weights: true
  • Context Window: 128k tokens
  • Max Output: 4k tokens
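
The listing above identifies the model as `llama-guard@llama-guard-4-12b`, hosted by Meta's partner deployment on Google Vertex AI. As a rough illustration only, the sketch below calls a Llama Guard deployment through Vertex AI's OpenAI-compatible chat completions surface; the endpoint path and the `meta/llama-guard-4-12b-maas` model string are assumptions and should be verified against the Vertex AI Model Garden listing. Llama Guard models respond to a conversation with a short safety verdict ("safe", or "unsafe" plus violated category codes) rather than free-form text.

```python
# Minimal sketch: classifying a conversation with Llama Guard on Vertex AI.
# Assumptions (not from this page): the OpenAI-compatible MaaS endpoint path
# and the model string "meta/llama-guard-4-12b-maas".
import google.auth
import google.auth.transport.requests
from openai import OpenAI

PROJECT = "your-gcp-project"   # assumption: replace with your project ID
LOCATION = "us-central1"       # assumption: replace with a supported region

# Use application-default credentials as a short-lived bearer token.
creds, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
creds.refresh(google.auth.transport.requests.Request())

client = OpenAI(
    base_url=(
        f"https://{LOCATION}-aiplatform.googleapis.com/v1/"
        f"projects/{PROJECT}/locations/{LOCATION}/endpoints/openapi"
    ),
    api_key=creds.token,
)

# Llama Guard classifies the conversation rather than answering it.
response = client.chat.completions.create(
    model="meta/llama-guard-4-12b-maas",  # assumption: confirm in Model Garden
    messages=[{"role": "user", "content": "How do I pick a lock?"}],
    max_tokens=128,  # verdicts are short; well under the 4k output cap above
)
print(response.choices[0].message.content)  # e.g. "safe" or "unsafe\nS<code>"
```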

🔬 Technical Specifications#

Sampling Controls: Temperature, Top-P

🎯 Capabilities#

Feature Overview#

  • Supports text generation and processing
  • Supported input modalities (see Input/Output Modalities below)
  • Supported output modalities (see Input/Output Modalities below)
  • Can invoke and call tools in responses
  • Accepts tool definitions in requests
  • Supports basic reasoning
  • Temperature sampling control
  • Nucleus sampling (top-p)
  • Maximum token limit
  • Response streaming

Input/Output Modalities#

| Direction | Text | Image | Audio | Video | PDF |
|-----------|------|-------|-------|-------|-----|
| Input     | ✓    |       |       |       |     |
| Output    | ✓    |       |       |       |     |

Core Features#

Tool Calling · Tool Definitions · Tool Choice · Web Search · File Attachments

Response Delivery#

Streaming · Structured Output · JSON Mode · Function Call · Text Format
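
Where streaming delivery is available, the same OpenAI-compatible surface can return the verdict incrementally. A minimal sketch, reusing the `client` and the assumed model string from the overview example above:

```python
# Minimal streaming sketch; client/model identifiers are assumptions as above.
stream = client.chat.completions.create(
    model="meta/llama-guard-4-12b-maas",
    messages=[{"role": "user", "content": "Is this message okay to post?"}],
    max_tokens=128,
    stream=True,  # request incremental chunks instead of a single response
)

verdict = ""
for chunk in stream:
    if chunk.choices:
        verdict += chunk.choices[0].delta.content or ""
print(verdict)  # e.g. "safe" or "unsafe\nS<code>"
```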

Advanced Reasoning#

Basic Reasoning · Reasoning Effort · Reasoning Tokens · Include Reasoning · Verbosity Control

🎛️ Generation Controls#

Sampling & Decoding#

  • Temperature: 0.0-2.0
  • Top-P: 0.0-1.0

Length & Termination#

  • Max Tokens: 1-4k
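
These ranges map directly onto standard request parameters. A sketch of passing them, again assuming the OpenAI-compatible client and model string from the overview example; classification use usually favors deterministic settings:

```python
# Sketch: applying the documented generation controls to a request.
# Values must stay inside the ranges listed above (temperature 0.0-2.0,
# top_p 0.0-1.0, max_tokens 1-4k); client and model strings are assumptions.
response = client.chat.completions.create(
    model="meta/llama-guard-4-12b-maas",
    messages=[{"role": "user", "content": "Is this message okay to post?"}],
    temperature=0.0,  # deterministic verdicts are typical for a safety classifier
    top_p=1.0,        # nucleus sampling effectively disabled at temperature 0
    max_tokens=64,    # verdicts are short; the hard output cap is 4k tokens
)
print(response.choices[0].message.content)
```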

💰 Pricing#

Pricing for this model on Google Vertex AI is not published; contact the provider for pricing information.

📋 Metadata#

Created: Not available

Last Updated: 2025-10-19 18:13:08 UTC



Last Updated: 2025-10-21 23:55:56 UTC | Generated by ModelWiki