llama-4-maverick-17b-128e-instruct

Developed by: Meta

📋 Technical Specifications

| Specification | Value |
| --- | --- |
| Model ID | llama-4-maverick-17b-128e-instruct |

🎯 Capabilities

  • Supports text generation and processing
  • Supported input modalities
  • Supported output modalities
  • Temperature sampling control
  • Nucleus sampling (top-p)
  • Maximum token limit
  • Stop sequences
  • Frequency penalty
  • Presence penalty
  • Response streaming
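
The sampling controls listed above correspond to standard chat-completion request parameters. Below is a minimal sketch using the `openai` Python client against an OpenAI-compatible endpoint; the base URL, API key, and exact parameter support are placeholders and assumptions, not details confirmed by this page.

```python
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint; replace with your provider's base URL and key.
client = OpenAI(base_url="https://example-provider.invalid/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="llama-4-maverick-17b-128e-instruct",
    messages=[{"role": "user", "content": "Summarize mixture-of-experts models in two sentences."}],
    temperature=0.7,        # temperature sampling control
    top_p=0.9,              # nucleus sampling (top-p)
    max_tokens=256,         # maximum token limit
    stop=["\n\n"],          # stop sequences
    frequency_penalty=0.0,  # frequency penalty
    presence_penalty=0.0,   # presence penalty
)

print(response.choices[0].message.content)
```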

🌐 Provider Availability

This model is available through the following providers; context limits, pricing, and feature support may vary by provider:

| Provider | Context | Pricing (Input/Output) | Notes |
| --- | --- | --- | --- |
| Cerebras | — | — | |
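
Response streaming (listed under Capabilities) works the same way through a provider endpoint. The sketch below assumes Cerebras exposes an OpenAI-compatible API at the base URL shown; verify the URL and authentication against the provider's own documentation.

```python
from openai import OpenAI

# Assumed OpenAI-compatible base URL for Cerebras; check provider docs before use.
client = OpenAI(base_url="https://api.cerebras.ai/v1", api_key="YOUR_API_KEY")

stream = client.chat.completions.create(
    model="llama-4-maverick-17b-128e-instruct",
    messages=[{"role": "user", "content": "Write a haiku about inference speed."}],
    stream=True,  # response streaming
)

# Print tokens as they arrive.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```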

Other Models by This Author

  • Codellama 7b Hf
  • compound-beta
  • compound-beta-mini
  • deepseek-r1-distill-llama-70b
  • Faster R Cnn
  • …and 32 more

Last Updated: 2025-10-21 23:55:57 UTC | Generated by ModelWiki