Browse LiteLLM Models

The Most Comprehensive AI Model Catalog

Compare pricing, context windows, and features for 2,600+ models across 140+ providers. Powered by LiteLLM's open-source model database.

Trusted by leading teams
 <tr>
-  <th>Model</th>
-  <th>Context</th>
-  <th>Input Tokens</th>
-  <th>Output Tokens</th>
-  <th>Cache Read Tokens</th>
-  <th>Cache Write Tokens</th>
+  <th>Model</th>
+  <th on:click={() => handleSort("context")}>Context</th>
+  <th on:click={() => handleSort("input")}>Input $/M</th>
+  <th on:click={() => handleSort("output")}>Output $/M</th>
+  <th></th>
 </tr>
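The new header makes the numeric columns clickable sorters, but the handler's body is not part of this excerpt. A minimal sketch of what the logic could look like, assuming the state names `sortKey`/`sortDir` and the key-to-field mapping below (only the calls `handleSort("context" | "input" | "output")` appear in the markup; everything else here is illustrative):

```javascript
// Sketch of the column-sort state; sortKey/sortDir are assumed names.
let sortKey = null;
let sortDir = "desc";

function handleSort(key) {
  if (sortKey === key) {
    // Clicking the active column flips the direction.
    sortDir = sortDir === "desc" ? "asc" : "desc";
  } else {
    // Switching columns starts from descending (biggest/priciest first).
    sortKey = key;
    sortDir = "desc";
  }
}

// Map the header's sort keys onto the model fields shown in the table.
const SORT_FIELDS = {
  context: "max_input_tokens",
  input: "input_cost_per_token",
  output: "output_cost_per_token",
};

function sortModels(models) {
  if (!sortKey) return models;
  const field = SORT_FIELDS[sortKey];
  return [...models].sort((a, b) => {
    const av = a[field] ?? 0;
    const bv = b[field] ?? 0;
    return sortDir === "desc" ? bv - av : av - bv;
  });
}
```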
 <td>
-  {getDisplayModelName(name, litellm_provider)}
@@ -360,31 +484,164 @@
   {/if}
+  {getDisplayModelName(name, litellm_provider)}
+  {#if mode}
+    <span>{getModeLabel(mode)}</span>
+  {/if}
 </td>
-<td>{max_input_tokens && max_input_tokens > 0 && (max_input_tokens / 1000).toFixed(0) !== '0' ? (max_input_tokens >= 1000000 ? (max_input_tokens / 1000000).toFixed(0) + 'M' : (max_input_tokens / 1000).toFixed(0) + 'K') : '—'}</td>
-<td>{input_cost_per_token ? '$' + (input_cost_per_token * 1000000).toFixed(2) + '/M' : '—'}</td>
-<td>{output_cost_per_token ? '$' + (output_cost_per_token * 1000000).toFixed(2) + '/M' : '—'}</td>
-<td>{cache_read_input_token_cost ? '$' + (cache_read_input_token_cost * 1000000).toFixed(2) + '/M' : '—'}</td>
-<td>{cache_creation_input_token_cost ? '$' + (cache_creation_input_token_cost * 1000000).toFixed(2) + '/M' : '—'}</td>
+<td>{formatContext(max_input_tokens)}</td>
+<td>{formatCost(input_cost_per_token)}</td>
+<td>{formatCost(output_cost_per_token)}</td>
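The refactor folds the repeated inline ternaries into `formatContext`/`formatCost` helpers. The helper bodies are not part of this diff, but the removed inline expressions pin down the intended behavior; a sketch that reproduces it:

```javascript
// Reconstruction of the formatting helpers from the inline expressions they
// replace (the actual function bodies are not shown in the diff).

// 128000 -> "128K", 1000000 -> "1M"; missing or sub-500-token values render as a dash.
function formatContext(maxInputTokens) {
  if (!maxInputTokens || maxInputTokens <= 0 || (maxInputTokens / 1000).toFixed(0) === "0") {
    return "—";
  }
  return maxInputTokens >= 1000000
    ? (maxInputTokens / 1000000).toFixed(0) + "M"
    : (maxInputTokens / 1000).toFixed(0) + "K";
}

// Per-token costs are stored in dollars; display them per million tokens.
function formatCost(costPerToken) {
  return costPerToken ? "$" + (costPerToken * 1000000).toFixed(2) + "/M" : "—";
}
```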
+<div>
+  <h4>Pricing per 1M tokens</h4>
+  <div>
+    <span>Input</span>
+    <span>{formatCost(input_cost_per_token)}</span>
+  </div>
+  <div>
+    <span>Output</span>
+    <span>{formatCost(output_cost_per_token)}</span>
+  </div>
+  <div>
+    <span>Cache Read</span>
+    <span>{formatCost(cache_read_input_token_cost)}</span>
+  </div>
+  <div>
+    <span>Cache Write</span>
+    <span>{formatCost(cache_creation_input_token_cost)}</span>
+  </div>
+</div>
+<div>
+  <h4>Model Info</h4>
+  <div>
+    <span>Provider</span>
+    <span>{litellm_provider || "—"}</span>
+  </div>
+  <div>
+    <span>Mode</span>
+    <span>{mode ? getModeLabel(mode) : "—"}</span>
+  </div>
+  <div>
+    <span>Max Input</span>
+    <span>{max_input_tokens ? max_input_tokens.toLocaleString() + " tokens" : "—"}</span>
+  </div>
+  <div>
+    <span>Max Output</span>
+    <span>{max_output_tokens ? max_output_tokens.toLocaleString() + " tokens" : "—"}</span>
+  </div>
+</div>
+<div>
+  <h4>Features</h4>
+  {#each [
+    { key: supports_function_calling, label: "Function Calling" },
+    { key: supports_vision, label: "Vision" },
+    { key: supports_response_schema, label: "JSON Mode" },
+    { key: supports_tool_choice, label: "Tool Choice" },
+    { key: supports_parallel_function_calling, label: "Parallel Calls" },
+    { key: supports_audio_input, label: "Audio Input" },
+    { key: supports_prompt_caching, label: "Prompt Caching" },
+  ] as feature}
+    <div>
+      <span>{feature.label}</span>
+      {#if feature.key}
+        <span>✓</span>
+      {/if}
+    </div>
+  {/each}
+</div>
+<div>
+  {#if !codeTabStates[name] || codeTabStates[name] === "sdk"}
+    <!-- SDK code example (content elided in this excerpt) -->
+  {:else}
+    <!-- alternate code example (content elided in this excerpt) -->
+  {/if}
+</div>
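The per-row code-example tabs default to the SDK view whenever no choice has been recorded for a model. A minimal sketch of the state shape implied by the template's `!codeTabStates[name] || codeTabStates[name] === "sdk"` check; the non-SDK tab's identifier is not visible in this excerpt, so `"proxy"` below is a placeholder, and both function names are assumed:

```javascript
// Per-model tab state, keyed by model name; empty means "SDK tab" everywhere.
const codeTabStates = {};

function activeCodeTab(name) {
  // Rows with no recorded choice fall back to the SDK tab, matching the
  // template's `!codeTabStates[name] || codeTabStates[name] === "sdk"` check.
  return codeTabStates[name] || "sdk";
}

function setCodeTab(name, tab) {
  // `tab` would be "sdk" or the other tab's id ("proxy" is a guess here).
  codeTabStates[name] = tab;
}
```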