autogen_ext.models.ollama#
- class OllamaChatCompletionClient(**kwargs: Unpack)[source]#
Bases:
BaseOllamaChatCompletionClient, Component[BaseOllamaClientConfigurationConfigModel]
Chat completion client for Ollama hosted models.
Ollama must be installed and the appropriate model pulled.
- Parameters:
model (str) – Which Ollama model to use.
host (optional, str) – Model host URL.
response_format (optional, pydantic.BaseModel) – The format of the response. If provided, the response will be parsed as JSON into this format.
options (optional, Mapping[str, Any] | Options) – Additional options to pass to the Ollama client.
model_info (optional, ModelInfo) – The capabilities of the model. Required if the model is not listed in the ollama model info.
Note
Only models with 200k+ downloads (as of Jan 21, 2025), plus phi4 and deepseek-r1, have pre-defined model infos. See this file for the full list. An entry for one model encompasses all parameter variants of that model.
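For a model without a pre-defined entry, the capabilities can be declared explicitly via model_info. The sketch below builds such a dict by hand; the required keys match the ModelInfo schema shown further down, while the model name and the flag values are illustrative assumptions for a hypothetical custom model:

```python
# Illustrative sketch: declaring capabilities for a model that has no
# pre-defined model info entry. ModelInfo is a typed dict of capability
# flags, so a plain dict with the required keys works; the values below
# are assumptions for a hypothetical custom model.
model_info = {
    "vision": False,            # model cannot process images
    "function_calling": True,   # model supports tool/function calls
    "json_output": True,        # model can emit JSON
    "family": "unknown",        # not one of the known model families
    "structured_output": True,  # model supports structured output
}

# It would then be passed to the client (requires the ollama extra), e.g.:
# OllamaChatCompletionClient(model="my-custom-model", model_info=model_info)
```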
To use this client, you must install the ollama extension:
pip install "autogen-ext[ollama]"
The following code snippet shows how to use the client with an Ollama model:

from autogen_ext.models.ollama import OllamaChatCompletionClient
from autogen_core.models import UserMessage

ollama_client = OllamaChatCompletionClient(
    model="llama3",
)

result = await ollama_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
print(result)
To load the client from a configuration, you can use the load_component method:

from autogen_core.models import ChatCompletionClient

config = {
    "provider": "OllamaChatCompletionClient",
    "config": {"model": "llama3"},
}

client = ChatCompletionClient.load_component(config)
To output structured data, you can use the response_format argument:

from autogen_ext.models.ollama import OllamaChatCompletionClient
from autogen_core.models import UserMessage
from pydantic import BaseModel


class StructuredOutput(BaseModel):
    first_name: str
    last_name: str


ollama_client = OllamaChatCompletionClient(
    model="llama3",
    response_format=StructuredOutput,
)
result = await ollama_client.create([UserMessage(content="Who was the first man on the moon?", source="user")])  # type: ignore
print(result)
Note
Tool usage in Ollama is stricter than in its OpenAI counterparts. While OpenAI accepts a map of [str, Any], Ollama requires a map of [str, Property] where Property is a typed object containing type and description fields. Therefore, only the keys type and description will be converted from the properties blob in the tool schema.
To view the full list of available configuration options, see the OllamaClientConfigurationConfigModel class.
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.ollama.OllamaChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from becoming part of the module name.
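With this provider override, a serialized component configuration references the full provider string rather than the internal module path. An illustrative fragment (the model value is an example):

```json
{
  "provider": "autogen_ext.models.ollama.OllamaChatCompletionClient",
  "component_type": "model",
  "config": { "model": "llama3" }
}
```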
- _to_config() → BaseOllamaClientConfigurationConfigModel[source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- classmethod _from_config(config: BaseOllamaClientConfigurationConfigModel) → Self[source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – The new instance of the component.
- pydantic model BaseOllamaClientConfigurationConfigModel[source]#
Bases:
CreateArgumentsConfigModel
Show JSON schema
{
  "title": "BaseOllamaClientConfigurationConfigModel",
  "type": "object",
  "properties": {
    "model": { "title": "Model", "type": "string" },
    "host": { "anyOf": [{ "type": "string" }, { "type": "null" }], "default": null, "title": "Host" },
    "response_format": { "default": null, "title": "Response Format" },
    "follow_redirects": { "default": true, "title": "Follow Redirects", "type": "boolean" },
    "timeout": { "default": null, "title": "Timeout" },
    "headers": { "anyOf": [{ "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" }], "default": null, "title": "Headers" },
    "model_capabilities": { "anyOf": [{ "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" }], "default": null },
    "model_info": { "anyOf": [{ "$ref": "#/$defs/ModelInfo" }, { "type": "null" }], "default": null },
    "options": { "anyOf": [{ "type": "object" }, { "$ref": "#/$defs/Options" }, { "type": "null" }], "default": null, "title": "Options" }
  },
  "$defs": {
    "ModelCapabilities": {
      "deprecated": true,
      "properties": {
        "vision": { "title": "Vision", "type": "boolean" },
        "function_calling": { "title": "Function Calling", "type": "boolean" },
        "json_output": { "title": "Json Output", "type": "boolean" }
      },
      "required": ["vision", "function_calling", "json_output"],
      "title": "ModelCapabilities",
      "type": "object"
    },
    "ModelInfo": {
      "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.",
      "properties": {
        "vision": { "title": "Vision", "type": "boolean" },
        "function_calling": { "title": "Function Calling", "type": "boolean" },
        "json_output": { "title": "Json Output", "type": "boolean" },
        "family": {
          "anyOf": [
            { "enum": ["gpt-5", "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "claude-4-opus", "claude-4-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown"], "type": "string" },
            { "type": "string" }
          ],
          "title": "Family"
        },
        "structured_output": { "title": "Structured Output", "type": "boolean" },
        "multiple_system_messages": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "title": "Multiple System Messages" }
      },
      "required": ["vision", "function_calling", "json_output", "family", "structured_output"],
      "title": "ModelInfo",
      "type": "object"
    },
    "Options": {
      "properties": {
        "numa": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "default": null, "title": "Numa" },
        "num_ctx": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Num Ctx" },
        "num_batch": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Num Batch" },
        "num_gpu": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Num Gpu" },
        "main_gpu": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Main Gpu" },
        "low_vram": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "default": null, "title": "Low Vram" },
        "f16_kv": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "default": null, "title": "F16 Kv" },
        "logits_all": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "default": null, "title": "Logits All" },
        "vocab_only": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "default": null, "title": "Vocab Only" },
        "use_mmap": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "default": null, "title": "Use Mmap" },
        "use_mlock": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "default": null, "title": "Use Mlock" },
        "embedding_only": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "default": null, "title": "Embedding Only" },
        "num_thread": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Num Thread" },
        "num_keep": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Num Keep" },
        "seed": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Seed" },
        "num_predict": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Num Predict" },
        "top_k": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Top K" },
        "top_p": { "anyOf": [{ "type": "number" }, { "type": "null" }], "default": null, "title": "Top P" },
        "tfs_z": { "anyOf": [{ "type": "number" }, { "type": "null" }], "default": null, "title": "Tfs Z" },
        "typical_p": { "anyOf": [{ "type": "number" }, { "type": "null" }], "default": null, "title": "Typical P" },
        "repeat_last_n": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Repeat Last N" },
        "temperature": { "anyOf": [{ "type": "number" }, { "type": "null" }], "default": null, "title": "Temperature" },
        "repeat_penalty": { "anyOf": [{ "type": "number" }, { "type": "null" }], "default": null, "title": "Repeat Penalty" },
        "presence_penalty": { "anyOf": [{ "type": "number" }, { "type": "null" }], "default": null, "title": "Presence Penalty" },
        "frequency_penalty": { "anyOf": [{ "type": "number" }, { "type": "null" }], "default": null, "title": "Frequency Penalty" },
        "mirostat": { "anyOf": [{ "type": "integer" }, { "type": "null" }], "default": null, "title": "Mirostat" },
        "mirostat_tau": { "anyOf": [{ "type": "number" }, { "type": "null" }], "default": null, "title": "Mirostat Tau" },
        "mirostat_eta": { "anyOf": [{ "type": "number" }, { "type": "null" }], "default": null, "title": "Mirostat Eta" },
        "penalize_newline": { "anyOf": [{ "type": "boolean" }, { "type": "null" }], "default": null, "title": "Penalize Newline" },
        "stop": { "anyOf": [{ "items": { "type": "string" }, "type": "array" }, { "type": "null" }], "default": null, "title": "Stop" }
      },
      "title": "Options",
      "type": "object"
    }
  },
  "required": ["model"]
}
- Fields:
- field model_capabilities: ModelCapabilities | None = None#
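The options mapping accepts the keys defined by the Options model in the schema above. A sketch of such a mapping; the keys come from the schema, but the specific values are arbitrary example settings, not recommendations:

```python
# Illustrative options mapping for the Ollama client. Keys are drawn from
# the Options model; the values here are example settings only.
options = {
    "num_ctx": 4096,        # context window size in tokens
    "temperature": 0.2,     # lower values = more deterministic sampling
    "top_p": 0.9,           # nucleus sampling threshold
    "seed": 42,             # fixed seed for reproducible outputs
    "stop": ["\n\n"],       # stop sequences that end generation
}

# Passed through to the client, e.g.:
# OllamaChatCompletionClient(model="llama3", options=options)
```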
- pydantic model CreateArgumentsConfigModel[source]#
Bases:
BaseModel
Show JSON schema
{
  "title": "CreateArgumentsConfigModel",
  "type": "object",
  "properties": {
    "model": { "title": "Model", "type": "string" },
    "host": { "anyOf": [{ "type": "string" }, { "type": "null" }], "default": null, "title": "Host" },
    "response_format": { "default": null, "title": "Response Format" }
  },
  "required": ["model"]
}
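Per the required list in this schema, only model is mandatory; host and response_format default to null. An illustrative minimal instance (the host shown is Ollama's standard local address, included here only as an example):

```json
{ "model": "llama3", "host": "http://localhost:11434" }
```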