autogen_ext.models.openai#
- class OpenAIChatCompletionClient(**kwargs: Unpack)[source]#
Bases:
BaseOpenAIChatCompletionClient, Component[OpenAIClientConfigurationConfigModel]
Chat completion client for OpenAI-hosted models.
To use this client, you must install the openai extension:
pip install "autogen-ext[openai]"
You can also use this client for OpenAI-compatible chat completion endpoints. Using this client with non-OpenAI models has not been tested and is not guaranteed to work.
For non-OpenAI models, please take a look at our community extensions for additional model clients.
- Parameters:
model (str) – Which OpenAI model to use.
api_key (optional, str) – The API key to use. Required if 'OPENAI_API_KEY' is not found in the environment variables.
organization (optional, str) – The organization ID to use.
base_url (optional, str) – The base URL to use. Required if the model is not hosted on OpenAI.
timeout (optional, float) – The timeout for the request in seconds.
max_retries (optional, int) – The maximum number of retries to attempt.
model_info (optional, ModelInfo) – The capabilities of the model. Required if the model name is not a valid OpenAI model.
frequency_penalty (optional, float)
logit_bias (optional, dict[str, int])
max_tokens (optional, int)
n (optional, int)
presence_penalty (optional, float)
response_format (optional, Dict[str, Any]) –
The format of the response. Possible options are:

# Text response, this is the default.
{"type": "text"}

# JSON response, make sure to instruct the model to return JSON.
{"type": "json_object"}

# Structured output response, with a pre-defined JSON schema.
{
    "type": "json_schema",
    "json_schema": {
        "name": "name of the schema, must be an identifier.",
        "description": "description for the model.",
        # You can convert a Pydantic (v2) model to JSON schema
        # using the `model_json_schema()` method.
        "schema": "<the JSON schema itself>",
        # Whether to enable strict schema adherence when
        # generating the output. If set to true, the model will
        # always follow the exact schema defined in the
        # `schema` field. Only a subset of JSON Schema is
        # supported when `strict` is `true`.
        # To learn more, read
        # https://platform.openai.com/docs/guides/structured-outputs.
        "strict": False,  # or True
    },
}

It is recommended to use the json_output parameter of the create() or create_stream() methods instead of response_format for structured output. The json_output parameter is more flexible and allows you to specify a Pydantic model class directly.
seed (optional, int)
temperature (optional, float)
top_p (optional, float)
parallel_tool_calls (optional, bool) – Whether to allow parallel tool calls. When not set, the server's default behavior is used.
user (optional, str)
default_headers (optional, dict[str, str]) – Custom headers; useful for authentication or other custom requirements.
add_name_prefixes (optional, bool) – Whether to prepend the source value to each UserMessage content. E.g., "this is content" becomes "Reviewer said: this is content." This can be useful for models that do not support the name field in messages. Defaults to False.
include_name_in_message (optional, bool) – Whether to include the name field in the message parameters sent to the OpenAI API. Defaults to True. Set to False for model providers that don't support the name field (e.g., Groq).
stream_options (optional, dict) – Additional options for streaming. Currently only include_usage is supported.
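As an illustration of the response_format options above, the json_schema variant can be written out as plain data. This is a hedged sketch: the AgentResponse shape below is hypothetical and the schema is hand-written rather than generated from a Pydantic model via model_json_schema():

```python
# Hand-built sketch of a structured-output response_format payload.
# In practice, generate "schema" from a Pydantic (v2) model with
# model_json_schema(); it is written out manually here for illustration.
agent_response_schema = {
    "type": "object",
    "properties": {
        "thoughts": {"type": "string"},
        "response": {"type": "string", "enum": ["happy", "sad", "neutral"]},
    },
    "required": ["thoughts", "response"],
    "additionalProperties": False,
}

response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "AgentResponse",  # must be an identifier
        "description": "Sentiment analysis result for the model.",
        "schema": agent_response_schema,
        # Strict schema adherence; only a subset of JSON Schema is supported.
        "strict": True,
    },
}
```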
Examples
The following code snippet shows how to use the client with an OpenAI model:
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import UserMessage

openai_client = OpenAIChatCompletionClient(
    model="gpt-4o-2024-08-06",
    # api_key="sk-...", # Optional if you have an OPENAI_API_KEY environment variable set.
)

result = await openai_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
print(result)

# Close the client when done.
# await openai_client.close()
To use the client with a non-OpenAI model, you need to provide the base URL of the model and the model info. For example, to use Ollama, you can use the following code snippet:
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import ModelFamily

custom_model_client = OpenAIChatCompletionClient(
    model="deepseek-r1:1.5b",
    base_url="http://localhost:11434/v1",
    api_key="placeholder",
    model_info={
        "vision": False,
        "function_calling": False,
        "json_output": False,
        "family": ModelFamily.R1,
        "structured_output": True,
    },
)

# Close the client when done.
# await custom_model_client.close()
To use streaming mode, you can use the following code snippet:
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Similar for AzureOpenAIChatCompletionClient.
    model_client = OpenAIChatCompletionClient(model="gpt-4o")  # assuming OPENAI_API_KEY is set in the environment.

    messages = [UserMessage(content="Write a very short story about a dragon.", source="user")]

    # Create a stream.
    stream = model_client.create_stream(messages=messages)

    # Iterate over the stream and print the responses.
    print("Streamed responses:")
    async for response in stream:
        if isinstance(response, str):
            # A partial response is a string.
            print(response, flush=True, end="")
        else:
            # The last response is a CreateResult object with the complete message.
            print("\n\n------------\n")
            print("The complete response:", flush=True)
            print(response.content, flush=True)

    # Close the client when done.
    await model_client.close()


asyncio.run(main())
To use structured output as well as function calling, you can use the following code snippet:
import asyncio
from typing import Literal

from autogen_core.models import (
    AssistantMessage,
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    SystemMessage,
    UserMessage,
)
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel


# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Define the function to be called as a tool.
def sentiment_analysis(text: str) -> str:
    """Given a text, return the sentiment."""
    return "happy" if "happy" in text else "sad" if "sad" in text else "neutral"


# Create a FunctionTool instance with `strict=True`,
# which is required for structured output mode.
tool = FunctionTool(sentiment_analysis, description="Sentiment Analysis", strict=True)


async def main() -> None:
    # Create an OpenAIChatCompletionClient instance.
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

    # Generate a response using the tool.
    response1 = await model_client.create(
        messages=[
            SystemMessage(content="Analyze input text sentiment using the tool provided."),
            UserMessage(content="I am happy.", source="user"),
        ],
        tools=[tool],
    )
    print(response1.content)
    # Should be a list of tool calls.
    # [FunctionCall(name="sentiment_analysis", arguments={"text": "I am happy."}, ...)]

    assert isinstance(response1.content, list)
    response2 = await model_client.create(
        messages=[
            SystemMessage(content="Analyze input text sentiment using the tool provided."),
            UserMessage(content="I am happy.", source="user"),
            AssistantMessage(content=response1.content, source="assistant"),
            FunctionExecutionResultMessage(
                content=[
                    FunctionExecutionResult(
                        content="happy",
                        call_id=response1.content[0].id,
                        is_error=False,
                        name="sentiment_analysis",
                    )
                ]
            ),
        ],
        # Use the structured output format.
        json_output=AgentResponse,
    )
    print(response2.content)
    # Should be a structured output.
    # {"thoughts": "The user is happy.", "response": "happy"}

    # Close the client when done.
    await model_client.close()


asyncio.run(main())
To load the client from a configuration, you can use the load_component method:
from autogen_core.models import ChatCompletionClient

config = {
    "provider": "OpenAIChatCompletionClient",
    "config": {"model": "gpt-4o", "api_key": "REPLACE_WITH_YOUR_API_KEY"},
}

client = ChatCompletionClient.load_component(config)
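Because the configuration is plain data, it can also live outside the code, for example in a JSON file. A minimal sketch (the file name model_config.json is illustrative):

```python
import json

config = {
    "provider": "OpenAIChatCompletionClient",
    "config": {"model": "gpt-4o", "api_key": "REPLACE_WITH_YOUR_API_KEY"},
}

# Write the configuration to disk so it can be kept out of source code.
with open("model_config.json", "w") as f:
    json.dump(config, f, indent=2)

# Later, read it back and pass it to ChatCompletionClient.load_component(...).
with open("model_config.json") as f:
    loaded = json.load(f)
```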
To view the full list of available configuration options, see the OpenAIClientConfigurationConfigModel class.
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
alias of
OpenAIClientConfigurationConfigModel
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.openai.OpenAIChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from being a part of the module name.
- _to_config() OpenAIClientConfigurationConfigModel[source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- classmethod _from_config(config: OpenAIClientConfigurationConfigModel) Self[source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – The new instance of the component.
- class AzureOpenAIChatCompletionClient(**kwargs: Unpack)[source]#
Bases:
BaseOpenAIChatCompletionClient, Component[AzureOpenAIClientConfigurationConfigModel]
Chat completion client for Azure OpenAI hosted models.
To use this client, you must install the azure and openai extensions:
pip install "autogen-ext[openai,azure]"
- Parameters:
model (str) – Which OpenAI model to use.
azure_endpoint (str) – The endpoint for the Azure model. Required for Azure models.
azure_deployment (str) – Deployment name for the Azure model. Required for Azure models.
api_version (str) – The API version to use. Required for Azure models.
azure_ad_token (str) – The Azure AD token to use. Provide this or azure_ad_token_provider for token-based authentication.
azure_ad_token_provider (optional, Callable[[], Awaitable[str]] | AzureTokenProvider) – The Azure AD token provider to use. Provide this or azure_ad_token for token-based authentication.
api_key (optional, str) – The API key to use. Use this if you are using key-based authentication. It is optional if you are using token-based authentication with Azure Active Directory (AAD) or the AZURE_OPENAI_API_KEY environment variable.
timeout (optional, float) – The timeout for the request in seconds.
max_retries (optional, int) – The maximum number of retries to attempt.
model_info (optional, ModelInfo) – The capabilities of the model. Required if the model name is not a valid OpenAI model.
frequency_penalty (optional, float)
logit_bias (optional, dict[str, int])
max_tokens (optional, int)
n (optional, int)
presence_penalty (optional, float)
response_format (optional, Dict[str, Any]) –
The format of the response. Possible options are:

# Text response, this is the default.
{"type": "text"}

# JSON response, make sure to instruct the model to return JSON.
{"type": "json_object"}

# Structured output response, with a pre-defined JSON schema.
{
    "type": "json_schema",
    "json_schema": {
        "name": "name of the schema, must be an identifier.",
        "description": "description for the model.",
        # You can convert a Pydantic (v2) model to JSON schema
        # using the `model_json_schema()` method.
        "schema": "<the JSON schema itself>",
        # Whether to enable strict schema adherence when
        # generating the output. If set to true, the model will
        # always follow the exact schema defined in the
        # `schema` field. Only a subset of JSON Schema is
        # supported when `strict` is `true`.
        # To learn more, read
        # https://platform.openai.com/docs/guides/structured-outputs.
        "strict": False,  # or True
    },
}

It is recommended to use the json_output parameter of the create() or create_stream() methods instead of response_format for structured output. The json_output parameter is more flexible and allows you to specify a Pydantic model class directly.
seed (optional, int)
temperature (optional, float)
top_p (optional, float)
parallel_tool_calls (optional, bool) – Whether to allow parallel tool calls. When not set, the server's default behavior is used.
user (optional, str)
default_headers (optional, dict[str, str]) – Custom headers; useful for authentication or other custom requirements.
add_name_prefixes (optional, bool) – Whether to prepend the source value to each UserMessage content. E.g., "this is content" becomes "Reviewer said: this is content." This can be useful for models that do not support the name field in messages. Defaults to False.
include_name_in_message (optional, bool) – Whether to include the name field in the message parameters sent to the OpenAI API. Defaults to True. Set to False for model providers that don't support the name field (e.g., Groq).
stream_options (optional, dict) – Additional options for streaming. Currently only include_usage is supported.
To use the client, you need to provide your deployment name, Azure Cognitive Services endpoint, and API version. For authentication, you can provide either an API key or an Azure Active Directory (AAD) token credential.
The following code snippet shows how to use AAD authentication. The identity used must be assigned the Cognitive Services OpenAI User role.
from autogen_ext.auth.azure import AzureTokenProvider
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient
from azure.identity import DefaultAzureCredential

# Create the token provider
token_provider = AzureTokenProvider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

az_model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="{your-azure-deployment}",
    model="{model-name, such as gpt-4o}",
    api_version="2024-06-01",
    azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/",
    azure_ad_token_provider=token_provider,  # Optional if you choose key-based authentication.
    # api_key="sk-...", # For key-based authentication.
)
See other usage examples in the OpenAIChatCompletionClient class.
To load the client that uses identity-based authentication from a configuration, you can use the load_component method:
from autogen_core.models import ChatCompletionClient

config = {
    "provider": "AzureOpenAIChatCompletionClient",
    "config": {
        "model": "gpt-4o-2024-05-13",
        "azure_endpoint": "https://{your-custom-endpoint}.openai.azure.com/",
        "azure_deployment": "{your-azure-deployment}",
        "api_version": "2024-06-01",
        "azure_ad_token_provider": {
            "provider": "autogen_ext.auth.azure.AzureTokenProvider",
            "config": {
                "provider_kind": "DefaultAzureCredential",
                "scopes": ["https://cognitiveservices.azure.com/.default"],
            },
        },
    },
}

client = ChatCompletionClient.load_component(config)
To view the full list of available configuration options, see the AzureOpenAIClientConfigurationConfigModel class.
Note
Currently only DefaultAzureCredential is supported, with no additional arguments passed to it.
Note
The Azure OpenAI client by default sets the User-Agent header to autogen-python/{version}. To override this, you can set the autogen_ext.models.openai.AZURE_OPENAI_USER_AGENT environment variable to an empty string.
See here for how to use the Azure client directly, or for more information.
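For example, the override described in the note above could be applied by clearing the environment variable before starting the application (a sketch; the variable must be set in the environment that launches your program):

```shell
# Set the user-agent override to an empty string for this shell session.
export AZURE_OPENAI_USER_AGENT=""

# Verify the variable is set but empty (prints a blank line).
printf '%s\n' "$AZURE_OPENAI_USER_AGENT"
```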
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.openai.AzureOpenAIChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from being a part of the module name.
- _to_config() AzureOpenAIClientConfigurationConfigModel[source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- classmethod _from_config(config: AzureOpenAIClientConfigurationConfigModel) Self[source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – The new instance of the component.
- class BaseOpenAIChatCompletionClient(client: AsyncOpenAI | AsyncAzureOpenAI, *, create_args: Dict[str, Any], model_capabilities: ModelCapabilities | None = None, model_info: ModelInfo | None = None, add_name_prefixes: bool = False, include_name_in_message: bool = True)[source]#
Bases:
ChatCompletionClient
- async create(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], tool_choice: Tool | Literal['auto', 'required', 'none'] = 'auto', json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) CreateResult[source]#
Creates a single response from the model.
- Parameter:
messages (Sequence[LLMMessage]) – The messages to send to the model.
tools (Sequence[Tool | ToolSchema], optional) – The tools to use with the model. Defaults to [].
tool_choice (Tool | Literal["auto", "required", "none"], optional) – A single Tool object to force the model to use, “auto” to let the model choose any available tool, “required” to force tool usage, or “none” to disable tool usage. Defaults to “auto”.
json_output (Optional[bool | type[BaseModel]], optional) – Whether to use JSON mode, structured output, or neither. Defaults to None. If set to a Pydantic BaseModel type, it will be used as the output type for structured output. If set to a boolean, it will be used to determine whether to use JSON mode or not. If set to True, make sure to instruct the model to produce JSON output in the instruction or prompt.
extra_create_args (Mapping[str, Any], optional) – Extra arguments to pass to the underlying client. Defaults to {}.
cancellation_token (Optional[CancellationToken], optional) – A token for cancellation. Defaults to None.
- Gibt zurück:
CreateResult – The result of the model call.
- async create_stream(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], tool_choice: Tool | Literal['auto', 'required', 'none'] = 'auto', json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None, max_consecutive_empty_chunk_tolerance: int = 0, include_usage: bool | None = None) AsyncGenerator[str | CreateResult, None][source]#
Creates a stream of string chunks from the model, ending with a CreateResult.
Extends autogen_core.models.ChatCompletionClient.create_stream() to support the OpenAI API.
When streaming, the default behavior is to not return token usage counts. See the OpenAI API reference for possible arguments.
You can set the include_usage flag to True, or pass extra_create_args={"stream_options": {"include_usage": True}} (if supported by the accessed API), to return a final chunk with RequestUsage containing the prompt and completion token counts; all preceding chunks will have None for usage. If both the flag and stream_options are set but to different values, an exception will be raised. See the OpenAI API reference for stream options.
- Other examples of supported arguments that can be included in extra_create_args:
temperature (float): Controls the randomness of the output. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic.
max_tokens (int): The maximum number of tokens to generate in the completion.
top_p (float): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
frequency_penalty (float): A value between -2.0 and 2.0 that penalizes new tokens based on their existing frequency in the text so far, decreasing the likelihood of repeated phrases.
presence_penalty (float): A value between -2.0 and 2.0 that penalizes new tokens based on whether they appear in the text so far, encouraging the model to talk about new topics.
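These arguments are passed as a plain mapping; a sketch of the shape (the values are illustrative and no request is made here):

```python
# Extra keyword arguments forwarded to the underlying OpenAI client.
# Pass this mapping to create(...) or create_stream(...) as
# extra_create_args=extra_create_args.
extra_create_args = {
    "temperature": 0.2,        # more focused, deterministic output
    "max_tokens": 256,         # cap on generated completion tokens
    "top_p": 0.9,              # nucleus-sampling probability mass
    "frequency_penalty": 0.5,  # discourage verbatim repetition
    "presence_penalty": 0.3,   # encourage new topics
    "stream_options": {"include_usage": True},  # final chunk carries usage counts
}
```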
- actual_usage() RequestUsage[source]#
- total_usage() RequestUsage[source]#
- count_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int[source]#
- remaining_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int[source]#
- property capabilities: ModelCapabilities#
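The relationship between count_tokens() and remaining_tokens() above can be sketched with a stand-in counter. This is a hypothetical illustration: the real client counts tokens with the model's tokenizer and knows the model's actual context window, whereas the heuristic and limit below are assumptions.

```python
# Stand-in sketch of the count_tokens / remaining_tokens relationship.
# The ~4-characters-per-token heuristic and the 128_000 context window
# are assumptions for illustration, not the client's real behavior.
MODEL_TOKEN_LIMIT = 128_000  # assumed context window, e.g. for a gpt-4o-class model


def count_tokens_stub(messages: list[str]) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return sum(len(m) // 4 for m in messages)


def remaining_tokens_stub(messages: list[str]) -> int:
    # Tokens left in the context window after accounting for the prompt messages.
    return MODEL_TOKEN_LIMIT - count_tokens_stub(messages)
```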
- pydantic model AzureOpenAIClientConfigurationConfigModel[source]#
Bases:
BaseOpenAIClientConfigurationConfigModel
Show JSON schema
{ "title": "AzureOpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "parallel_tool_calls": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Parallel Tool Calls" }, "reasoning_effort": { "anyOf": [ { "enum": [ "minimal", "low", "medium", "high" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Reasoning Effort" }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api 
Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "include_name_in_message": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Include Name In Message" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "azure_endpoint": { "title": "Azure Endpoint", "type": "string" }, "azure_deployment": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Azure Deployment" }, "api_version": { "title": "Api Version", "type": "string" }, "azure_ad_token": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Azure Ad Token" }, "azure_ad_token_provider": { "anyOf": [ { "$ref": "#/$defs/ComponentModel" }, { "type": "null" } ], "default": null } }, "$defs": { "ComponentModel": { "description": "Model class for a component. 
Contains all information required to instantiate a component.", "properties": { "provider": { "title": "Provider", "type": "string" }, "component_type": { "anyOf": [ { "enum": [ "model", "agent", "tool", "termination", "token_provider", "workbench" ], "type": "string" }, { "type": "string" }, { "type": "null" } ], "default": null, "title": "Component Type" }, "version": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Version" }, "component_version": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Component Version" }, "description": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Description" }, "label": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Label" }, "config": { "title": "Config", "type": "object" } }, "required": [ "provider", "config" ], "title": "ComponentModel", "type": "object" }, "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, 
"function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-5", "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "claude-4-opus", "claude-4-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model", "azure_endpoint", "api_version" ] }
- Fields:
- field azure_ad_token_provider: ComponentModel | None = None#
- pydantic model OpenAIClientConfigurationConfigModel[source]#
Bases:
BaseOpenAIClientConfigurationConfigModel
Show JSON schema
{ "title": "OpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "parallel_tool_calls": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Parallel Tool Calls" }, "reasoning_effort": { "anyOf": [ { "enum": [ "minimal", "low", "medium", "high" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Reasoning Effort" }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api Key" 
}, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "include_name_in_message": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Include Name In Message" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "organization": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Organization" }, "base_url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Base Url" } }, "$defs": { "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a 
model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-5", "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "claude-4-opus", "claude-4-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model" ] }
- pydantic model BaseOpenAIClientConfigurationConfigModel[source]#
Bases:
CreateArgumentsConfigModel
Show JSON schema
{ "title": "BaseOpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "parallel_tool_calls": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Parallel Tool Calls" }, "reasoning_effort": { "anyOf": [ { "enum": [ "minimal", "low", "medium", "high" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Reasoning Effort" }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api 
Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "include_name_in_message": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Include Name In Message" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" } }, "$defs": { "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, 
"json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-5", "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "claude-4-opus", "claude-4-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model" ] }
- Fields:
- field model_capabilities: ModelCapabilities | None = None#
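Since the schema marks `ModelCapabilities` as deprecated, `model_info` is the forward-looking way to describe a model's properties. A minimal `ModelInfo`-shaped dictionary, using only the keys the schema's `$defs` marks as required (the values below are illustrative assumptions), could be sketched as:

```python
# Sketch of a ModelInfo-shaped dict; key names and required-ness are taken
# from the ModelInfo definition in the JSON schema above.
model_info = {
    "vision": False,
    "function_calling": True,
    "json_output": True,
    "family": "unknown",       # one of the enum values, or any other string
    "structured_output": False,
}

# Per the schema, these five keys are required.
required = [
    "vision",
    "function_calling",
    "json_output",
    "family",
    "structured_output",
]
missing = [key for key in required if key not in model_info]
assert not missing
```

Passing such a dict as `model_info` is how the client is told about capabilities when the model name is not a known OpenAI model.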
- pydantic model CreateArgumentsConfigModel[source]#
Bases:
BaseModel
Show JSON schema
{ "title": "CreateArgumentsConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "parallel_tool_calls": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Parallel Tool Calls" }, "reasoning_effort": { "anyOf": [ { "enum": [ "minimal", "low", "medium", "high" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Reasoning Effort" } }, "$defs": { "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, 
"strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } } }
- Fields:
- field response_format: ResponseFormat | None = None#
- field stream_options: StreamOptions | None = None#
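Tying the definitions together: a `response_format` of type `json_schema` embeds a `JSONSchema` object whose only required key is `name` (per the `$defs` above). A hedged sketch of such a value, where the schema name and payload are illustrative assumptions:

```python
# Hypothetical structured-output response_format, shaped per the
# ResponseFormat and JSONSchema definitions in the schema above.
response_format = {
    "type": "json_schema",  # allowed: "text" | "json_object" | "json_schema"
    "json_schema": {
        "name": "weather_report",  # required; must be an identifier
        "description": "A short weather summary for the model.",
        # A Pydantic v2 model's model_json_schema() output can be used here.
        "schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "strict": True,  # enforce exact schema adherence
    },
}

# ResponseFormat requires both "type" and "json_schema".
assert {"type", "json_schema"} <= response_format.keys()
# JSONSchema requires only "name".
assert "name" in response_format["json_schema"]
```

As noted at the top of this page, the `json_output` parameter on the create methods is the recommended way to request structured output; this dict form mirrors what the config model validates.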