autogen_core.models#
- class ChatCompletionClient[source]#
Bases:
ComponentBase[BaseModel], ABC
- abstract async create(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], tool_choice: Tool | Literal['auto', 'required', 'none'] = 'auto', json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) → CreateResult[source]#
Creates a single response from the model.
- Parameters:
messages (Sequence[LLMMessage]) – The messages to send to the model.
tools (Sequence[Tool | ToolSchema], optional) – The tools to use with the model. Defaults to [].
tool_choice (Tool | Literal["auto", "required", "none"], optional) – A single Tool object to force the model to use, “auto” to let the model choose any available tool, “required” to force tool usage, or “none” to disable tool usage. Defaults to “auto”.
json_output (Optional[bool | type[BaseModel]], optional) – Whether to use JSON mode, structured output, or neither. Defaults to None. If set to a Pydantic BaseModel type, it will be used as the output type for structured output. If set to a boolean, it will be used to determine whether to use JSON mode or not. If set to True, make sure to instruct the model to produce JSON output in the instruction or prompt.
extra_create_args (Mapping[str, Any], optional) – Extra arguments to pass to the underlying client. Defaults to {}.
cancellation_token (Optional[CancellationToken], optional) – A token for cancellation. Defaults to None.
- Returns:
CreateResult – The result of the model call.
- abstract create_stream(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], tool_choice: Tool | Literal['auto', 'required', 'none'] = 'auto', json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) → AsyncGenerator[str | CreateResult, None][source]#
Creates a stream of string chunks from the model ending with a CreateResult.
- Parameters:
messages (Sequence[LLMMessage]) – The messages to send to the model.
tools (Sequence[Tool | ToolSchema], optional) – The tools to use with the model. Defaults to [].
tool_choice (Tool | Literal["auto", "required", "none"], optional) – A single Tool object to force the model to use, “auto” to let the model choose any available tool, “required” to force tool usage, or “none” to disable tool usage. Defaults to “auto”.
json_output (Optional[bool | type[BaseModel]], optional) – Whether to use JSON mode, structured output, or neither. Defaults to None. If set to a Pydantic BaseModel type, it will be used as the output type for structured output. If set to a boolean, it will be used to determine whether to use JSON mode or not. If set to True, make sure to instruct the model to produce JSON output in the instruction or prompt.
extra_create_args (Mapping[str, Any], optional) – Extra arguments to pass to the underlying client. Defaults to {}.
cancellation_token (Optional[CancellationToken], optional) – A token for cancellation. Defaults to None.
- Returns:
AsyncGenerator[Union[str, CreateResult], None] – A generator that yields string chunks and ends with a CreateResult.
- abstract actual_usage() → RequestUsage[source]#
- abstract total_usage() → RequestUsage[source]#
- abstract count_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) → int[source]#
- abstract remaining_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) → int[source]#
- abstract property capabilities: ModelCapabilities#
- pydantic model SystemMessage[source]#
Bases:
BaseModel
System message contains instructions for the model coming from the developer.
Note
OpenAI is moving away from using the 'system' role in favor of the 'developer' role. See the Model Spec for more details. However, the 'system' role is still allowed in their API and will be automatically converted to the 'developer' role on the server side, so you can use SystemMessage for developer messages.
Show JSON schema
{ "title": "SystemMessage", "description": "System message contains instructions for the model coming from the developer.\n\n.. note::\n\n Open AI is moving away from using 'system' role in favor of 'developer' role.\n See `Model Spec <https://cdn.openai.com/spec/model-spec-2024-05-08.html#definitions>`_ for more details.\n However, the 'system' role is still allowed in their API and will be automatically converted to 'developer' role\n on the server side.\n So, you can use `SystemMessage` for developer messages.", "type": "object", "properties": { "content": { "title": "Content", "type": "string" }, "type": { "const": "SystemMessage", "default": "SystemMessage", "title": "Type", "type": "string" } }, "required": [ "content" ] }
- Fields:
content (str)
type (Literal['SystemMessage'])
- pydantic model UserMessage[source]#
Bases:
BaseModel
User message contains input from end users, or a catch-all for data provided to the model.
Show JSON schema
{ "title": "UserMessage", "description": "User message contains input from end users, or a catch-all for data provided to the model.", "type": "object", "properties": { "content": { "anyOf": [ { "type": "string" }, { "items": { "anyOf": [ { "type": "string" }, {} ] }, "type": "array" } ], "title": "Content" }, "source": { "title": "Source", "type": "string" }, "type": { "const": "UserMessage", "default": "UserMessage", "title": "Type", "type": "string" } }, "required": [ "content", "source" ] }
- Fields:
content (str | List[str | autogen_core._image.Image])
source (str)
type (Literal['UserMessage'])
- pydantic model AssistantMessage[source]#
Bases:
BaseModel
Assistant messages are sampled from the language model.
Show JSON schema
{ "title": "AssistantMessage", "description": "Assistant message are sampled from the language model.", "type": "object", "properties": { "content": { "anyOf": [ { "type": "string" }, { "items": { "$ref": "#/$defs/FunctionCall" }, "type": "array" } ], "title": "Content" }, "thought": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Thought" }, "source": { "title": "Source", "type": "string" }, "type": { "const": "AssistantMessage", "default": "AssistantMessage", "title": "Type", "type": "string" } }, "$defs": { "FunctionCall": { "properties": { "id": { "title": "Id", "type": "string" }, "arguments": { "title": "Arguments", "type": "string" }, "name": { "title": "Name", "type": "string" } }, "required": [ "id", "arguments", "name" ], "title": "FunctionCall", "type": "object" } }, "required": [ "content", "source" ] }
- Fields:
content (str | List[autogen_core._types.FunctionCall])
source (str)
thought (str | None)
type (Literal['AssistantMessage'])
- field content: str | List[FunctionCall] [Required]#
The content of the message.
- pydantic model FunctionExecutionResult[source]#
Bases:
BaseModel
Function execution result contains the output of a function call.
Show JSON schema
{ "title": "FunctionExecutionResult", "description": "Function execution result contains the output of a function call.", "type": "object", "properties": { "content": { "title": "Content", "type": "string" }, "name": { "title": "Name", "type": "string" }, "call_id": { "title": "Call Id", "type": "string" }, "is_error": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Is Error" } }, "required": [ "content", "name", "call_id" ] }
- Fields:
call_id (str)
content (str)
is_error (bool | None)
name (str)
- pydantic model FunctionExecutionResultMessage[source]#
Bases:
BaseModel
Function execution result message contains the output of multiple function calls.
Show JSON schema
{ "title": "FunctionExecutionResultMessage", "description": "Function execution result message contains the output of multiple function calls.", "type": "object", "properties": { "content": { "items": { "$ref": "#/$defs/FunctionExecutionResult" }, "title": "Content", "type": "array" }, "type": { "const": "FunctionExecutionResultMessage", "default": "FunctionExecutionResultMessage", "title": "Type", "type": "string" } }, "$defs": { "FunctionExecutionResult": { "description": "Function execution result contains the output of a function call.", "properties": { "content": { "title": "Content", "type": "string" }, "name": { "title": "Name", "type": "string" }, "call_id": { "title": "Call Id", "type": "string" }, "is_error": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Is Error" } }, "required": [ "content", "name", "call_id" ], "title": "FunctionExecutionResult", "type": "object" } }, "required": [ "content" ] }
- Fields:
content (List[autogen_core.models._types.FunctionExecutionResult])
type (Literal['FunctionExecutionResultMessage'])
- field content: List[FunctionExecutionResult] [Required]#
- pydantic model CreateResult[source]#
Bases:
BaseModel
Create result contains the output of a model completion.
Show JSON schema
{ "title": "CreateResult", "description": "Create result contains the output of a model completion.", "type": "object", "properties": { "finish_reason": { "enum": [ "stop", "length", "function_calls", "content_filter", "unknown" ], "title": "Finish Reason", "type": "string" }, "content": { "anyOf": [ { "type": "string" }, { "items": { "$ref": "#/$defs/FunctionCall" }, "type": "array" } ], "title": "Content" }, "usage": { "$ref": "#/$defs/RequestUsage" }, "cached": { "title": "Cached", "type": "boolean" }, "logprobs": { "anyOf": [ { "items": { "$ref": "#/$defs/ChatCompletionTokenLogprob" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Logprobs" }, "thought": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Thought" } }, "$defs": { "ChatCompletionTokenLogprob": { "properties": { "token": { "title": "Token", "type": "string" }, "logprob": { "title": "Logprob", "type": "number" }, "top_logprobs": { "anyOf": [ { "items": { "$ref": "#/$defs/TopLogprob" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Top Logprobs" }, "bytes": { "anyOf": [ { "items": { "type": "integer" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Bytes" } }, "required": [ "token", "logprob" ], "title": "ChatCompletionTokenLogprob", "type": "object" }, "FunctionCall": { "properties": { "id": { "title": "Id", "type": "string" }, "arguments": { "title": "Arguments", "type": "string" }, "name": { "title": "Name", "type": "string" } }, "required": [ "id", "arguments", "name" ], "title": "FunctionCall", "type": "object" }, "RequestUsage": { "properties": { "prompt_tokens": { "title": "Prompt Tokens", "type": "integer" }, "completion_tokens": { "title": "Completion Tokens", "type": "integer" } }, "required": [ "prompt_tokens", "completion_tokens" ], "title": "RequestUsage", "type": "object" }, "TopLogprob": { "properties": { "logprob": { "title": "Logprob", "type": "number" }, "bytes": { "anyOf": [ { 
"items": { "type": "integer" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Bytes" } }, "required": [ "logprob" ], "title": "TopLogprob", "type": "object" } }, "required": [ "finish_reason", "content", "usage", "cached" ] }
- Fields:
cached (bool)
content (str | List[autogen_core._types.FunctionCall])
finish_reason (Literal['stop', 'length', 'function_calls', 'content_filter', 'unknown'])
logprobs (List[autogen_core.models._types.ChatCompletionTokenLogprob] | None)
thought (str | None)
usage (autogen_core.models._types.RequestUsage)
- field finish_reason: Literal['stop', 'length', 'function_calls', 'content_filter', 'unknown'] [Required]#
The reason the model finished the completion.
- field content: str | List[FunctionCall] [Required]#
The output of the model completion.
- field usage: RequestUsage [Required]#
The prompt and completion token usage.
- field cached: bool [Required]#
Whether the completion was generated from a cached response.
- field logprobs: List[ChatCompletionTokenLogprob] | None = None#
The log probabilities of the tokens in the completion.
- pydantic model ChatCompletionTokenLogprob[source]#
Bases:
BaseModel
Show JSON schema
{ "title": "ChatCompletionTokenLogprob", "type": "object", "properties": { "token": { "title": "Token", "type": "string" }, "logprob": { "title": "Logprob", "type": "number" }, "top_logprobs": { "anyOf": [ { "items": { "$ref": "#/$defs/TopLogprob" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Top Logprobs" }, "bytes": { "anyOf": [ { "items": { "type": "integer" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Bytes" } }, "$defs": { "TopLogprob": { "properties": { "logprob": { "title": "Logprob", "type": "number" }, "bytes": { "anyOf": [ { "items": { "type": "integer" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Bytes" } }, "required": [ "logprob" ], "title": "TopLogprob", "type": "object" } }, "required": [ "token", "logprob" ] }
- Fields:
bytes (List[int] | None)
logprob (float)
token (str)
top_logprobs (List[autogen_core.models._types.TopLogprob] | None)
- field top_logprobs: List[TopLogprob] | None = None#
- class ModelFamily(*args: Any, **kwargs: Any)[source]#
Bases:
object
A model family is a group of models that share similar characteristics from a capabilities perspective. This is different from discrete supported features such as vision, function calling, and JSON output.
This namespace class holds constants for the model families that AutoGen understands. Other families definitely exist and can be represented by a string, however, AutoGen will treat them as unknown.
- GPT_5 = 'gpt-5'#
- GPT_41 = 'gpt-41'#
- GPT_45 = 'gpt-45'#
- GPT_4O = 'gpt-4o'#
- O1 = 'o1'#
- O3 = 'o3'#
- O4 = 'o4'#
- GPT_4 = 'gpt-4'#
- GPT_35 = 'gpt-35'#
- R1 = 'r1'#
- GEMINI_1_5_FLASH = 'gemini-1.5-flash'#
- GEMINI_1_5_PRO = 'gemini-1.5-pro'#
- GEMINI_2_0_FLASH = 'gemini-2.0-flash'#
- GEMINI_2_5_PRO = 'gemini-2.5-pro'#
- GEMINI_2_5_FLASH = 'gemini-2.5-flash'#
- CLAUDE_3_HAIKU = 'claude-3-haiku'#
- CLAUDE_3_SONNET = 'claude-3-sonnet'#
- CLAUDE_3_OPUS = 'claude-3-opus'#
- CLAUDE_3_5_HAIKU = 'claude-3-5-haiku'#
- CLAUDE_3_5_SONNET = 'claude-3-5-sonnet'#
- CLAUDE_3_7_SONNET = 'claude-3-7-sonnet'#
- CLAUDE_4_OPUS = 'claude-4-opus'#
- CLAUDE_4_SONNET = 'claude-4-sonnet'#
- LLAMA_3_3_8B = 'llama-3.3-8b'#
- LLAMA_3_3_70B = 'llama-3.3-70b'#
- LLAMA_4_SCOUT = 'llama-4-scout'#
- LLAMA_4_MAVERICK = 'llama-4-maverick'#
- CODESTRAL = 'codestral'#
- OPEN_CODESTRAL_MAMBA = 'open-codestral-mamba'#
- MISTRAL = 'mistral'#
- MINISTRAL = 'ministral'#
- PIXTRAL = 'pixtral'#
- UNKNOWN = 'unknown'#
- ANY#
Alias of
Literal['gpt-5', 'gpt-41', 'gpt-45', 'gpt-4o', 'o1', 'o3', 'o4', 'gpt-4', 'gpt-35', 'r1', 'gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-2.0-flash', 'gemini-2.5-pro', 'gemini-2.5-flash', 'claude-3-haiku', 'claude-3-sonnet', 'claude-3-opus', 'claude-3-5-haiku', 'claude-3-5-sonnet', 'claude-3-7-sonnet', 'claude-4-opus', 'claude-4-sonnet', 'llama-3.3-8b', 'llama-3.3-70b', 'llama-4-scout', 'llama-4-maverick', 'codestral', 'open-codestral-mamba', 'mistral', 'ministral', 'pixtral', 'unknown']
- class ModelInfo[source]#
Bases:
TypedDict
ModelInfo is a dict that contains information about the properties of a model. It is expected in the model_info property of a model client.
We expect this to grow over time as we add more features.
- json_output: Required[bool]#
True if the model supports JSON output, otherwise False.
Note: this is different from structured JSON output.
- family: Required[Literal['gpt-5', 'gpt-41', 'gpt-45', 'gpt-4o', 'o1', 'o3', 'o4', 'gpt-4', 'gpt-35', 'r1', 'gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-2.0-flash', 'gemini-2.5-pro', 'gemini-2.5-flash', 'claude-3-haiku', 'claude-3-sonnet', 'claude-3-opus', 'claude-3-5-haiku', 'claude-3-5-sonnet', 'claude-3-7-sonnet', 'claude-4-opus', 'claude-4-sonnet', 'llama-3.3-8b', 'llama-3.3-70b', 'llama-4-scout', 'llama-4-maverick', 'codestral', 'open-codestral-mamba', 'mistral', 'ministral', 'pixtral', 'unknown'] | str]#
The model family should be one of the constants from
ModelFamily, or a string representing an unknown model family.
- validate_model_info(model_info: ModelInfo) → None[source]#
Validates the model_info dictionary.
- Raises:
ValueError – If the model_info dictionary is missing required fields.