autogen_ext.models.anthropic#
- class AnthropicChatCompletionClient(**kwargs: Unpack)[source]#
Bases:
BaseAnthropicChatCompletionClient, Component[AnthropicClientConfigurationConfigModel]
Chat completion client for Anthropic's Claude models.
- Parameters:
model (str) – The Claude model to use (e.g., "claude-3-sonnet-20240229", "claude-3-opus-20240229").
api_key (str, optional) – Anthropic API key. Required if not present in environment variables.
base_url (str, optional) – Override the default API endpoint.
max_tokens (int, optional) – Maximum tokens in the response. Default is 4096.
temperature (float, optional) – Controls randomness. Lower values are more deterministic. Default is 1.0.
top_p (float, optional) – Controls diversity via nucleus sampling. Default is 1.0.
top_k (int, optional) – Controls diversity via top-k sampling. Default is -1 (disabled).
model_info (ModelInfo, optional) – The capabilities of the model. Required if using a custom model.
To use this client, you must install the Anthropic extension:
pip install "autogen-ext[anthropic]"
Example
import asyncio

from autogen_ext.models.anthropic import AnthropicChatCompletionClient
from autogen_core.models import UserMessage


async def main():
    anthropic_client = AnthropicChatCompletionClient(
        model="claude-3-sonnet-20240229",
        api_key="your-api-key",  # Optional if ANTHROPIC_API_KEY is set in environment
    )

    result = await anthropic_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
To load the client from a configuration:
from autogen_core.models import ChatCompletionClient

config = {
    "provider": "AnthropicChatCompletionClient",
    "config": {"model": "claude-3-sonnet-20240229"},
}

client = ChatCompletionClient.load_component(config)
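The configuration mapping can also carry the other constructor parameters. A minimal sketch of such a mapping; the accepted keys follow the AnthropicClientConfigurationConfigModel schema documented below, and the values here are illustrative:

```python
# Hypothetical configuration mapping; key names follow the
# AnthropicClientConfigurationConfigModel schema documented below.
config = {
    "provider": "AnthropicChatCompletionClient",
    "config": {
        "model": "claude-3-sonnet-20240229",
        "max_tokens": 4096,
        "temperature": 0.7,
        # "api_key" may be omitted when ANTHROPIC_API_KEY is set in the environment.
    },
}
```

A mapping of this shape would then be passed to ChatCompletionClient.load_component as in the example above.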
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
alias of
AnthropicClientConfigurationConfigModel
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.anthropic.AnthropicChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from becoming part of the module name.
- _to_config() → AnthropicClientConfigurationConfigModel[source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- classmethod _from_config(config: AnthropicClientConfigurationConfigModel) → Self[source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – The new instance of the component.
- class AnthropicBedrockChatCompletionClient(**kwargs: Unpack)[source]#
Bases:
BaseAnthropicChatCompletionClient, Component[AnthropicBedrockClientConfigurationConfigModel]
Chat completion client for Anthropic's Claude models on AWS Bedrock.
- Parameters:
model (str) – The Claude model to use (e.g., "claude-3-sonnet-20240229", "claude-3-opus-20240229").
api_key (str, optional) – Anthropic API key. Required if not present in environment variables.
base_url (str, optional) – Override the default API endpoint.
max_tokens (int, optional) – Maximum tokens in the response. Default is 4096.
temperature (float, optional) – Controls randomness. Lower values are more deterministic. Default is 1.0.
top_p (float, optional) – Controls diversity via nucleus sampling. Default is 1.0.
top_k (int, optional) – Controls diversity via top-k sampling. Default is -1 (disabled).
model_info (ModelInfo, optional) – The capabilities of the model. Required if using a custom model.
bedrock_info (BedrockInfo, optional) – The capabilities of the model in Bedrock. Required if using a model from AWS Bedrock.
To use this client, you must install the Anthropic extension:
pip install "autogen-ext[anthropic]"
Example
import asyncio

from autogen_ext.models.anthropic import AnthropicBedrockChatCompletionClient, BedrockInfo
from autogen_core.models import UserMessage, ModelInfo


async def main():
    anthropic_client = AnthropicBedrockChatCompletionClient(
        model="anthropic.claude-3-5-sonnet-20240620-v1:0",
        temperature=0.1,
        model_info=ModelInfo(
            vision=False, function_calling=True, json_output=False, family="unknown", structured_output=True
        ),
        bedrock_info=BedrockInfo(
            aws_access_key="<aws_access_key>",
            aws_secret_key="<aws_secret_key>",
            aws_session_token="<aws_session_token>",
            aws_region="<aws_region>",
        ),
    )

    result = await anthropic_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.anthropic.AnthropicBedrockChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from becoming part of the module name.
- _to_config() → AnthropicBedrockClientConfigurationConfigModel[source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- classmethod _from_config(config: AnthropicBedrockClientConfigurationConfigModel) → Self[source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – The new instance of the component.
- class BaseAnthropicChatCompletionClient(client: Any, *, create_args: Dict[str, Any], model_info: ModelInfo | None = None)[source]#
Bases:
ChatCompletionClient
- async create(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], tool_choice: Tool | Literal['auto', 'required', 'none'] = 'auto', json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) → CreateResult[source]#
Creates a single response from the model.
- Parameters:
messages (Sequence[LLMMessage]) – The messages to send to the model.
tools (Sequence[Tool | ToolSchema], optional) – The tools to use with the model. Defaults to [].
tool_choice (Tool | Literal["auto", "required", "none"], optional) – A single Tool object to force the model to use, “auto” to let the model choose any available tool, “required” to force tool usage, or “none” to disable tool usage. Defaults to “auto”.
json_output (Optional[bool | type[BaseModel]], optional) – Whether to use JSON mode, structured output, or neither. Defaults to None. If set to a Pydantic BaseModel type, it will be used as the output type for structured output. If set to a boolean, it will be used to determine whether to use JSON mode or not. If set to True, make sure to instruct the model to produce JSON output in the instruction or prompt.
extra_create_args (Mapping[str, Any], optional) – Extra arguments to pass to the underlying client. Defaults to {}.
cancellation_token (Optional[CancellationToken], optional) – A token for cancellation. Defaults to None.
- Returns:
CreateResult – The result of the model call.
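Tools can be passed either as Tool objects or as plain ToolSchema dictionaries. A minimal sketch of a hand-written schema; the tool name and parameters here are illustrative, not part of the API:

```python
# Hypothetical tool schema; ToolSchema is a plain dictionary whose
# "parameters" field is a JSON Schema object describing the arguments.
get_weather_schema = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# A schema of this shape could then be passed as:
#   await client.create(messages, tools=[get_weather_schema])
```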
- async create_stream(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], tool_choice: Tool | Literal['auto', 'required', 'none'] = 'auto', json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None, max_consecutive_empty_chunk_tolerance: int = 0) → AsyncGenerator[str | CreateResult, None][source]#
Creates an AsyncGenerator that yields a stream of completions based on the provided messages and tools.
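The stream yields incremental text chunks (str) followed by a final CreateResult. A minimal sketch of a consumption loop, exercised here against a stand-in async generator rather than a live client:

```python
import asyncio


async def collect_stream(stream):
    """Accumulate string chunks; the non-string item is the final result."""
    chunks, final = [], None
    async for item in stream:
        if isinstance(item, str):
            chunks.append(item)  # incremental text
        else:
            final = item  # the final CreateResult
    return "".join(chunks), final


# Stand-in for client.create_stream(...): yields text chunks, then a result.
async def fake_stream():
    for part in ["Paris", " is the capital."]:
        yield part
    yield {"finish_reason": "stop"}  # placeholder for a CreateResult


text, result = asyncio.run(collect_stream(fake_stream()))
```

With a real client, `fake_stream()` would be replaced by `client.create_stream(messages)`.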
- count_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) → int[source]#
Estimate the number of tokens used by messages and tools.
Note: This is an estimate based on common tokenization patterns and may not exactly match Anthropic's precise token counting for Claude models.
- remaining_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) → int[source]#
Calculate the remaining tokens based on the model's token limit.
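Because the counts are estimates, a common pattern is to keep a safety margin and drop the oldest messages until the estimate fits. A sketch with a stand-in counting function; a real caller would use client.count_tokens or client.remaining_tokens instead:

```python
def trim_to_budget(messages, count_fn, limit, margin=256):
    """Drop oldest messages until the estimated count fits under limit - margin."""
    trimmed = list(messages)
    while trimmed and count_fn(trimmed) > limit - margin:
        trimmed.pop(0)  # drop the oldest message first
    return trimmed


# Stand-in counter: ~4 characters per token is a rough heuristic.
def rough_count(messages):
    return sum(len(m) for m in messages) // 4


history = ["x" * 4000, "y" * 4000, "z" * 400]
kept = trim_to_budget(history, rough_count, limit=1500)  # drops the first entry
```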
- actual_usage() → RequestUsage[source]#
- total_usage() → RequestUsage[source]#
- property capabilities: ModelCapabilities#
- class AnthropicClientConfiguration[source]#
Bases:
BaseAnthropicClientConfiguration
- response_format: ResponseFormat | None#
- thinking: ThinkingConfig | None#
- model_capabilities: ModelCapabilities#
- class AnthropicBedrockClientConfiguration[source]#
Bases:
AnthropicClientConfiguration
- bedrock_info: BedrockInfo#
- response_format: ResponseFormat | None#
- thinking: ThinkingConfig | None#
- model_capabilities: ModelCapabilities#
- pydantic model AnthropicClientConfigurationConfigModel[source]#
Bases:
BaseAnthropicClientConfigurationConfigModel
Show JSON schema
{ "title": "AnthropicClientConfigurationConfigModel", "type": "object", "properties": { "model": { "title": "Model", "type": "string" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": 4096, "title": "Max Tokens" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": 1.0, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "top_k": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Top K" }, "stop_sequences": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop Sequences" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "metadata": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Metadata" }, "thinking": { "anyOf": [ { "$ref": "#/$defs/ThinkingConfigModel" }, { "type": "null" } ], "default": null }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api Key" }, "base_url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Base Url" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "tools": { "anyOf": [ { "items": { 
"type": "object" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Tools" }, "tool_choice": { "anyOf": [ { "enum": [ "auto", "any", "none" ], "type": "string" }, { "type": "object" }, { "type": "null" } ], "default": null, "title": "Tool Choice" } }, "$defs": { "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-5", "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "claude-4-opus", "claude-4-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", 
"type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object" ], "title": "Type", "type": "string" } }, "required": [ "type" ], "title": "ResponseFormat", "type": "object" }, "ThinkingConfigModel": { "description": "Configuration for thinking mode.", "properties": { "type": { "enum": [ "enabled", "disabled" ], "title": "Type", "type": "string" }, "budget_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Budget Tokens" } }, "required": [ "type" ], "title": "ThinkingConfigModel", "type": "object" } }, "required": [ "model" ] }
- Fields:
- pydantic model AnthropicBedrockClientConfigurationConfigModel[source]#
Bases:
AnthropicClientConfigurationConfigModel
Show JSON schema
{ "title": "AnthropicBedrockClientConfigurationConfigModel", "type": "object", "properties": { "model": { "title": "Model", "type": "string" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": 4096, "title": "Max Tokens" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": 1.0, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "top_k": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Top K" }, "stop_sequences": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop Sequences" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "metadata": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Metadata" }, "thinking": { "anyOf": [ { "$ref": "#/$defs/ThinkingConfigModel" }, { "type": "null" } ], "default": null }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api Key" }, "base_url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Base Url" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "tools": { "anyOf": [ { 
"items": { "type": "object" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Tools" }, "tool_choice": { "anyOf": [ { "enum": [ "auto", "any", "none" ], "type": "string" }, { "type": "object" }, { "type": "null" } ], "default": null, "title": "Tool Choice" }, "bedrock_info": { "anyOf": [ { "$ref": "#/$defs/BedrockInfoConfigModel" }, { "type": "null" } ], "default": null } }, "$defs": { "BedrockInfoConfigModel": { "properties": { "aws_access_key": { "format": "password", "title": "Aws Access Key", "type": "string", "writeOnly": true }, "aws_session_token": { "format": "password", "title": "Aws Session Token", "type": "string", "writeOnly": true }, "aws_region": { "title": "Aws Region", "type": "string" }, "aws_secret_key": { "format": "password", "title": "Aws Secret Key", "type": "string", "writeOnly": true } }, "required": [ "aws_access_key", "aws_session_token", "aws_region", "aws_secret_key" ], "title": "BedrockInfoConfigModel", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-5", "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", 
"gemini-2.5-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "claude-4-opus", "claude-4-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object" ], "title": "Type", "type": "string" } }, "required": [ "type" ], "title": "ResponseFormat", "type": "object" }, "ThinkingConfigModel": { "description": "Configuration for thinking mode.", "properties": { "type": { "enum": [ "enabled", "disabled" ], "title": "Type", "type": "string" }, "budget_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Budget Tokens" } }, "required": [ "type" ], "title": "ThinkingConfigModel", "type": "object" } }, "required": [ "model" ] }
- field bedrock_info: BedrockInfoConfigModel | None = None#
- pydantic model CreateArgumentsConfigModel[source]#
Bases:
BaseModel
Show JSON schema
{ "title": "CreateArgumentsConfigModel", "type": "object", "properties": { "model": { "title": "Model", "type": "string" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": 4096, "title": "Max Tokens" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": 1.0, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "top_k": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Top K" }, "stop_sequences": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop Sequences" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "metadata": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Metadata" }, "thinking": { "anyOf": [ { "$ref": "#/$defs/ThinkingConfigModel" }, { "type": "null" } ], "default": null } }, "$defs": { "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object" ], "title": "Type", "type": "string" } }, "required": [ "type" ], "title": "ResponseFormat", "type": "object" }, "ThinkingConfigModel": { "description": "Configuration for thinking mode.", "properties": { "type": { "enum": [ "enabled", "disabled" ], "title": "Type", "type": "string" }, "budget_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Budget Tokens" } }, "required": [ "type" ], "title": "ThinkingConfigModel", "type": "object" } }, "required": [ "model" ] }
- Fields:
- field response_format: ResponseFormat | None = None#
- field thinking: ThinkingConfigModel | None = None#
- class BedrockInfo[source]#
Bases:
TypedDict
BedrockInfo is a dictionary that contains information about the properties of a Bedrock model. It is expected to be used in the `bedrock_info` property of a model client.
- aws_access_key: Required[str]#
Access key for the AWS account, used to gain access to Bedrock models.
- aws_secret_key: Required[str]#
Secret access key for the AWS account, used to gain access to Bedrock models.
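Since BedrockInfo is a TypedDict, at runtime it is a plain dictionary. A sketch of the expected shape; the credential values are placeholders, and per the schema above all four keys are required:

```python
# Plain-dict form of BedrockInfo; all four keys are required.
bedrock_info = {
    "aws_access_key": "<aws_access_key>",
    "aws_secret_key": "<aws_secret_key>",
    "aws_session_token": "<aws_session_token>",
    "aws_region": "<aws_region>",
}
```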