Managing State#
So far, we have discussed how to build components in a multi-agent application: agents, teams, and termination conditions. In many cases, it is useful to save the state of these components to disk and load them back later. This is particularly useful in a web application, where stateless endpoints respond to requests and must load the state of the application from persistent storage.
In this notebook, we discuss how to save and load the state of agents, teams, and termination conditions.
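As a rough illustration of that pattern, here is a minimal sketch of a stateless request handler that rebuilds the team on every call, restores its state from a JSON file, runs the task, and writes the updated state back. It relies on the save_state() and load_state() methods introduced below; the handler name, the file path, and the choice of a JSON file as the store are illustrative only.
import json
import os

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

STATE_FILE = "team_state.json"


async def handle_request(task: str) -> str:
    # Build a fresh team for this request; no state is kept in memory between calls.
    model_client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06")
    agent = AssistantAgent("assistant_agent", model_client=model_client, system_message="You are a helpful assistant")
    team = RoundRobinGroupChat([agent], termination_condition=MaxMessageTermination(max_messages=2))

    # Restore the previous conversation state, if any.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            await team.load_state(json.load(f))

    result = await team.run(task=task)

    # Persist the updated state for the next request.
    with open(STATE_FILE, "w") as f:
        json.dump(await team.save_state(), f)

    await model_client.close()
    return result.messages[-1].content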
Saving and Loading Agents#
We can get the state of an agent by calling the save_state() method on an AssistantAgent.
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
model_client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06")
assistant_agent = AssistantAgent(
    name="assistant_agent",
    system_message="You are a helpful assistant",
    model_client=model_client,
)
# Use asyncio.run(...) when running in a script.
response = await assistant_agent.on_messages(
[TextMessage(content="Write a 3 line poem on lake tangayika", source="user")], CancellationToken()
)
print(response.chat_message)
await model_client.close()
In Tanganyika's embrace so wide and deep,
Ancient waters cradle secrets they keep,
Echoes of time where horizons sleep.
agent_state = await assistant_agent.save_state()
print(agent_state)
{'type': 'AssistantAgentState', 'version': '1.0.0', 'llm_messages': [{'content': 'Write a 3 line poem on lake tangayika', 'source': 'user', 'type': 'UserMessage'}, {'content': "In Tanganyika's embrace so wide and deep, \nAncient waters cradle secrets they keep, \nEchoes of time where horizons sleep. ", 'source': 'assistant_agent', 'type': 'AssistantMessage'}]}
model_client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06")
new_assistant_agent = AssistantAgent(
    name="assistant_agent",
    system_message="You are a helpful assistant",
    model_client=model_client,
)
await new_assistant_agent.load_state(agent_state)
# Use asyncio.run(...) when running in a script.
response = await new_assistant_agent.on_messages(
[TextMessage(content="What was the last line of the previous poem you wrote", source="user")], CancellationToken()
)
print(response.chat_message)
await model_client.close()
The last line of the poem was: "Echoes of time where horizons sleep."
Note
For AssistantAgent, its state consists of the model_context. If you write your own custom agent, consider overriding the save_state() and load_state() methods to customize the behavior. The default implementations save and load an empty state.
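As a sketch of what such an override can look like, the toy agent below keeps a simple reply counter as its state. The class name and state keys are illustrative, and the base-class method signatures shown here may vary slightly between AgentChat versions.
from typing import Any, Mapping, Sequence

from autogen_agentchat.agents import BaseChatAgent
from autogen_agentchat.base import Response
from autogen_agentchat.messages import ChatMessage, TextMessage
from autogen_core import CancellationToken


class CountingAgent(BaseChatAgent):
    """Toy agent whose only state is the number of times it has replied."""

    def __init__(self, name: str) -> None:
        super().__init__(name, description="An agent that counts its replies.")
        self._reply_count = 0

    @property
    def produced_message_types(self) -> Sequence[type[ChatMessage]]:
        return (TextMessage,)

    async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:
        self._reply_count += 1
        return Response(chat_message=TextMessage(content=f"Reply number {self._reply_count}", source=self.name))

    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        self._reply_count = 0

    async def save_state(self) -> Mapping[str, Any]:
        # Persist the counter so it survives a save/load round trip.
        return {"reply_count": self._reply_count}

    async def load_state(self, state: Mapping[str, Any]) -> None:
        # Restore the counter from a previously saved state dictionary.
        self._reply_count = state["reply_count"]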
Saving and Loading Teams#
We can get the state of a team by calling save_state on the team, and load it back by calling load_state on the team.
When we call save_state on a team, it saves the state of all the agents in the team.
We will begin by creating a simple RoundRobinGroupChat team with a single agent and asking it to write a poem.
model_client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06")
# Define a team.
assistant_agent = AssistantAgent(
    name="assistant_agent",
    system_message="You are a helpful assistant",
    model_client=model_client,
)
agent_team = RoundRobinGroupChat([assistant_agent], termination_condition=MaxMessageTermination(max_messages=2))
# Run the team and stream messages to the console.
stream = agent_team.run_stream(task="Write a beautiful poem 3-line about lake tangayika")
# Use asyncio.run(...) when running in a script.
await Console(stream)
# Save the state of the agent team.
team_state = await agent_team.save_state()
---------- user ----------
Write a beautiful poem 3-line about lake tangayika
---------- assistant_agent ----------
In Tanganyika's gleam, beneath the azure skies,
Whispers of ancient waters, in tranquil guise,
Nature's mirror, where dreams and serenity lie.
[Prompt tokens: 29, Completion tokens: 34]
---------- Summary ----------
Number of messages: 2
Finish reason: Maximum number of messages 2 reached, current message count: 2
Total prompt tokens: 29
Total completion tokens: 34
Duration: 0.71 seconds
If we reset the team (simulating re-instantiation of the team) and ask the question What was the last line of the poem you wrote?, we see that the team is unable to answer, since there is no reference to the previous run.
await agent_team.reset()
stream = agent_team.run_stream(task="What was the last line of the poem you wrote?")
await Console(stream)
---------- user ----------
What was the last line of the poem you wrote?
---------- assistant_agent ----------
I'm sorry, but I am unable to recall or access previous interactions, including any specific poem I may have composed in our past conversations. If you like, I can write a new poem for you.
[Prompt tokens: 28, Completion tokens: 40]
---------- Summary ----------
Number of messages: 2
Finish reason: Maximum number of messages 2 reached, current message count: 2
Total prompt tokens: 28
Total completion tokens: 40
Duration: 0.70 seconds
TaskResult(messages=[TextMessage(source='user', models_usage=None, content='What was the last line of the poem you wrote?', type='TextMessage'), TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=28, completion_tokens=40), content="I'm sorry, but I am unable to recall or access previous interactions, including any specific poem I may have composed in our past conversations. If you like, I can write a new poem for you.", type='TextMessage')], stop_reason='Maximum number of messages 2 reached, current message count: 2')
Next, we load the state of the team and ask the same question. We see that the team is able to accurately return the last line of the poem it wrote.
print(team_state)
# Load team state.
await agent_team.load_state(team_state)
stream = agent_team.run_stream(task="What was the last line of the poem you wrote?")
await Console(stream)
{'type': 'TeamState', 'version': '1.0.0', 'agent_states': {'group_chat_manager/a55364ad-86fd-46ab-9449-dcb5260b1e06': {'type': 'RoundRobinManagerState', 'version': '1.0.0', 'message_thread': [{'source': 'user', 'models_usage': None, 'content': 'Write a beautiful poem 3-line about lake tangayika', 'type': 'TextMessage'}, {'source': 'assistant_agent', 'models_usage': {'prompt_tokens': 29, 'completion_tokens': 34}, 'content': "In Tanganyika's gleam, beneath the azure skies, \nWhispers of ancient waters, in tranquil guise, \nNature's mirror, where dreams and serenity lie.", 'type': 'TextMessage'}], 'current_turn': 0, 'next_speaker_index': 0}, 'collect_output_messages/a55364ad-86fd-46ab-9449-dcb5260b1e06': {}, 'assistant_agent/a55364ad-86fd-46ab-9449-dcb5260b1e06': {'type': 'ChatAgentContainerState', 'version': '1.0.0', 'agent_state': {'type': 'AssistantAgentState', 'version': '1.0.0', 'llm_messages': [{'content': 'Write a beautiful poem 3-line about lake tangayika', 'source': 'user', 'type': 'UserMessage'}, {'content': "In Tanganyika's gleam, beneath the azure skies, \nWhispers of ancient waters, in tranquil guise, \nNature's mirror, where dreams and serenity lie.", 'source': 'assistant_agent', 'type': 'AssistantMessage'}]}, 'message_buffer': []}}, 'team_id': 'a55364ad-86fd-46ab-9449-dcb5260b1e06'}
---------- user ----------
What was the last line of the poem you wrote?
---------- assistant_agent ----------
The last line of the poem I wrote is:
"Nature's mirror, where dreams and serenity lie."
[Prompt tokens: 86, Completion tokens: 22]
---------- Summary ----------
Number of messages: 2
Finish reason: Maximum number of messages 2 reached, current message count: 2
Total prompt tokens: 86
Total completion tokens: 22
Duration: 0.96 seconds
TaskResult(messages=[TextMessage(source='user', models_usage=None, content='What was the last line of the poem you wrote?', type='TextMessage'), TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=86, completion_tokens=22), content='The last line of the poem I wrote is: \n"Nature\'s mirror, where dreams and serenity lie."', type='TextMessage')], stop_reason='Maximum number of messages 2 reached, current message count: 2')
Persisting State (File or Database)#
In many cases, we may want to persist the state of the team to disk (or a database) and load it back later. The state is a dictionary that can be serialized to a file or written to a database.
import json
## save state to disk
with open("coding/team_state.json", "w") as f:
    json.dump(team_state, f)
## load state from disk
with open("coding/team_state.json", "r") as f:
    team_state = json.load(f)
new_agent_team = RoundRobinGroupChat([assistant_agent], termination_condition=MaxMessageTermination(max_messages=2))
await new_agent_team.load_state(team_state)
stream = new_agent_team.run_stream(task="What was the last line of the poem you wrote?")
await Console(stream)
await model_client.close()
---------- user ----------
What was the last line of the poem you wrote?
---------- assistant_agent ----------
The last line of the poem I wrote is:
"Nature's mirror, where dreams and serenity lie."
[Prompt tokens: 86, Completion tokens: 22]
---------- Summary ----------
Number of messages: 2
Finish reason: Maximum number of messages 2 reached, current message count: 2
Total prompt tokens: 86
Total completion tokens: 22
Duration: 0.72 seconds
TaskResult(messages=[TextMessage(source='user', models_usage=None, content='What was the last line of the poem you wrote?', type='TextMessage'), TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=86, completion_tokens=22), content='The last line of the poem I wrote is: \n"Nature\'s mirror, where dreams and serenity lie."', type='TextMessage')], stop_reason='Maximum number of messages 2 reached, current message count: 2')
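The same dictionary can just as well be written to a database instead of a file. Below is a minimal sketch using SQLite from the Python standard library; the database file, table layout, and the chosen team id are illustrative and not part of the AutoGen API.
import json
import sqlite3

# Store each team's serialized state as a JSON string keyed by an id of our choosing.
conn = sqlite3.connect("team_state.db")
conn.execute("CREATE TABLE IF NOT EXISTS team_state (team_id TEXT PRIMARY KEY, state TEXT)")

# Save: serialize the state dictionary and upsert it by team id.
conn.execute(
    "INSERT OR REPLACE INTO team_state (team_id, state) VALUES (?, ?)",
    ("my_team", json.dumps(team_state)),
)
conn.commit()

# Load: read the JSON back; the resulting dictionary can be passed to load_state().
row = conn.execute("SELECT state FROM team_state WHERE team_id = ?", ("my_team",)).fetchone()
restored_state = json.loads(row[0])
conn.close()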