Global Search
In [1]
# Copyright (c) 2024 Microsoft Corporation.
# Licensed under the MIT License.
In [2]
import os
import pandas as pd
from graphrag.config.enums import ModelType
from graphrag.config.models.language_model_config import LanguageModelConfig
from graphrag.language_model.manager import ModelManager
from graphrag.query.indexer_adapters import (
read_indexer_communities,
read_indexer_entities,
read_indexer_reports,
)
from graphrag.query.structured_search.global_search.community_context import (
GlobalCommunityContext,
)
from graphrag.query.structured_search.global_search.search import GlobalSearch
from graphrag.tokenizer.get_tokenizer import get_tokenizer
Global Search example
The global search method generates answers by searching over all AI-generated community reports in a map-reduce fashion. This is a resource-intensive method, but it often gives good responses for questions that require an understanding of the dataset as a whole (e.g. What are the most significant values of the herbs mentioned in this notebook?).
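As a rough mental model only (this is not the GraphRAG implementation; map_fn and reduce_fn below are hypothetical stand-ins for the LLM calls that GlobalSearch issues internally), the map step turns each batch of community reports into a scored partial answer, and the reduce step merges the best partial answers into the final response:

# Illustrative sketch of the map-reduce flow over community reports.
# map_fn and reduce_fn stand in for LLM requests; GlobalSearch (used below)
# handles batching, scoring, and prompting for you.
def global_search_sketch(query, report_batches, map_fn, reduce_fn):
    partial_answers = [map_fn(query, batch) for batch in report_batches]  # map step
    ranked = sorted(partial_answers, key=lambda a: a["score"], reverse=True)
    return reduce_fn(query, ranked)  # reduce step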
LLM setup
In [3]
api_key = os.environ["GRAPHRAG_API_KEY"]
config = LanguageModelConfig(
api_key=api_key,
type=ModelType.Chat,
model_provider="openai",
model="gpt-4.1",
max_retries=20,
)
model = ModelManager().get_or_create_chat_model(
name="global_search",
model_type=ModelType.Chat,
config=config,
)
tokenizer = get_tokenizer(config)
api_key = os.environ["GRAPHRAG_API_KEY"] config = LanguageModelConfig( api_key=api_key, type=ModelType.Chat, model_provider="openai", model="gpt-4.1", max_retries=20, ) model = ModelManager().get_or_create_chat_model( name="global_search", model_type=ModelType.Chat, config=config, ) tokenizer = get_tokenizer(config)
Load community reports as context for global search
- Load all community reports in the community_reports table from GraphRAG, to be used as context data for global search.
- Load entities from the entities table from GraphRAG, to be used for calculating community weights for context ranking. Note that this is optional (if no entities are provided, we will not calculate community weights and will only use the rank attribute in the community reports table for context ranking).
- Load all communities in the communities table from GraphRAG, to be used to reconstruct the community graph hierarchy for dynamic community selection.
In [4]
# parquet files generated from indexing pipeline
INPUT_DIR = "./inputs/operation dulce"
COMMUNITY_TABLE = "communities"
COMMUNITY_REPORT_TABLE = "community_reports"
ENTITY_TABLE = "entities"
# community level in the Leiden community hierarchy from which we will load the community reports
# higher value means we use reports from more fine-grained communities (at the cost of higher computation cost)
COMMUNITY_LEVEL = 2
In [5]
community_df = pd.read_parquet(f"{INPUT_DIR}/{COMMUNITY_TABLE}.parquet")
entity_df = pd.read_parquet(f"{INPUT_DIR}/{ENTITY_TABLE}.parquet")
report_df = pd.read_parquet(f"{INPUT_DIR}/{COMMUNITY_REPORT_TABLE}.parquet")
communities = read_indexer_communities(community_df, report_df)
reports = read_indexer_reports(report_df, community_df, COMMUNITY_LEVEL)
entities = read_indexer_entities(entity_df, community_df, COMMUNITY_LEVEL)
print(f"Total report count: {len(report_df)}")
print(
f"Report count after filtering by community level {COMMUNITY_LEVEL}: {len(reports)}"
)
report_df.head()
community_df = pd.read_parquet(f"{INPUT_DIR}/{COMMUNITY_TABLE}.parquet") entity_df = pd.read_parquet(f"{INPUT_DIR}/{ENTITY_TABLE}.parquet") report_df = pd.read_parquet(f"{INPUT_DIR}/{COMMUNITY_REPORT_TABLE}.parquet") communities = read_indexer_communities(community_df, report_df) reports = read_indexer_reports(report_df, community_df, COMMUNITY_LEVEL) entities = read_indexer_entities(entity_df, community_df, COMMUNITY_LEVEL) print(f"Gesamte Anzahl Berichte: {len(report_df)}") print( f"Anzahl Berichte nach Filterung nach Community-Ebene {COMMUNITY_LEVEL}: {len(reports)}" ) report_df.head()
Total report count: 2
Report count after filtering by community level 2: 2
Out[5]
| | id | human_readable_id | community | level | parent | children | title | summary | full_content | rank | rating_explanation | findings | full_content_json | period | size |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 6c3a555680d647ac8be866a129c7b0ea | 0 | 0 | 0 | -1 | [] | Operation: Dulce and Dulce Base Exploration | The community revolves around 'Operation: Dulc... | # Operation: Dulce and Dulce Base Exploration\... | 8.5 | The impact severity rating is high due to the ... | [{'explanation': 'Operation: Dulce is a signif... | {\n "title": "Operation: Dulce and Dulce Ba... | 2025-03-04 | 7 |
| 1 | 0127331a1ea34b8ba19de2c2a4cb3bc9 | 1 | 1 | 0 | -1 | [] | Paranormal Military Squad and Operation: Dulce | The community is focused on the Paranormal Mi... | # Paranormal Military Squad and Operation: Dul... | 8.5 | The impact severity rating is high due to the ... | [{'explanation': 'Agent Alex Mercer is a key f... | {\n "title": "Paranormal Military Squad and... | 2025-03-04 | 9 |
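As an optional sanity check (not part of the original notebook), the entities and communities loaded above can be counted in the same way as the reports:

# Optional: confirm that entities and communities were loaded alongside the reports
print(f"Entity count: {len(entities)}")
print(f"Community count: {len(communities)}")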
Build global context based on community reports
In [6]
context_builder = GlobalCommunityContext(
community_reports=reports,
communities=communities,
entities=entities, # default to None if you don't want to use community weights for ranking
tokenizer=tokenizer,
)
Perform global search
In [7]
context_builder_params = {
"use_community_summary": False, # False means using full community reports. True means using community short summaries.
"shuffle_data": True,
"include_community_rank": True,
"min_community_rank": 0,
"community_rank_name": "rank",
"include_community_weight": True,
"community_weight_name": "occurrence weight",
"normalize_community_weight": True,
"max_tokens": 12_000, # change this based on the token limit you have on your model (if you are using a model with 8k limit, a good setting could be 5000)
"context_name": "Reports",
}
map_llm_params = {
"max_tokens": 1000,
"temperature": 0.0,
"response_format": {"type": "json_object"},
}
reduce_llm_params = {
"max_tokens": 2000, # change this based on the token limit you have on your model (if you are using a model with 8k limit, a good setting could be 1000-1500)
"temperature": 0.0,
}
context_builder_params = { "use_community_summary": False, # False bedeutet, die vollständigen Community-Berichte zu verwenden. True bedeutet, kurze Zusammenfassungen der Community zu verwenden. "shuffle_data": True, "include_community_rank": True, "min_community_rank": 0, "community_rank_name": "rank", "include_community_weight": True, "community_weight_name": "occurrence weight", "normalize_community_weight": True, "max_tokens": 12_000, # ändern Sie dies entsprechend dem Token-Limit Ihres Modells (wenn Sie ein Modell mit 8k-Limit verwenden, könnte eine gute Einstellung 5000 sein) "context_name": "Reports", } map_llm_params = { "max_tokens": 1000, "temperature": 0.0, "response_format": {"type": "json_object"}, } reduce_llm_params = { "max_tokens": 2000, # ändern Sie dies entsprechend dem Token-Limit Ihres Modells (wenn Sie ein Modell mit 8k-Limit verwenden, könnte eine gute Einstellung 1000-1500 sein) "temperature": 0.0, }
In [8]
search_engine = GlobalSearch(
model=model,
context_builder=context_builder,
tokenizer=tokenizer,
max_data_tokens=12_000, # change this based on the token limit you have on your model (if you are using a model with 8k limit, a good setting could be 5000)
map_llm_params=map_llm_params,
reduce_llm_params=reduce_llm_params,
allow_general_knowledge=False,  # setting this to True adds instructions that encourage the LLM to incorporate general knowledge into the response, which may increase hallucinations but can be useful in some use cases.
json_mode=True, # set this to False if your LLM model does not support JSON mode.
context_builder_params=context_builder_params,
concurrent_coroutines=32,
response_type="multiple paragraphs", # free form text describing the response type and format, can be anything, e.g. prioritized list, single paragraph, multiple paragraphs, multiple-page report
)
In [9]
result = await search_engine.search("What is operation dulce?")
print(result.response)
result = await search_engine.search("Was ist Operation Dulce?") print(result.response)
## Overview of Operation: Dulce

Operation: Dulce is a major mission undertaken by the Paranormal Military Squad, a specialized team tasked with investigating alien technology and its broader implications for humanity. The operation is centered on the exploration and investigation of the Dulce base, a highly secretive and mysterious location reputed to house advanced alien technology. The mission's complexity and significance make it a central focus for the community involved, as it connects all key entities and drives their actions [Data: Reports (0, 1)].

## Mission Objectives and Setting

The primary objective of Operation: Dulce is to navigate and uncover the secrets of the Dulce base. This facility is not only the main setting for the operation but also serves as the focal point for the team's efforts to understand and potentially secure alien technological assets. The exploration of the base is critical to achieving the operation's goals, as it may reveal information or artifacts with far-reaching consequences for humanity [Data: Reports (0, 1)].

## The Paranormal Military Squad

The operation is executed by the Paranormal Military Squad, an elite group composed of agents Alex Mercer, Taylor Cruz, Jordan Hayes, and Sam Rivera. Each member plays a crucial role in the mission, and their relationships and interactions with both the Dulce base and the alien technology are vital to the operation's dynamics and potential success. The team's expertise and cohesion are essential in navigating the challenges posed by the base and its secrets [Data: Reports (1)].

## Motivations and Implications

A strong sense of duty motivates the members of the Paranormal Military Squad to undertake Operation: Dulce. This sense of responsibility underscores the importance and complexity of the mission within the community. The operation is not only a technical or tactical endeavor but also a moral one, as the outcomes may have significant implications for the future of humanity and its relationship with alien technology [Data: Reports (0)].

## Conclusion

In summary, Operation: Dulce is a pivotal mission focused on the investigation of the Dulce base and its alien technology. It is carried out by the Paranormal Military Squad, whose members are driven by a profound sense of duty. The operation's success or failure may have lasting effects on humanity, making it a central and highly significant undertaking within the community [Data: Reports (0, 1)].
In [10]
# inspect the data used to build the context for the LLM responses
result.context_data["reports"]
Out[10]
| | id | title | occurrence weight | content | rank |
|---|---|---|---|---|---|
| 0 | 1 | Paranormal Military Squad and Operation: Dulce | 1.0 | # Paranormal Military Squad and Operation: Dul... | 8.5 |
| 1 | 0 | Operation: Dulce and Dulce Base Exploration | 1.0 | # Operation: Dulce and Dulce Base Exploration\... | 8.5 |
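Since the reports context is returned as a pandas DataFrame, it can also be persisted for later auditing if desired (an optional step, not part of the original notebook; the file name is just an example):

# Optional: save the context table that was sent to the LLM for this query
result.context_data["reports"].to_csv("global_search_context_reports.csv", index=False)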
In [11]
# inspect number of LLM calls and tokens
print(
f"LLM calls: {result.llm_calls}. Prompt tokens: {result.prompt_tokens}. Output tokens: {result.output_tokens}."
)
LLM calls: 2. Prompt tokens: 3467. Output tokens: 779.
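The same search_engine instance can be reused for follow-up questions; only the query string changes (the question below is just an illustrative example):

# Ask a follow-up question with the same engine and context configuration
result2 = await search_engine.search(
    "What are the main goals of the Paranormal Military Squad?"
)
print(result2.response)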