====== LangGraph Basic Principles ======
🗂️ [[start|Back to start]]\\
🦜♻️🗂️ [[langgraph|LangGraph start]]\\
🦜♻️💫 [[langgraph-tests|LangGraph Tests]]\\

**[[https://
[[https://
===== Installing LangGraph =====
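💡 A minimal install sketch (assuming a standard pip setup; the examples below also use ''langchain_openai''):
<code bash>
pip install -U langgraph langchain_openai
</code>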
💡 Below are the components as they appear at the beginning of the [[https://
[[https://

**[[https://
----
The part below is still somewhat unfamiliar to me.
> The following two lines are crucial for the Human-in-the-Loop framework. To guarantee persistence, you must include a checkpointer when compiling the graph, which is needed to support interrupts. We use SqliteSaver with an in-memory SQLite database to store the state. To consistently interrupt before a specific node, we must pass the node's name to the compile method:
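💡 A minimal sketch of those two lines (assuming a ''StateGraph'' called ''workflow''; the node name "action" is hypothetical):
<code python>
from langgraph.checkpoint.sqlite import SqliteSaver

# In-memory SQLite checkpointer that persists the graph state
# (note: in newer langgraph releases, from_conn_string is a context manager)
memory = SqliteSaver.from_conn_string(":memory:")

# Include the checkpointer and always interrupt before the "action" node
graph = workflow.compile(checkpointer=memory, interrupt_before=["action"])
</code>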
----
Collect data with an agent and, once the data has been collected, take action.
>Prompt Generation from User Requirements: In this example we will create a chat bot that helps a user generate a prompt. It will first collect requirements from the user, and then will generate the prompt (and refine it based on user input). These are split into two separate states, and the LLM decides when to transition between them.
<code>
%%capture --no-stderr
%pip install -U langgraph langchain_openai
</code>
<code python>
import getpass
import os


def _set_env(var: str):
    # Prompt for the value if the variable is not already set
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
</code>
<code python>
from typing import List

from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI

from pydantic import BaseModel
</code>
<code python>
# System prompt for the information-gathering state
template = """Your job is to get information from a user about what type of prompt template they want to create.

You should get the following information from them:

- What the objective of the prompt is
- What variables will be passed into the prompt template
- Any constraints for what the output should NOT do
- Any requirements that the output MUST adhere to

If you are not able to discern this info, ask them to clarify! Do not attempt to wildly guess.

After you are able to discern all the information, call the relevant tool."""


def get_messages_info(messages):
    return [SystemMessage(content=template)] + messages


# Tool schema the LLM fills in once it has gathered all the requirements
class PromptInstructions(BaseModel):
    """Instructions on how to prompt the LLM."""

    objective: str
    variables: List[str]
    constraints: List[str]
    requirements: List[str]


llm = ChatOpenAI(temperature=0)
llm_with_tool = llm.bind_tools([PromptInstructions])


def info_chain(state):
    messages = get_messages_info(state["messages"])
    response = llm_with_tool.invoke(messages)
    return {"messages": [response]}
</code>
<code python>
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# New system prompt
prompt_system = """Based on the following requirements, write a good prompt template:

{reqs}"""


# Function to get the messages for the prompt
# Will only get messages AFTER the tool call
def get_prompt_messages(messages: list):
    tool_call = None
    other_msgs = []
    for m in messages:
        if isinstance(m, AIMessage) and m.tool_calls:
            tool_call = m.tool_calls[0]["args"]
        elif isinstance(m, ToolMessage):
            continue
        elif tool_call is not None:
            other_msgs.append(m)
    return [SystemMessage(content=prompt_system.format(reqs=tool_call))] + other_msgs


def prompt_gen_chain(state):
    messages = get_prompt_messages(state["messages"])
    response = llm.invoke(messages)
    return {"messages": [response]}
</code>
<code python>
from typing import Literal

from langgraph.graph import END


def get_state(state) -> Literal["add_tool_message", "info", "__end__"]:
    messages = state["messages"]
    if isinstance(messages[-1], AIMessage) and messages[-1].tool_calls:
        return "add_tool_message"
    elif not isinstance(messages[-1], HumanMessage):
        return END
    return "info"
</code>
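💡 A quick standalone check of the routing (hypothetical messages; assumes the definitions above):
<code python>
# An AI message carrying a tool call routes to "add_tool_message"
ai_with_tool = AIMessage(
    content="",
    tool_calls=[{"name": "PromptInstructions", "args": {}, "id": "call_1"}],
)
print(get_state({"messages": [ai_with_tool]}))  # add_tool_message

# A plain human message keeps the conversation in the info-gathering state
print(get_state({"messages": [HumanMessage(content="hi")]}))  # info

# Anything else (e.g. a plain AI reply) ends the turn
print(get_state({"messages": [AIMessage(content="ok")]}))  # __end__
</code>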
<code python>
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from typing import Annotated
from typing_extensions import TypedDict


class State(TypedDict):
    messages: Annotated[list, add_messages]


memory = MemorySaver()
workflow = StateGraph(State)
workflow.add_node("info", info_chain)
workflow.add_node("prompt", prompt_gen_chain)


@workflow.add_node
def add_tool_message(state: State):
    return {
        "messages": [
            ToolMessage(
                content="Prompt generated!",
                tool_call_id=state["messages"][-1].tool_calls[0]["id"],
            )
        ]
    }


workflow.add_conditional_edges("info", get_state, ["add_tool_message", "info", END])
workflow.add_edge("add_tool_message", "prompt")
workflow.add_edge("prompt", END)
workflow.add_edge(START, "info")
graph = workflow.compile(checkpointer=memory)
</code>
<code python>
from IPython.display import Image, display

# Render the compiled graph (requires a Jupyter environment)
display(Image(graph.get_graph().draw_mermaid_png()))
</code>
USING THE GRAPH:
<code python>
import uuid

config = {"configurable": {"thread_id": str(uuid.uuid4())}}
while True:
    user = input("User (q/Q to quit): ")
    print(f"User (q/Q to quit): {user}")
    if user in {"q", "Q"}:
        print("AI: Byebye")
        break
    output = None
    for output in graph.stream(
        {"messages": [HumanMessage(content=user)]}, config=config, stream_mode="updates"
    ):
        last_message = next(iter(output.values()))["messages"][-1]
        last_message.pretty_print()

    if output and "prompt" in output:
        print("Done!")
</code>
[[https://|Prompt Generation from User Requirements]]
----
[[https://
[[https://
**[[https://
[[https://
[[https://