
Tool calls not working as expected when called in parallel #2610

Open
henryf3 opened this issue Dec 3, 2024 · 8 comments
Comments


henryf3 commented Dec 3, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangGraph/LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangGraph/LangChain rather than my code.
  • I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.

Example Code

# tool function
import json

TRACE_ID = "trace_id"  # key used in the tool response (matches the printed output below)

def publish_final_answer(
    final_answer: str,
    index: int,
):
    print("\n\t\tMy index is", index, "\n")

    is_final_answer = True
    try:
        response = json.loads(final_answer, strict=False)
    except Exception as e:
        response = f"Tool Error: Final answer is not a valid JSON: {e}"
        is_final_answer = False

    tool_response = {
        "message": response,
        "is_final_answer": is_final_answer,
        TRACE_ID: index,
    }

    print("Tool response:", tool_response)

    return json.dumps(tool_response)

Error Message and Stack Trace (if applicable)

No response

Description

[image: graph diagram]
This is my graph. I am executing two agents in parallel (sql_ret_0 and sql_ret_1); each calls a tool inside its own ToolNode (sql_ret_tools_0 and sql_ret_tools_1 respectively). To identify each instance of my agent I use an idx (0/1), which I pass to the tool when calling it. This part works correctly, since the tool calls look like this:

INFO:root:Tools calls: [{'name': 'publish_final_answer', 'args': {'index': 0, 'final_answer': ''}, 'id': '071ca8f1-8ae0-43ae-8e01-9e018995df33', 'type': 'tool_call'}]
INFO:root:Tools calls: [{'name': 'publish_final_answer', 'args': {'index': 1, 'final_answer': ''}, 'id': 'f36a865c-94c5-4141-8a3b-bbfefe8c7b76', 'type': 'tool_call'}]

But the tool is being called twice with the arguments from the second node (sql_ret_1) only. I added a print inside the tool to show the index, and it prints the same value twice:


                My index is 1 

Tool response: {'message': {'answer': "I'm here to help"}, 'is_final_answer': True, 'trace_id': 1}

                My index is 1 

Tool response: {'message': {'answer': "I'm here to help"}, 'is_final_answer': True, 'trace_id': 1}

The graph seems to be built correctly, as you can see from the diagram, and everything works perfectly when I instantiate only one sql_ret node.

System Info

langchain==0.3.1
langchain-anthropic==0.2.1
langchain-core==0.3.21
langchain-openai==0.2.1
langchain-text-splitters==0.3.0

platform mac
python version 3.11

gbaian10 (Contributor) commented Dec 3, 2024

Could you provide some sample code?

henryf3 (Author) commented Dec 4, 2024

@gbaian10 sorry for the delay. Here I uploaded a working minimal example so you can test it yourself. Note that I use an abstraction over LangGraph, but that shouldn't be a problem.

gbaian10 (Contributor) commented Dec 5, 2024

import operator
from typing import Annotated, TypedDict

from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    aggregate: Annotated[list, operator.add]


class ReturnNodeValue:
    def __init__(self, node_secret: str) -> None:
        self._value = node_secret

    def __call__(self, state: State) -> State:
        print(f"Adding {self._value} to {state['aggregate']}")
        return {"aggregate": [self._value]}


builder = StateGraph(State)
for s in "abcdef":
    builder.add_node(s, ReturnNodeValue(f"I'm {s.upper()}"))

builder.add_edge(START, "a")
builder.add_edge("a", "b")
builder.add_edge("a", "c")
builder.add_edge("b", "d")
builder.add_edge("c", "e")
builder.add_edge(["d", "e"], "f")
builder.add_edge("f", END)

graph = builder.compile()
graph.get_graph().draw_mermaid_png(output_file_path="example.png")

graph.invoke({"aggregate": []})

[image: example.png graph diagram]

Output:

Adding I'm A to []
Adding I'm B to ["I'm A"]
Adding I'm C to ["I'm A"]
Adding I'm D to ["I'm A", "I'm B", "I'm C"]
Adding I'm E to ["I'm A", "I'm B", "I'm C"]
Adding I'm F to ["I'm A", "I'm B", "I'm C", "I'm D", "I'm E"]

You can refer to this example: when you reach D or E, both B and C have already completed, and C is stored as the last message in your state's messages.

Here, D is equivalent to your tool node (which uses ToolNode), and ToolNode always takes the last AIMessage. In the example above, this means both D and E will only read C, and nobody reads B.

idx=0 corresponds to B (the second to last), and idx=1 corresponds to C (the last). This is why you see 1 twice: both tool nodes only read the last message.
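The last-message behavior can be illustrated without LangGraph at all. This is a pure-Python sketch of the mechanics (AIMsg and tool_node here are illustrative stand-ins, not real LangGraph APIs): both parallel branches append to the shared messages list before either tool node runs, and a ToolNode-style step only executes the tool calls of the final message.

```python
from dataclasses import dataclass

@dataclass
class AIMsg:
    tool_calls: list

# Shared state: the "messages" key uses an append reducer, so both
# parallel branches contribute to the same list.
state = {"messages": []}

# sql_ret_0 and sql_ret_1 both finish before either tool node runs,
# so both AIMessages are already in the state at that point.
state["messages"].append(AIMsg([{"name": "publish_final_answer", "args": {"index": 0}}]))
state["messages"].append(AIMsg([{"name": "publish_final_answer", "args": {"index": 1}}]))

def tool_node(state):
    # ToolNode-style selection: only the *last* message's tool calls run
    last = state["messages"][-1]
    return [call["args"]["index"] for call in last.tool_calls]

# Both sql_ret_tools_0 and sql_ret_tools_1 see index 1
print(tool_node(state), tool_node(state))  # [1] [1]
```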

henryf3 (Author) commented Dec 5, 2024

@gbaian10 I see. Then is it not possible to achieve my workflow?

gbaian10 (Contributor) commented Dec 5, 2024

[image: annotated graph diagram]

I think 0 and 1 only need one set of nodes (red and green), and the preceding node (usually an LLM) should have the ability to call multiple tools.

For example, in the scenario below, I asked a question, and it generated two tool calls, executing the same tool twice simultaneously.

from dotenv import load_dotenv
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode
from rich import get_console

load_dotenv()


@tool
def get_weather(city: str) -> str:
    """Get the weather for a specific city"""
    return f"It's sunny in {city}!"


tools = [get_weather]
model = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)


def model_node(state: MessagesState) -> MessagesState:
    return {"messages": [model.invoke(state["messages"])]}


def print_msg(state: MessagesState) -> None:
    get_console().print(state["messages"])


graph_builder = StateGraph(MessagesState)

graph_builder.add_node(model_node)
graph_builder.add_node(ToolNode(tools=tools))
graph_builder.add_node(print_msg)

graph_builder.add_edge(START, model_node.__name__)
graph_builder.add_edge(model_node.__name__, "tools")
graph_builder.add_edge("tools", print_msg.__name__)
graph_builder.add_edge(print_msg.__name__, END)

app = graph_builder.compile()
app.invoke(MessagesState(messages=["What's the weather in Paris and Tokyo?"]))

[image: printed message list showing two parallel tool calls]

henryf3 (Author) commented Dec 5, 2024

@gbaian10 I don't see blue in the picture 😁, but my tools are intended to iterate n times with the main node (where the LLM is called), and I am not sure if your approach covers that. What do you think?

gbaian10 (Contributor) commented Dec 5, 2024

@gbaian10 I don't see blue in the picture 😁, but my tools are intended to iterate n times with the main node (where the LLM is called), and I am not sure if your approach covers that. What do you think?

Red 😅

I'm not quite sure about your situation.
If it's just a simple iterative process, perhaps parallel execution isn't necessary?
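If sequential execution is acceptable, the iterative part can be sketched in plain Python. Everything below (run_branch, decision_maker, the stopping condition) is an illustrative stand-in, not code from this thread:

```python
def run_branch(idx: int, max_iters: int = 3) -> str:
    """Stand-in for one sql_ret agent iterating with its tools."""
    answer = ""
    for step in range(max_iters):
        # real code would call the LLM and its tools here
        answer = f"candidate answer from branch {idx} (step {step})"
        final = step == max_iters - 1  # stand-in for is_final_answer
        if final:
            break
    return answer

def decision_maker(answers: list) -> str:
    """Stand-in for the node that picks the most accurate answer."""
    return answers[0]  # real code would score/compare the candidates

# Run the two branches one after the other instead of in parallel
answers = [run_branch(i) for i in range(2)]
best = decision_maker(answers)
print(best)
```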

henryf3 (Author) commented Dec 5, 2024

@gbaian10 Each sql_ret_idx agent needs to iterate using tools to generate an answer; once each node has an answer ready, I use a decision_maker node to select the most accurate one. That's why I have two paths (red/green) running at the same time.
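One way out (an assumption on my part, not something confirmed in this thread) is to stop sharing a single messages list between the branches: give each branch its own message channel in the state (or wrap each branch in a subgraph), so each ToolNode reads the last message of its own branch only. A pure-Python sketch of the idea, with illustrative names:

```python
# Each branch writes to its own channel instead of a shared "messages" list.
state = {"messages_0": [], "messages_1": []}

def agent(idx: int, state: dict) -> None:
    # stand-in for sql_ret_{idx}: record a tool call on the branch's channel
    state[f"messages_{idx}"].append({"tool_calls": [{"args": {"index": idx}}]})

def tool_node(idx: int, state: dict) -> list:
    # each tool node reads the last message of *its own* channel
    last = state[f"messages_{idx}"][-1]
    return [call["args"]["index"] for call in last["tool_calls"]]

agent(0, state)
agent(1, state)
print(tool_node(0, state), tool_node(1, state))  # [0] [1]
```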
