TL;DR: it appears the bug happens here, where otherwise valid messages to be streamed get short-circuited and the generator yields nothing instead of the messages it should. The values of `chunk.event.split(CHECKPOINT_NAMESPACE_SEPARATOR)[0]` look like `messages/partial` or `messages/complete`, but since `updatedStreamModes` looks like `['messages', 'updates']`, they will never match...
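The mismatch can be reproduced in isolation. The sketch below is a standalone illustration, not the library's actual code: `CHECKPOINT_NAMESPACE_SEPARATOR` is assumed to be `"|"` here, and stripping the `/partial` / `/complete` suffix before comparing is only one possible fix.

```typescript
// Standalone sketch of the filtering described above (not RemoteGraph source).
// Assumption: CHECKPOINT_NAMESPACE_SEPARATOR is "|".
const CHECKPOINT_NAMESPACE_SEPARATOR = "|";

// The stream modes RemoteGraph is filtering against:
const updatedStreamModes = ["messages", "updates"];

// Events the server actually emits for token streaming:
const events = ["messages/partial", "messages/complete", "updates"];

for (const event of events) {
  // What the generator effectively compares today:
  const mode = event.split(CHECKPOINT_NAMESPACE_SEPARATOR)[0];
  const matchesAsIs = updatedStreamModes.includes(mode);

  // One possible fix: also strip the "/partial" or "/complete" suffix,
  // so "messages/partial" matches streamMode "messages".
  const matchesWithFix = updatedStreamModes.includes(mode.split("/")[0]);

  console.log(`${event}: as-is=${matchesAsIs}, with-fix=${matchesWithFix}`);
}
// "messages/partial" and "messages/complete" print as-is=false, with-fix=true,
// which is why the generator currently yields nothing for message chunks.
```
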
For a proof of concept, building on the examples from the above two guides, here is a simple example graph, plus tests showing that streaming works locally but not remotely:
import { StateGraph, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { StateAnnotation } from "./simple-messages-state.js";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { AIMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
  streaming: true,
});

const searchTool = tool(
  (_) => {
    // This is a placeholder for the actual implementation
    return "Cold, with a low of 3℃";
  },
  {
    name: "search",
    description:
      "Use to surf the web, fetch current information, check the weather, and retrieve other information.",
    schema: z.object({
      query: z.string().describe("The query to use in your search."),
    }),
  }
);

const tools = [searchTool];
const toolNode = new ToolNode(tools);
const boundModel = model.bindTools(tools);

const routeMessage = (state: typeof StateAnnotation.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If no tools are called, we can finish (respond to the user)
  if (!lastMessage?.tool_calls?.length) {
    return END;
  }
  // Otherwise, we continue and call the tools
  return "tools";
};

const callModel = async (state: typeof StateAnnotation.State) => {
  // For versions of @langchain/core < 0.2.3, you must call `.stream()`
  // and aggregate the message from chunks instead of calling `.invoke()`.
  const { messages } = state;
  const responseMessage = await boundModel.invoke(messages);
  return { messages: [responseMessage] };
};

// Define the graph
const workflow = new StateGraph(StateAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", routeMessage)
  .addEdge("tools", "agent");

export const exampleSimpleGraph = workflow.compile();
exampleSimpleGraph.name = "Example Simple Graph";
and here are passing (local) and failing (remote) tests:
Example Simple Streamed Graph
✓ should process input through the graph - local (2543 ms)
✕ should process input through the graph - remote (2431 ms)
import { isAIMessageChunk, isToolMessageChunk } from "@langchain/core/messages";
import { RemoteGraph } from "@langchain/langgraph/remote";
import { exampleSimpleGraph } from "../src/agent/example-simple-graph.js";

describe("Example Simple Streamed Graph", () => {
  async function testGraphStream(graph) {
    const options = { streamMode: "messages" };
    const stream = await graph.stream(
      { messages: [{ role: "user", content: "What's the current weather in Nepal?" }] },
      options
    );

    let streamedAIMessage = "";
    let streamedToolMessage = "";
    for await (const [message] of stream) {
      if (isToolMessageChunk(message)) {
        streamedToolMessage += message.content;
      } else if (isAIMessageChunk(message)) {
        streamedAIMessage += message.content;
      }
    }

    expect(streamedToolMessage).toBeDefined();
    expect(typeof streamedToolMessage).toBe("string");
    expect(streamedToolMessage).toBe("Cold, with a low of 3℃");
    expect(streamedAIMessage).toBeDefined();
    expect(typeof streamedAIMessage).toBe("string");
    expect(streamedAIMessage).toContain("cold");
  }

  // Increased timeout to 30 seconds
  it("should process input through the graph - local", async () => {
    await testGraphStream(exampleSimpleGraph);
  }, 30000);

  // Increased timeout to 30 seconds
  it("should process input through the graph - remote", async () => {
    const url = `http://0.0.0.0:8123`;
    const graphName = "example-simple-graph";
    const remoteGraph = new RemoteGraph({ graphId: graphName, url });
    await testGraphStream(remoteGraph);
  }, 30000);
});
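For reference, the remote test assumes the graph is registered with the local deployment under the id `example-simple-graph`. A minimal `langgraph.json` to that effect might look like the following; the exact file path and export name are assumptions based on the test's import:

```json
{
  "dependencies": ["."],
  "graphs": {
    "example-simple-graph": "./src/agent/example-simple-graph.ts:exampleSimpleGraph"
  },
  "env": ".env"
}
```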
Jronk changed the title (Nov 20, 2024): RemoteGraph's .stream() "message" streamMode is not working → RemoteGraph's .stream() "messages" streamMode is not working
For example, following the how-to guide "How to stream LLM tokens from your graph" together with "How to interact with the deployment using RemoteGraph" does not work: the messages are not streamed properly.