feat: support Memory.graph_store.llm independent from Memory.llm #2046

Open · wants to merge 3 commits into base: main

Conversation

@jtanningbed jtanningbed commented Nov 24, 2024

Description

This PR addresses a limitation in the memory system's LLM configuration that currently requires the main memory.llm to use OpenAI's function calling format. The issue stems from the _search function in the graph memory implementation being tightly coupled to OpenAI's tool format, which blocks usage with LLMs that use different function/tool calling schemas (like Anthropic's tools).

GraphStoreConfig already defines an optional llm: LlmConfig field, but the current logic only checks for the existence of graph_store.llm.provider when overriding the default "openai_structured" provider, and it builds Memory.graph_store.llm exclusively from the top-level Memory.llm config. This effectively restricts Memory LLM configurations to providers that adhere to the OpenAI FunctionDefinition spec.

This change allows the main memory interface to use any LLM provider while keeping graph operations (which require OpenAI's tool format) isolated. Until the tool definitions are made provider-agnostic, graph store operations will remain limited to LLMs that follow the OpenAI tool schema.
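
With this change, a user could configure the two LLMs independently. The sketch below illustrates the intended shape using mem0's dict-based config style; the exact keys and provider names are assumptions based on this PR's description, not the shipped schema:

```python
# Hypothetical mem0 configuration separating the conversation LLM from the
# graph store LLM. Keys follow mem0's dict-based config convention; exact
# field names may differ from the released schema.
config = {
    "llm": {
        # Main conversation LLM: any provider, e.g. Anthropic
        "provider": "anthropic",
        "config": {"model": "claude-3-opus-20240229", "temperature": 0.7},
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",  # illustrative connection details
            "username": "neo4j",
            "password": "password",
        },
        # Graph-specific LLM: must still follow the OpenAI tool schema
        "llm": {
            "provider": "openai",
            "config": {"model": "gpt-4-turbo-preview"},
        },
    },
}
```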

Key changes:

  • Added support for separate LLM configuration in graph store operations
  • Updated MemoryGraph initialization to properly handle graph-specific LLM config
  • Clear separation between conversation LLM and graph operation LLM
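
The fallback behavior described above can be sketched as a small helper; the function name and signature here are illustrative, not the PR's actual code:

```python
# Sketch of the intended config-resolution logic: prefer an explicit
# graph_store.llm config, otherwise fall back to the main Memory.llm config
# for backwards compatibility. Names are illustrative.
def resolve_graph_llm_config(memory_llm_config, graph_store_llm_config=None):
    """Return the LLM config the graph store should use."""
    if graph_store_llm_config is not None:
        # Graph-specific override takes precedence
        return graph_store_llm_config
    # No override: previous behavior, reuse the main Memory LLM config
    return memory_llm_config
```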

Fixes # (issue)

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

  • Very minor change: fully backwards compatible with the previous implementation, and it leverages an existing, mostly unused configuration field
  • Tested by integrating mem0 with AutoGen and Anthropic in another project I'm working on:
    2024-11-24 03:53:12,189 - anthropic._base_client - DEBUG - Sending HTTP Request: POST https://api.anthropic.com/v1/messages
    2024-11-24 03:53:12,189 - httpcore.connection - DEBUG - connect_tcp.started host='api.anthropic.com' port=443 local_address=None timeout=600 socket_options=None
    2024-11-24 03:53:12,192 - openai._base_client - DEBUG - Request options: {'method': 'post', 'url': '/chat/completions', 'files': None, 'json_data': {'messages': [{'role': 'system', 'content': "You are a smart assistant who understands the entities, their types, and relations in a given text. If user message contains self reference such as 'I', 'me', 'my' etc. then use test_user as the source node. Extract the entities."}, {'role': 'user', 'content': 'The capital of France is Paris.'}], 'model': 'gpt-4-turbo-preview', 'max_tokens': 4096, 'temperature': 0.7, 'tool_choice': 'auto', 'tools': [{'type': 'function', 'function': {'name': 'search', 'description': 'Search for nodes and relations in the graph.', 'strict': True, 'parameters': {'type': 'object', 'properties': {'nodes': {'type': 'array', 'items': {'type': 'string'}, 'description': 'List of nodes to search for.'}, 'relations': {'type': 'array', 'items': {'type': 'string'}, 'description': 'List of relations to search for.'}}, 'required': ['nodes', 'relations'], 'additionalProperties': False}}}], 'top_p': 0}}
    2024-11-24 03:53:12,192 - openai._base_client - DEBUG - Sending HTTP Request: POST https://api.openai.com/v1/chat/completions
    2024-11-24 03:53:12,192 - httpcore.connection - DEBUG - connect_tcp.started host='api.openai.com' port=443 local_address=None timeout=5.0 socket_options=None
    2024-11-24 03:53:12,199 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x106c2bcb0>
    2024-11-24 03:53:12,199 - httpcore.connection - DEBUG - start_tls.started ssl_context=<ssl.SSLContext object at 0x1447d23d0> server_hostname='api.anthropic.com' timeout=600
    2024-11-24 03:53:12,199 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x116e5e6f0>
    2024-11-24 03:53:12,199 - httpcore.connection - DEBUG - start_tls.started ssl_context=<ssl.SSLContext object at 0x155cd3d50> server_hostname='api.openai.com' timeout=5.0
    2024-11-24 03:53:12,219 - httpcore.connection - DEBUG - start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x116e5e600>
    2024-11-24 03:53:12,219 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'POST']>
    2024-11-24 03:53:12,219 - httpcore.http11 - DEBUG - send_request_headers.complete
    2024-11-24 03:53:12,220 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'POST']>
    2024-11-24 03:53:12,222 - httpcore.http11 - DEBUG - send_request_body.complete
    2024-11-24 03:53:12,222 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'POST']>
    2024-11-24 03:53:12,224 - httpcore.connection - DEBUG - start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x106c2be30>
    2024-11-24 03:53:12,225 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'POST']>
    2024-11-24 03:53:12,225 - httpcore.http11 - DEBUG - send_request_headers.complete
    2024-11-24 03:53:12,225 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'POST']>
    2024-11-24 03:53:12,226 - httpcore.http11 - DEBUG - send_request_body.complete
    2024-11-24 03:53:12,226 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'POST']>
    2024-11-24 03:53:14,879 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sun, 24 Nov 2024 08:53:14 GMT'), (b'Content-Type', b'application/json'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'anthropic-ratelimit-requests-limit', b'1000'), (b'anthropic-ratelimit-requests-remaining', b'999'), (b'anthropic-ratelimit-requests-reset', b'2024-11-24T08:53:12Z'), (b'anthropic-ratelimit-input-tokens-limit', b'40000'), (b'anthropic-ratelimit-input-tokens-remaining', b'40000'), (b'anthropic-ratelimit-input-tokens-reset', b'2024-11-24T08:53:14Z'), (b'anthropic-ratelimit-output-tokens-limit', b'8000'), (b'anthropic-ratelimit-output-tokens-remaining', b'8000'), (b'anthropic-ratelimit-output-tokens-reset', b'2024-11-24T08:53:14Z'), (b'anthropic-ratelimit-tokens-limit', b'48000'), (b'anthropic-ratelimit-tokens-remaining', b'48000'), (b'anthropic-ratelimit-tokens-reset', b'2024-11-24T08:53:14Z'), (b'request-id', b'req_01LLdsha4zvtDV4eTQDJBpzj'), (b'via', b'1.1 google'), (b'CF-Cache-Status', b'DYNAMIC'), (b'X-Robots-Tag', b'none'), (b'Server', b'cloudflare'), (b'CF-RAY', b'8e782ad00b58676f-ATL'), (b'Content-Encoding', b'gzip')])
    2024-11-24 03:53:14,879 - httpx - INFO - HTTP Request: POST https://api.anthropic.com/v1/messages "HTTP/1.1 200 OK"
    2024-11-24 03:53:14,879 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'POST']>
    2024-11-24 03:53:14,879 - httpcore.http11 - DEBUG - receive_response_body.complete
    2024-11-24 03:53:14,880 - httpcore.http11 - DEBUG - response_closed.started
    2024-11-24 03:53:14,880 - httpcore.http11 - DEBUG - response_closed.complete
    2024-11-24 03:53:14,880 - anthropic._base_client - DEBUG - HTTP Response: POST https://api.anthropic.com/v1/messages "200 OK" Headers({'date': 'Sun, 24 Nov 2024 08:53:14 GMT', 'content-type': 'application/json', 'transfer-encoding': 'chunked', 'connection': 'keep-alive', 'anthropic-ratelimit-requests-limit': '1000', 'anthropic-ratelimit-requests-remaining': '999', 'anthropic-ratelimit-requests-reset': '2024-11-24T08:53:12Z', 'anthropic-ratelimit-input-tokens-limit': '40000', 'anthropic-ratelimit-input-tokens-remaining': '40000', 'anthropic-ratelimit-input-tokens-reset': '2024-11-24T08:53:14Z', 'anthropic-ratelimit-output-tokens-limit': '8000', 'anthropic-ratelimit-output-tokens-remaining': '8000', 'anthropic-ratelimit-output-tokens-reset': '2024-11-24T08:53:14Z', 'anthropic-ratelimit-tokens-limit': '48000', 'anthropic-ratelimit-tokens-remaining': '48000', 'anthropic-ratelimit-tokens-reset': '2024-11-24T08:53:14Z', 'request-id': 'req_01LLdsha4zvtDV4eTQDJBpzj', 'via': '1.1 google', 'cf-cache-status': 'DYNAMIC', 'x-robots-tag': 'none', 'server': 'cloudflare', 'cf-ray': '8e782ad00b58676f-ATL', 'content-encoding': 'gzip'})
    2024-11-24 03:53:14,880 - anthropic._base_client - DEBUG - request_id: req_01LLdsha4zvtDV4eTQDJBpzj
    2024-11-24 03:53:14,882 - openai._base_client - DEBUG - Request options: {'method': 'post', 'url': '/embeddings', 'files': None, 'post_parser': <function Embeddings.create..parser at 0x1450c89a0>, 'json_data': {'input': ['The capital of France is Paris'], 'model': 'text-embedding-3-small', 'encoding_format': 'base64'}}
    2024-11-24 03:53:14,882 - openai._base_client - DEBUG - Sending HTTP Request: POST https://api.openai.com/v1/embeddings
    2024-11-24 03:53:14,882 - httpcore.connection - DEBUG - connect_tcp.started host='api.openai.com' port=443 local_address=None timeout=5.0 socket_options=None
    2024-11-24 03:53:14,891 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x116e5c0e0>
    2024-11-24 03:53:14,891 - httpcore.connection - DEBUG - start_tls.started ssl_context=<ssl.SSLContext object at 0x1447cb750> server_hostname='api.openai.com' timeout=5.0
    2024-11-24 03:53:14,903 - httpcore.connection - DEBUG - start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x116e5f200>
    2024-11-24 03:53:14,903 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'POST']>
    2024-11-24 03:53:14,903 - httpcore.http11 - DEBUG - send_request_headers.complete
    2024-11-24 03:53:14,903 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'POST']>
    2024-11-24 03:53:14,903 - httpcore.http11 - DEBUG - send_request_body.complete
    2024-11-24 03:53:14,903 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'POST']>
    2024-11-24 03:53:15,258 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sun, 24 Nov 2024 08:53:15 GMT'), (b'Content-Type', b'application/json'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'access-control-allow-origin', b''), (b'access-control-expose-headers', b'X-Request-ID'), (b'openai-model', b'text-embedding-3-small'), (b'openai-organization', b'user-c5lzfxolohvf9t9d9ydybprb'), (b'openai-processing-ms', b'120'), (b'openai-version', b'2020-10-01'), (b'strict-transport-security', b'max-age=31536000; includeSubDomains; preload'), (b'x-ratelimit-limit-requests', b'5000'), (b'x-ratelimit-limit-tokens', b'1000000'), (b'x-ratelimit-remaining-requests', b'4999'), (b'x-ratelimit-remaining-tokens', b'999993'), (b'x-ratelimit-reset-requests', b'12ms'), (b'x-ratelimit-reset-tokens', b'0s'), (b'x-request-id', b'req_a3d28ec9e413f819397cb768aad6f418'), (b'CF-Cache-Status', b'DYNAMIC'), (b'Set-Cookie', b'__cf_bm=KbmYT.W_TepriyrbCZ2ZgQYiPg6ubWk0dolTe_zOVsc-1732438395-1.0.1.1-qByw.GppwMie9NvPTyBIRO8EzvIo1iQfBTUMMDalgUSNrA3tZWWfkojCFYYc8lJut5afMaqEQUPWmMqXchIE9A; path=/; expires=Sun, 24-Nov-24 09:23:15 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'X-Content-Type-Options', b'nosniff'), (b'Set-Cookie', b'_cfuvid=LcAuv0OP1d.cZhDQl4VvwDas.UbyKNvx5g4qVNYTMzw-1732438395350-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'Server', b'cloudflare'), (b'CF-RAY', b'8e782ae0ce6053a9-ATL'), (b'Content-Encoding', b'gzip'), (b'alt-svc', b'h3=":443"; ma=86400')])
    2024-11-24 03:53:15,259 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
    2024-11-24 03:53:15,259 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'POST']>
    2024-11-24 03:53:15,260 - httpcore.http11 - DEBUG - receive_response_body.complete
    2024-11-24 03:53:15,260 - httpcore.http11 - DEBUG - response_closed.started
    2024-11-24 03:53:15,260 - httpcore.http11 - DEBUG - response_closed.complete
    2024-11-24 03:53:15,260 - openai._base_client - DEBUG - HTTP Response: POST https://api.openai.com/v1/embeddings "200 OK" Headers([('date', 'Sun, 24 Nov 2024 08:53:15 GMT'), ('content-type', 'application/json'), ('transfer-encoding', 'chunked'), ('connection', 'keep-alive'), ('access-control-allow-origin', ''), ('access-control-expose-headers', 'X-Request-ID'), ('openai-model', 'text-embedding-3-small'), ('openai-organization', 'user-c5lzfxolohvf9t9d9ydybprb'), ('openai-processing-ms', '120'), ('openai-version', '2020-10-01'), ('strict-transport-security', 'max-age=31536000; includeSubDomains; preload'), ('x-ratelimit-limit-requests', '5000'), ('x-ratelimit-limit-tokens', '1000000'), ('x-ratelimit-remaining-requests', '4999'), ('x-ratelimit-remaining-tokens', '999993'), ('x-ratelimit-reset-requests', '12ms'), ('x-ratelimit-reset-tokens', '0s'), ('x-request-id', 'req_a3d28ec9e413f819397cb768aad6f418'), ('cf-cache-status', 'DYNAMIC'), ('set-cookie', '__cf_bm=KbmYT.W_TepriyrbCZ2ZgQYiPg6ubWk0dolTe_zOVsc-1732438395-1.0.1.1-qByw.GppwMie9NvPTyBIRO8EzvIo1iQfBTUMMDalgUSNrA3tZWWfkojCFYYc8lJut5afMaqEQUPWmMqXchIE9A; path=/; expires=Sun, 24-Nov-24 09:23:15 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), ('x-content-type-options', 'nosniff'), ('set-cookie', '_cfuvid=LcAuv0OP1d.cZhDQl4VvwDas.UbyKNvx5g4qVNYTMzw-1732438395350-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), ('server', 'cloudflare'), ('cf-ray', '8e782ae0ce6053a9-ATL'), ('content-encoding', 'gzip'), ('alt-svc', 'h3=":443"; ma=86400')])
    2024-11-24 03:53:15,260 - openai._base_client - DEBUG - request_id: req_a3d28ec9e413f819397cb768aad6f418
    2024-11-24 03:53:15,263 - root - INFO - Total existing memories: 0
    2024-11-24 03:53:15,264 - anthropic._base_client - DEBUG - Request options: {'method': 'post', 'url': '/v1/messages', 'timeout': 600, 'files': None, 'json_data': {'max_tokens': 4096, 'messages': [{'role': 'user', 'content': 'You are a smart memory manager which controls the memory of a system.\n You can perform four operations: (1) add into the memory, (2) update the memory, (3) delete from the memory, and (4) no change.\n\n Based on the above four operations, the memory will change.\n\n Compare newly retrieved facts with the existing memory. For each new fact, decide whether to:\n - ADD: Add it to the memory as a new element\n - UPDATE: Update an existing memory element\n - DELETE: Delete an existing memory element\n - NONE: Make no change (if the fact is already present or irrelevant)\n\n There are specific guidelines to select which operation to perform:\n\n 1. Add: If the retrieved facts contain new information not present in the memory, then you have to add it by generating a new ID in the id field.\n - Example:\n - Old Memory:\n [\n {\n "id" : "0",\n "text" : "User is a software engineer"\n }\n ]\n - Retrieved facts: ["Name is John"]\n - New Memory:\n {\n "memory" : [\n {\n "id" : "0",\n "text" : "User is a software engineer",\n "event" : "NONE"\n },\n {\n "id" : "1",\n "text" : "Name is John",\n "event" : "ADD"\n }\n ]\n\n }\n\n 2. Update: If the retrieved facts contain information that is already present in the memory but the information is totally different, then you have to update it. \n If the retrieved fact contains information that conveys the same thing as the elements present in the memory, then you have to keep the fact which has the most information. 
\n Example (a) -- if the memory contains "User likes to play cricket" and the retrieved fact is "Loves to play cricket with friends", then update the memory with the retrieved facts.\n Example (b) -- if the memory contains "Likes cheese pizza" and the retrieved fact is "Loves cheese pizza", then you do not need to update it because they convey the same information.\n If the direction is to update the memory, then you have to update it.\n Please keep in mind while updating you have to keep the same ID.\n Please note to return the IDs in the output from the input IDs only and do not generate any new ID.\n - Example:\n - Old Memory:\n [\n {\n "id" : "0",\n "text" : "I really like cheese pizza"\n },\n {\n "id" : "1",\n "text" : "User is a software engineer"\n },\n {\n "id" : "2",\n "text" : "User likes to play cricket"\n }\n ]\n - Retrieved facts: ["Loves chicken pizza", "Loves to play cricket with friends"]\n - New Memory:\n {\n "memory" : [\n {\n "id" : "0",\n "text" : "Loves cheese and chicken pizza",\n "event" : "UPDATE",\n "old_memory" : "I really like cheese pizza"\n },\n {\n "id" : "1",\n "text" : "User is a software engineer",\n "event" : "NONE"\n },\n {\n "id" : "2",\n "text" : "Loves to play cricket with friends",\n "event" : "UPDATE",\n "old_memory" : "User likes to play cricket"\n }\n ]\n }\n\n\n 3. Delete: If the retrieved facts contain information that contradicts the information present in the memory, then you have to delete it. 
Or if the direction is to delete the memory, then you have to delete it.\n Please note to return the IDs in the output from the input IDs only and do not generate any new ID.\n - Example:\n - Old Memory:\n [\n {\n "id" : "0",\n "text" : "Name is John"\n },\n {\n "id" : "1",\n "text" : "Loves cheese pizza"\n }\n ]\n - Retrieved facts: ["Dislikes cheese pizza"]\n - New Memory:\n {\n "memory" : [\n {\n "id" : "0",\n "text" : "Name is John",\n "event" : "NONE"\n },\n {\n "id" : "1",\n "text" : "Loves cheese pizza",\n "event" : "DELETE"\n }\n ]\n }\n\n 4. No Change: If the retrieved facts contain information that is already present in the memory, then you do not need to make any changes.\n - Example:\n - Old Memory:\n [\n {\n "id" : "0",\n "text" : "Name is John"\n },\n {\n "id" : "1",\n "text" : "Loves cheese pizza"\n }\n ]\n - Retrieved facts: ["Name is John"]\n - New Memory:\n {\n "memory" : [\n {\n "id" : "0",\n "text" : "Name is John",\n "event" : "NONE"\n },\n {\n "id" : "1",\n "text" : "Loves cheese pizza",\n "event" : "NONE"\n }\n ]\n }\n\n Below is the current content of my memory which I have collected till now. You have to update it in the following format only:\n\n \n []\n \n\n The new retrieved facts are mentioned in the triple backticks. You have to analyze the new retrieved facts and determine whether these facts should be added, updated, or deleted in the memory.\n\n \n [\'The capital of France is Paris\']\n \n\n Follow the instruction mentioned below:\n - Do not return anything from the custom few shot prompts provided above.\n - If the current memory is empty, then you have to add the new retrieved facts to the memory.\n - You should return the updated memory in only JSON format as shown below. 
The memory key should be the same if no changes are made.\n - If there is an addition, generate a new key and add the new memory corresponding to it.\n - If there is a deletion, the memory key-value pair should be removed from the memory.\n - If there is an update, the ID key should remain the same and only the value needs to be updated.\n\n Do not return anything except the JSON format.\n '}], 'model': 'claude-3-opus-20240229', 'system': '', 'temperature': 0.7, 'top_p': 0}}
    2024-11-24 03:53:15,266 - anthropic._base_client - DEBUG - Sending HTTP Request: POST https://api.anthropic.com/v1/messages
    2024-11-24 03:53:15,266 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'POST']>
    2024-11-24 03:53:15,266 - httpcore.http11 - DEBUG - send_request_headers.complete
    2024-11-24 03:53:15,267 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'POST']>
    2024-11-24 03:53:15,267 - httpcore.http11 - DEBUG - send_request_body.complete
    2024-11-24 03:53:15,267 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'POST']>
    ^C2024-11-24 03:53:18,965 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sun, 24 Nov 2024 08:53:19 GMT'), (b'Content-Type', b'application/json'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'anthropic-ratelimit-requests-limit', b'1000'), (b'anthropic-ratelimit-requests-remaining', b'999'), (b'anthropic-ratelimit-requests-reset', b'2024-11-24T08:53:15Z'), (b'anthropic-ratelimit-input-tokens-limit', b'40000'), (b'anthropic-ratelimit-input-tokens-remaining', b'40000'), (b'anthropic-ratelimit-input-tokens-reset', b'2024-11-24T08:53:19Z'), (b'anthropic-ratelimit-output-tokens-limit', b'8000'), (b'anthropic-ratelimit-output-tokens-remaining', b'8000'), (b'anthropic-ratelimit-output-tokens-reset', b'2024-11-24T08:53:19Z'), (b'anthropic-ratelimit-tokens-limit', b'48000'), (b'anthropic-ratelimit-tokens-remaining', b'48000'), (b'anthropic-ratelimit-tokens-reset', b'2024-11-24T08:53:19Z'), (b'request-id', b'req_01BYUvsvS5ftRxpx8qwKvQAS'), (b'via', b'1.1 google'), (b'CF-Cache-Status', b'DYNAMIC'), (b'X-Robots-Tag', b'none'), (b'Server', b'cloudflare'), (b'CF-RAY', b'8e782ae31c4c676f-ATL'), (b'Content-Encoding', b'gzip')])
    2024-11-24 03:53:18,965 - httpx - INFO - HTTP Request: POST https://api.anthropic.com/v1/messages "HTTP/1.1 200 OK"
    2024-11-24 03:53:18,965 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'POST']>
    2024-11-24 03:53:18,966 - httpcore.http11 - DEBUG - receive_response_body.complete
    2024-11-24 03:53:18,966 - httpcore.http11 - DEBUG - response_closed.started
    2024-11-24 03:53:18,966 - httpcore.http11 - DEBUG - response_closed.complete
    2024-11-24 03:53:18,966 - anthropic._base_client - DEBUG - HTTP Response: POST https://api.anthropic.com/v1/messages "200 OK" Headers({'date': 'Sun, 24 Nov 2024 08:53:19 GMT', 'content-type': 'application/json', 'transfer-encoding': 'chunked', 'connection': 'keep-alive', 'anthropic-ratelimit-requests-limit': '1000', 'anthropic-ratelimit-requests-remaining': '999', 'anthropic-ratelimit-requests-reset': '2024-11-24T08:53:15Z', 'anthropic-ratelimit-input-tokens-limit': '40000', 'anthropic-ratelimit-input-tokens-remaining': '40000', 'anthropic-ratelimit-input-tokens-reset': '2024-11-24T08:53:19Z', 'anthropic-ratelimit-output-tokens-limit': '8000', 'anthropic-ratelimit-output-tokens-remaining': '8000', 'anthropic-ratelimit-output-tokens-reset': '2024-11-24T08:53:19Z', 'anthropic-ratelimit-tokens-limit': '48000', 'anthropic-ratelimit-tokens-remaining': '48000', 'anthropic-ratelimit-tokens-reset': '2024-11-24T08:53:19Z', 'request-id': 'req_01BYUvsvS5ftRxpx8qwKvQAS', 'via': '1.1 google', 'cf-cache-status': 'DYNAMIC', 'x-robots-tag': 'none', 'server': 'cloudflare', 'cf-ray': '8e782ae31c4c676f-ATL', 'content-encoding': 'gzip'})
    2024-11-24 03:53:18,966 - anthropic._base_client - DEBUG - request_id: req_01BYUvsvS5ftRxpx8qwKvQAS
    2024-11-24 03:53:18,966 - root - INFO - {'id': '0', 'text': 'The capital of France is Paris', 'event': 'ADD'}
    2024-11-24 03:53:18,966 - root - INFO - Creating memory with data='The capital of France is Paris'
    2024-11-24 03:53:18,984 - mem0.vector_stores.qdrant - INFO - Inserting 1 vectors into collection mem0
    ^C2024-11-24 03:53:20,890 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sun, 24 Nov 2024 08:53:20 GMT'), (b'Content-Type', b'application/json'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'access-control-expose-headers', b'X-Request-ID'), (b'openai-organization', b'user-c5lzfxolohvf9t9d9ydybprb'), (b'openai-processing-ms', b'8562'), (b'openai-version', b'2020-10-01'), (b'x-ratelimit-limit-requests', b'500'), (b'x-ratelimit-limit-tokens', b'450000'), (b'x-ratelimit-remaining-requests', b'499'), (b'x-ratelimit-remaining-tokens', b'445837'), (b'x-ratelimit-reset-requests', b'120ms'), (b'x-ratelimit-reset-tokens', b'555ms'), (b'x-request-id', b'req_88783e8f5ee62fc03cb6cbc352057df5'), (b'strict-transport-security', b'max-age=31536000; includeSubDomains; preload'), (b'CF-Cache-Status', b'DYNAMIC'), (b'Set-Cookie', b'__cf_bm=KUKrAjD6KB37csI2ow.k6bWCR.rhBBcxMEHJ7vVov7U-1732438400-1.0.1.1-G8rrOBsbYCDFjuUVMLDBZKJqqHH2w7_swlFp5wTqucKHQa.ukBVkuPYtghbWI6iFBsXn7HBDPCTR4dWGpdUTMg; path=/; expires=Sun, 24-Nov-24 09:23:20 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'X-Content-Type-Options', b'nosniff'), (b'Set-Cookie', b'_cfuvid=IeGMetCoJLPlY18dYwQjWyra3b9zdpT7uKat7AkhLzo-1732438400982-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'Server', b'cloudflare'), (b'CF-RAY', b'8e782ad00a8eadd7-ATL'), (b'Content-Encoding', b'gzip'), (b'alt-svc', b'h3=":443"; ma=86400')])
    2024-11-24 03:53:20,893 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
    2024-11-24 03:53:20,894 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'POST']>
    2024-11-24 03:53:20,894 - httpcore.http11 - DEBUG - receive_response_body.complete
    2024-11-24 03:53:20,895 - httpcore.http11 - DEBUG - response_closed.started
    2024-11-24 03:53:20,895 - httpcore.http11 - DEBUG - response_closed.complete
    ...

Test configuration:

  • Main LLM: Claude-3-opus (Anthropic)
  • Graph Store LLM: GPT-4-turbo (OpenAI)
  • Neo4j for graph storage
  • Python 3.12

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules
  • I have checked my code and corrected any misspellings

Future Considerations

  1. Consider making tool definitions provider-agnostic to support more LLM providers for graph operations
  2. Add validation to ensure graph store LLM config uses a compatible provider
  3. Document provider compatibility requirements in configuration schema
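
Item 2 above could look something like the following; the compatibility set and function name are hypothetical, not an authoritative list of providers:

```python
# Hypothetical validation for future consideration 2: reject graph store LLM
# providers that do not emit OpenAI-style tool calls. The provider set below
# is illustrative only.
OPENAI_TOOL_COMPATIBLE = {"openai", "openai_structured", "azure_openai"}

def validate_graph_llm_provider(provider: str) -> None:
    """Raise if the graph store LLM provider cannot use the OpenAI tool schema."""
    if provider not in OPENAI_TOOL_COMPATIBLE:
        raise ValueError(
            f"Graph store LLM provider '{provider}' does not support the "
            "OpenAI tool schema required by graph operations."
        )
```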

CLAassistant commented Nov 24, 2024

CLA assistant check
All committers have signed the CLA.
