
llama-index-memory-mem0: embedding dimension is always 0 #2049

Open
tslmy opened this issue Nov 24, 2024 · 1 comment

tslmy commented Nov 24, 2024

🐛 Describe the bug

```python
from llama_index.core import Settings
from llama_index.memory.mem0 import Mem0Memory

from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.openai_like import OpenAILike
from llama_index.agent.openai import OpenAIAgent

Settings.llm = OpenAILike(
    model="llama3.1",
    api_base="http://localhost:11434/v1",
    api_key="ollama",
    is_function_calling_model=True,
    is_chat_model=True,
)
Settings.embed_model = OllamaEmbedding(
    model_name="nomic-embed-text:latest",
)
memory_from_config = Mem0Memory.from_config(
    config={
        "vector_store": {
            "provider": "qdrant",
            "config": {
                "collection_name": "temp",
                "embedding_model_dims": 768,  # Change this according to your local model's dimensions
            },
        },
        "llm": {
            "provider": "ollama",
            "config": {
                "model": "llama3.1",
                "temperature": 0,
                "max_tokens": 8000,
                "ollama_base_url": "http://localhost:11434",
            },
        },
        "embedder": {
            "provider": "ollama",
            "config": {
                "model": "nomic-embed-text:latest",
                "ollama_base_url": "http://localhost:11434",
                "embedding_dims": 768,  # Change this according to your local model's dimensions
            },
        },
    },
    context={"user_id": "test"},
)
agent_runner = OpenAIAgent.from_tools(
    memory=memory_from_config,
)

if __name__ == "__main__":
    agent_runner.chat("Hi!")
```

Observed:

```
  File ".../.venv/lib/python3.12/site-packages/qdrant_client/local/distances.py", line 94, in cosine_similarity
    return np.dot(vectors, query)
           ^^^^^^^^^^^^^^^^^^^^^^
ValueError: shapes (0,768) and (0,) not aligned: 768 (dim 1) != 0 (dim 0)
```
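For context, the shape mismatch means the Qdrant collection is configured for 768-dimensional vectors while the query embedding mem0 produces has length 0 (matching the issue title). A quick sanity check, hypothetical and not part of the original report, can verify that the llama-index embedder itself returns 768-dimensional vectors, which would point the empty query vector at the mem0 side:

```python
# Hypothetical sanity check (not from the original report): confirm the
# llama-index Ollama embedder returns 768-dimensional vectors, so any
# zero-length query vector must originate inside mem0.
from llama_index.embeddings.ollama import OllamaEmbedding

embed_model = OllamaEmbedding(model_name="nomic-embed-text:latest")
vector = embed_model.get_query_embedding("Hi!")
print(len(vector))  # expected: 768
```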

Expected: No exceptions.

Versions:

```
requires-python = ">=3.12,<3.13"

name = "llama-index-memory-mem0"
version = "0.2.0"

name = "llama-index"
version = "0.12.1"

name = "mem0ai"
version = "0.1.32"
```
spike-spiegel-21 (Collaborator) commented

Hi @tslmy, we currently don't support OpenAIAgent in the llama-index integration; we will extend support to more agents. Please refer to this documentation: https://docs.mem0.ai/integrations/llama-index
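A minimal sketch of the documented pattern, assuming (per the linked docs) that llama-index's core agents such as ReActAgent are supported with Mem0Memory; it reuses the objects from the repro above:

```python
# Sketch under assumptions: ReActAgent stands in for an agent the Mem0
# integration supports; Settings.llm and memory_from_config come from the
# reporter's script above.
from llama_index.core.agent import ReActAgent

agent = ReActAgent.from_tools(
    tools=[],                   # add real tools as needed
    llm=Settings.llm,
    memory=memory_from_config,
)
agent.chat("Hi!")
```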
