
[Bug]: Receiving the response "Request failed" in chat with Copilot integration (intermittent) #399

Open
wright-io opened this issue Dec 17, 2024 · 5 comments

@wright-io

Describe the issue

Occasionally, I receive a "request failed" response from Copilot chat with CodeGate.

Steps to Reproduce

  • Install CodeGate and configure it to work with Copilot per the docs
  • Issue chat request in Copilot
  • Occasionally receive the "request failed" response

Operating System

macOS (Arm)

IDE and Version

VS Code 1.96.0

Extension and Version

Copilot extension 1.252.0

Provider

OpenAI

Model

GPT 4o

Logs

2024-12-17T18:50:42.017Z [error    ] Pipeline processing error: Expecting value: line 1 column 1 (char 0) module=pipeline pathname=/app/src/codegate/providers/copilot/pipeline.py
2024-12-17T18:51:35.017Z [error    ] Pipeline processing error: Expecting value: line 1 column 1 (char 0) module=pipeline pathname=/app/src/codegate/providers/copilot/pipeline.py
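For what it's worth, "Expecting value: line 1 column 1 (char 0)" is exactly the message Python's `json` module raises when asked to parse an empty or non-JSON string, which suggests the pipeline is attempting to decode a body that isn't (yet) valid JSON. A minimal sketch reproducing the message (the `parse_body` helper is hypothetical, not CodeGate's actual code):

```python
import json

try:
    json.loads("")  # empty body, e.g. a partially received request
except json.JSONDecodeError as exc:
    # Matches the pipeline log message exactly
    print(exc)  # Expecting value: line 1 column 1 (char 0)

def parse_body(raw: bytes):
    """Return the decoded JSON payload, or None if the body is empty or partial."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

print(parse_body(b'{"messages": []}'))  # {'messages': []}
print(parse_body(b""))                  # None
```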

Additional Context

Error response from Copilot:

Sorry, your request failed. Please try again. Request id: 6b142432-5ffc-43d1-8b4b-5230d239ba7a

[Screenshot of the error response in Copilot chat]

@lukehinds
Contributor

@wright-io do you recall if this involved several files or one significantly large file? I have seen this before and I believe I have a fix.

@jhrozek
Contributor

jhrozek commented Dec 17, 2024

I've been hammering the chat to reproduce the issue and have been able to intermittently.
I don't know what the root cause is yet, just saying it's not just you :-)

@lukehinds
Contributor

@jhrozek load up lots of big files (500+ lines); that seems to do it. I might have a fix for this I can push, I just need to finish off some TLS rework code.

@wright-io
Author

> @wright-io do you recall if this involved several files or one significantly large file? I have seen this before and I believe I have a fix.

I don't believe I had any files in the context, but there was a lot of info in the chat itself. I had it generating some code and iterating on that multiple times in the same chat, and then I sent an example of a file I wanted it to use as a model.

It was certainly hundreds of lines.

@jhrozek
Contributor

jhrozek commented Dec 17, 2024

> @jhrozek load up lots of big files (500+ lines); that seems to do it. I might have a fix for this I can push, I just need to finish off some TLS rework code.

I was able to trigger this without any context at all, just by asking a lot of nonsensical questions in quick succession. This might actually be the same issue as big files in the context, since our system prompts are quite large and make for a big message either way.
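If large messages are the trigger, one plausible mechanism (an assumption, not a confirmed root cause) is that a big request body arrives split across multiple network reads, and parsing a single partial read fails with "Expecting value". A hedged sketch of buffering until the payload parses, with a hypothetical `feed` helper:

```python
import json

def feed(buffer: bytearray, chunk: bytes):
    """Append a network chunk; return the parsed payload once complete, else None."""
    buffer.extend(chunk)
    try:
        return json.loads(buffer.decode())
    except json.JSONDecodeError:
        return None  # body still incomplete; wait for more data

buf = bytearray()
# First read holds only part of a large body: parsing it alone would fail
print(feed(buf, b'{"model": "gpt-4o", "mess'))  # None
# Second read completes the payload
print(feed(buf, b'ages": []}'))  # {'model': 'gpt-4o', 'messages': []}
```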
