Bug Report: The model frequently generates repetitive token sequences. #368
Comments
Bug Report: Repetitive Token Generation in "gemini-1.5-flash" Model

Description of the Bug:
Example:
Steps to Reproduce:
Expected Behavior:
Actual Behavior:
Impact:
- Resource Waste: Tokens are wasted, increasing costs and exhausting API usage limits.
- Output Quality: The generated text becomes unusable, requiring additional API requests.
Reproduction Rate:
Workaround:
Request for Resolution:
Actual vs. Expected Behavior:
Expected Output:
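As a client-side stopgap for the behavior described above, a looping tail can be detected before the response is used. The sketch below is plain Python and does not depend on the Gemini SDK; the chunk-length bounds are illustrative assumptions, not tuned values:

```python
def has_repeating_tail(text: str, min_len: int = 4,
                       max_len: int = 20, repeats: int = 3) -> bool:
    """Heuristic: does the text end with the same chunk repeated `repeats` times?

    Tries candidate chunk lengths from min_len to max_len; these bounds
    are illustrative, not tuned values.
    """
    for n in range(min_len, max_len + 1):
        tail = text[-n * repeats:]
        if len(tail) == n * repeats and tail == tail[-n:] * repeats:
            return True
    return False

# A looping output is flagged; normal prose is not.
print(has_repeating_tail("The model keeps saying " + "the same thing " * 5))  # True
print(has_repeating_tail("A normal, varied sentence with no loops."))         # False
```

A caller could use this to discard a bad response early instead of consuming the full token budget on unusable text.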
Hi @Razaghallu786, could you please provide a bit more clarification on this? Does this happen with features like function calling or structured output, or just when running the above prompt?
Which temperature are you using? If you are using 0, can you try a higher one?
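Building on the suggestion above, one client-side mitigation is to retry the same prompt at progressively higher temperatures whenever the output looks stuck in a loop. The sketch below is a hypothetical wrapper, not SDK code: the `generate` callable stands in for a real model call (e.g. to gemini-1.5-flash), and the temperature ladder is an assumption rather than a recommended setting:

```python
from typing import Callable

def looks_repetitive(text: str, chunk: int = 15, repeats: int = 3) -> bool:
    """Crude check: does the text end with the same chunk repeated several times?"""
    tail = text[-chunk * repeats:]
    return len(tail) == chunk * repeats and tail == tail[-chunk:] * repeats

def generate_with_retry(generate: Callable[[str, float], str],
                        prompt: str,
                        temperatures=(0.0, 0.4, 0.8)) -> str:
    """Retry the prompt at increasing temperatures until the output stops looping."""
    text = ""
    for temp in temperatures:
        text = generate(prompt, temp)
        if not looks_repetitive(text):
            return text
    return text  # all attempts looped; return the last one anyway

# Stub standing in for a real model call:
def fake_generate(prompt: str, temperature: float) -> str:
    return "the same thing " * 5 if temperature == 0.0 else "A varied answer."

print(generate_with_retry(fake_generate, "Summarize Hamlet."))  # A varied answer.
```

Each retry costs extra tokens, so in practice a caller would also cap `max_output_tokens` on the request to bound the cost of a runaway loop.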
Description of the bug:
No response
Actual vs expected behavior:
No response
Any other information you'd like to share?
No response