Benchmark on SWE-Bench #415
Comments
Seconding this, but I'm not sure how it could be done. Also, what other benchmarks are worth testing?
erkinalp added a commit to erkinalp/devika that referenced this issue on Dec 18, 2024:

- Add Docker-based evaluation harness
- Implement comprehensive test coverage
- Add SWE-bench dependencies
- Support batch evaluation with proper error handling

Fixes stitionai#415
Co-Authored-By: Erkin Alp Güney <[email protected]>
erkinalp added a commit to erkinalp/devika that referenced this issue on Dec 18, 2024:

Co-Authored-By: Erkin Alp Güney <[email protected]>
It would be interesting to see the performance on SWE-Bench benchmarks, so that this project can be more clearly differentiated from the increasing number of other coding agents.
https://www.swebench.com/
https://github.com/princeton-nlp/SWE-bench
https://arxiv.org/abs/2310.06770
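For context, SWE-bench scores an agent by applying the patch it generates for each benchmark instance and re-running that instance's tests. The harness consumes a predictions file listing one entry per attempted instance. A minimal sketch of producing that file is below; the field names (`instance_id`, `model_name_or_path`, `model_patch`) follow the SWE-bench repository's documented predictions format, but the example instance ID and patch are placeholders, and the CLI invocation in the trailing comment should be checked against the current harness docs:

```python
import json

# One entry per benchmark instance the agent attempted.
# Field names follow the SWE-bench predictions schema:
#   instance_id        - which benchmark task the patch targets
#   model_name_or_path - identifier for the agent/model being evaluated
#   model_patch        - unified diff the agent produced for the repo
predictions = [
    {
        "instance_id": "django__django-11099",   # hypothetical example instance
        "model_name_or_path": "devika",
        "model_patch": "diff --git a/placeholder b/placeholder\n",  # placeholder patch
    },
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)

# The evaluation harness would then be invoked roughly as
# (exact flags are an assumption; see the SWE-bench README):
#   python -m swebench.harness.run_evaluation \
#       --predictions_path predictions.json --run_id devika-eval
```

Scoring is then simply the fraction of instances whose tests pass after the patch is applied, which is what makes the benchmark a clean point of comparison between agents.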