[UPRISE] After rereading the paper "UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation", I have some questions. #262
Comments
Q1: How are the scores obtained through GPT-Neo-2.7B? Q2: At which stage does a prompt become positive or negative: after the scores are computed, or after encoding but before scoring?
And how is the score obtained through the prompt retriever?
You may refer to Section 3.4 to see how we get the score after tuning the prompt retriever.
Training is described in Section 3.3; you may refer to the provided code as well.
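To make the scoring step concrete: in UPRISE-style pipelines, each candidate prompt is scored by how well the frozen LM (here GPT-Neo-2.7B) predicts the gold answer when the prompt is prepended to the task input. The sketch below is illustrative only, not the repo's code: `lm_logprob` is a hypothetical stand-in for a real GPT-Neo forward pass that would sum token log-probabilities.

```python
def lm_logprob(context: str, continuation: str) -> float:
    # Hypothetical stand-in for a frozen LM such as GPT-Neo-2.7B.
    # A real implementation would run the model and sum the token
    # log-probs of `continuation` conditioned on `context`.
    # Toy heuristic here: word overlap raises the fake log-prob.
    overlap = len(set(context.split()) & set(continuation.split()))
    return -float(len(continuation.split())) + 0.5 * overlap


def score_prompt(prompt: str, task_input: str, gold_answer: str) -> float:
    """Score of a prompt = log p_LM(gold_answer | prompt + task_input)."""
    context = prompt + "\n" + task_input
    return lm_logprob(context, gold_answer)


def rank_prompts(prompts: list[str], task_input: str, gold_answer: str) -> list[str]:
    """Rank candidate prompts by the frozen-LM score, best first."""
    return sorted(
        prompts,
        key=lambda p: score_prompt(p, task_input, gold_answer),
        reverse=True,
    )
```

Under this scheme, the top-scoring prompt for a training input becomes the positive example and low-scoring ones become negatives, which connects to the next question.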
Yes, sim(x, p) is the score.
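Since sim(x, p) is the retriever score, it can be sketched as the inner product of the encoded input and encoded prompt, as in a standard bi-encoder. The `encode` function below is a toy hashing-based embedding standing in for the tuned encoders; it is an assumption for illustration, not the paper's model.

```python
import math


def encode(text: str, dim: int = 8) -> list[float]:
    # Hypothetical stand-in for the tuned prompt/input encoders
    # (BERT-style bi-encoders in retrieval papers).
    # Toy hashed bag-of-words embedding, L2-normalised.
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def sim(x: str, p: str) -> float:
    """sim(x, p): inner product of input embedding and prompt embedding."""
    ex, ep = encode(x), encode(p)
    return sum(a * b for a, b in zip(ex, ep))
```

At inference time the prompt with the highest sim(x, p) over the prompt pool is retrieved and prepended to the input.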
In the paper, the number of positive prompts is 1 and the number of negative prompts is 20, but the total number of prompts used in one training epoch is not stated.
Yes, InfoNCE does not consider prompts that are neither positive nor negative.
I am somewhat confused about the pipeline of training and inference.
We do not input the task name during training; the task name in the image is only there for ease of understanding. You may refer to the formula in Section 3.2 for details.
I viewed the file prompt_pool.json, and each dict is annotated with a different task name. So is the task name only used to group prompts for its metric score?
Q1: Is the task name only used to group prompts by metric score? Q2: Intuitively, retrieval should search over the prompts of similar tasks rather than over all prompts.