ValueError: The following model_kwargs are not used by the model: ['skip_special_tokens'] #6391
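For context: this ValueError comes from transformers' generation-kwarg validation, which rejects any keyword argument that neither the model's forward() nor the generation config recognizes. skip_special_tokens is a decode-time tokenizer argument, so forwarding it into model.generate() triggers exactly this message. A minimal sketch of the mechanism (the model name is illustrative, not taken from this issue):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM demonstrates the kwarg validation; gpt2 is just small.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="pt")

# skip_special_tokens belongs to tokenizer.decode(), not to generate().
# Recent transformers versions validate leftover kwargs and raise:
#   ValueError: The following `model_kwargs` are not used by the model:
#   ['skip_special_tokens']
model.generate(**inputs, skip_special_tokens=True)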
Comments
git pull origin main
Bro, it still doesn't work. I ran into the same problem.
Hello author, I'm running into the same problem on my end. I followed your steps but it still doesn't work. This never happened before and it suddenly appeared today. Could it be due to a version update?
Buddy, did you get it solved? Is it an incompatibility with the transformers library?
fixed
I can help you.
Hi @hiyouga @harrybattler, this is my checked-out commit:
LLaMA-Factory]$ git log --pretty=format:'%H' -n 1
ffbb4dbdb09ba799af1800c78b2e9d669bccd24b

llamafactory-cli env
Traceback (most recent call last):
File "/environments/llama_factory_uat/bin/torchrun", line 8, in <module>
sys.exit(main())
^^^^^^
File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py",
line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/run.py", line 919, in main
run(args)
File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/environments/llama_factory_uat/lib64/python3.11/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/packages/LLaMA-Factory/src/llamafactory/launcher.py FAILED
Could you reopen it, please? @hiyouga
git pull origin main resolved my issue. If it doesn't fix yours, I think you should check your transformers version, e.g. replace it with 4.43.4.
I just downgraded transformers to 4.43.4 and it didn't help |
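For anyone still hitting this before pulling the fix, a hedged workaround sketch: pop the decode-only key out of whatever generation-option dict your script builds (gen_kwargs below is hypothetical, not a LLaMA-Factory internal) and apply it at decode time instead.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="pt")

# Hypothetical options dict, as an eval script might assemble it.
gen_kwargs = {"max_new_tokens": 8, "skip_special_tokens": True}

# generate() must never see the decode-only key...
skip = gen_kwargs.pop("skip_special_tokens", True)
output_ids = model.generate(**inputs, **gen_kwargs)

# ...it belongs to the tokenizer at decode time.
print(tokenizer.batch_decode(output_ids, skip_special_tokens=skip))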
Reminder
System Info
$ llamafactory-cli env
llamafactory version: 0.9.2.dev0

Reproduction
Dear Llama-Factory team,
I created a LoRA adapter which worked out of the box for Llama 3.3, thank you very much.
Unfortunately, when I want to evaluate the model with the adapter, I get this error with my llama3_lora_predict.yaml.
Expected behavior
No response
Others
No response