==========Options============
means: [0.485, 0.456, 0.406]
stds: [0.229, 0.224, 0.225]
gpu: 2
exp_name: Totaltext
num_workers: 0
batch_size: 6
max_epoch: 250
start_epoch: 0
lr: 0.001
cuda: True
output_dir: output
input_size: 640
max_annotation: 2000
adj_num: 4
num_points: 20
use_hard: True
load_memory: False
scale: 1
grad_clip: 25
dis_threshold: 0.35
cls_threshold: 0.875
approx_factor: 0.004
know: False
knownet: MixNet_FSNet_M
know_resume: /kaggle/working/Mixnet/model/Totaltext_mid/MixNet_FSNet_M_622.pth
resume: None
mgpu: False
save_dir: ./model/
vis_dir: ./vis/
log_dir: ./logs/
loss: CrossEntropyLoss
pretrain: False
verbose: True
viz: False
lr_adjust: fix
stepvalues: []
weight_decay: 0.0
gamma: 0.1
momentum: 0.9
optim: Adam
save_freq: 5
display_freq: 10
viz_freq: 50
log_freq: 10000
val_freq: 1000
net: FSNet_M
mid: True
embed: False
onlybackbone: False
rescale: 255.0
test_size: [640, 960]
checkepoch: 1070
img_root: None
device: cuda
=============End=============
MixNet backbone parameter size: 29339968
load pretrain weight from /kaggle/working/Mixnet/pretrained/triHRnet_Synth_weight.pth.
Start training MixNet.
Epoch: 0 : LR = [0.001]
Traceback (most recent call last):
  File "/kaggle/working/Mixnet/train_mixNet.py", line 418, in <module>
    main()
  File "/kaggle/working/Mixnet/train_mixNet.py", line 399, in main
    train(model, train_loader, criterion, scheduler, optimizer, epoch)
  File "/kaggle/working/Mixnet/train_mixNet.py", line 99, in train
    input_dict = _parse_data(inputs)
  File "/kaggle/working/Mixnet/train_mixNet.py", line 74, in _parse_data
    input_dict['edge_field'] = inputs[10]
IndexError: list index out of range
I took the MixNet_FSNet_M_622.pth checkpoint for transfer learning and get this error while loading the data, even though the data has the same format as the data I use for testing.
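For anyone hitting the same trace: the IndexError means the batch tuple coming out of the dataloader has fewer than 11 elements, so `inputs[10]` does not exist. Below is a minimal diagnostic sketch, assuming only what the traceback shows (the `_parse_data` name, the `edge_field` key, and index 10); the guard and the error message are hypothetical and not MixNet's actual code.

```python
# Hypothetical sketch, not the repository's code: only the key
# 'edge_field' and the index 10 are taken from the traceback above.
def _parse_data(inputs):
    input_dict = {}
    # The training parser expects edge_field as the 11th item; a batch
    # built by a test/eval dataset yields fewer items, so guard first.
    if len(inputs) <= 10:
        raise ValueError(
            f"dataloader returned {len(inputs)} items, but training "
            "expects at least 11 (edge_field at index 10); the dataset "
            "was probably constructed in test mode, which omits "
            "ground-truth fields such as edge_field"
        )
    input_dict['edge_field'] = inputs[10]
    return input_dict
```

Since a test pipeline typically returns only images and metadata rather than the full set of ground-truth maps, "same format" at the image level is not enough; the dataset object producing the batch has to be the training variant that emits the edge_field tensor.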