anhad13/SelfTrainingAndLRP

Steps for self-training PRPN:

  1. Train multiple PRPN instances:

python -u main.py --batch 64 --save trained_models/ --alpha 1.0 --epochs 35 --PRPN --force_binarize
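A minimal sketch of running step 1 several times, assuming each instance can be given its own --save directory (the run directories and instance count below are illustrative):

# train three PRPN instances into separate save directories
for i in 1 2 3; do
  python -u main.py --batch 64 --save trained_models/run_${i}/ --alpha 1.0 --epochs 35 --PRPN --force_binarize
done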

  2. Store the outputs of the trained PRPN models (outputs are written to the same directory as model_path):

python -u main.py --force_binarize --eval_on train --eval_only --load trained_models/${model_path} --batch 1 --PRPN
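A minimal sketch of dumping outputs for every instance trained in step 1 (the model file names are placeholders):

# write train-set outputs next to each saved model
for model_path in run_1.pt run_2.pt run_3.pt; do
  python -u main.py --force_binarize --eval_on train --eval_only --load trained_models/${model_path} --batch 1 --PRPN
done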

  3. Generate training data from the PRPN model outputs:

python -u scripts/overlap.py <comma sep outputs generated in step 2>
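A minimal sketch of the invocation, assuming the step 2 outputs were written next to their models (the output file names are placeholders):

python -u scripts/overlap.py trained_models/run_1_output,trained_models/run_2_output,trained_models/run_3_output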

  4. Co-train the multi-task model from the loaded pre-trained model and the training outputs generated in step 3:

python -u main.py --batch 64 --PRPN \
  --shen --alpha 0.5 --save Fout20_12_${load_from} \
  --beta 0.5 --force_binarize --training_ratio 0.2 --load --train_from_pickle --training_method interleave
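Here ${load_from} presumably identifies the pre-trained PRPN model that co-training starts from; a minimal sketch with a placeholder file name:

# pick one of the step 1 models before running the command above
load_from=run_1.pt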

About

Self-training PRPN and low-resource parsing
