I'm curious about how PICO and dependency parsing were trained using SciBERT. For PICO, I can imagine training being set up like SQuAD, where a 'question' is one of the labels and the model outputs a span. Was this how PICO was trained, or something else?
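To make the SQuAD-style idea above concrete, here is a minimal sketch of the decoding step such a setup would use: the model emits per-token start and end scores for a given label, and the predicted span is the best-scoring (start, end) pair. This is purely illustrative of the setup I'm imagining, not SciBERT's actual code.

```python
def best_span(start_scores, end_scores, max_len=10):
    """Pick the highest-scoring (start, end) token span with start <= end."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best

# toy per-token scores over a 5-token sentence (hypothetical values)
start = [0.1, 2.0, 0.3, 0.0, 0.1]
end   = [0.0, 0.1, 1.5, 0.2, 0.0]
print(best_span(start, end))  # -> (1, 2)
```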
I'm having a trickier time coming up with a training regimen for dependency parsing.
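For dependency parsing, one common way to train on top of BERT-style features is biaffine arc scoring in the style of Dozat & Manning: score every (head, dependent) token pair, have each token pick its highest-scoring head, and train with cross-entropy over heads. I'm guessing this is roughly what the AllenNLP parser does; the sketch below is a simplified illustration, not the actual implementation (bias terms and tree decoding omitted).

```python
def arc_scores(head_reprs, dep_reprs):
    # score[d][h] = dot(dep_reprs[d], head_reprs[h]); a biaffine layer
    # would insert a learned weight matrix between the two vectors
    return [[sum(di * hi for di, hi in zip(d, h)) for h in head_reprs]
            for d in dep_reprs]

def predict_heads(scores):
    # greedy head selection per dependent (a real parser would decode
    # a well-formed tree, e.g. with the Chu-Liu/Edmonds algorithm)
    return [max(range(len(row)), key=row.__getitem__) for row in scores]

# toy 3-token sentence with hypothetical 2-dim "contextual embeddings"
heads = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
deps  = [[0.2, 0.9], [1.0, 0.1], [0.5, 0.5]]
print(predict_heads(arc_scores(heads, deps)))  # -> [2, 2, 2]
```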
I tried looking at the code; it seems that training is handed off to the AllenNLP library:
https://github.com/allenai/scibert/blob/master/scripts/train_allennlp_local.sh#L35
But I'm having a hard time figuring out where in that code the SciBERT fine-tuning happens.
For example, searching that repo for the task name 'PICO' or for 'scibert' returns no results:
https://github.com/allenai/allennlp/search?q=scibert