CIS 421/521 - NLP - Vector Semantics - 2020-12-03 - Shared screen with speaker view
So cool, I’m a host :)
Does distributional word similarity take into account different uses of a word across regions/cultures? In other words, is a “universal” context for a word even possible?
Jérémie Allard (he/him)
What's the difference in accuracy between a light and a large model?
Could you explain one more time how you built the predictive model and extracted the embedding weights?
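(Not from the demo itself, but as a rough idea of what that workflow usually looks like: the sketch below trains a tiny predictive model with an Embedding layer on a toy corpus and then reads the learned embedding matrix back out as the word vectors. The Keras setup, the toy corpus, the labels, and the 8-dimensional embedding size are all illustrative assumptions, not the instructor's actual code.)

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Toy corpus and toy prediction targets (purely illustrative)
corpus = ["the cat sat on the mat", "the dog lay on the rug"]
labels = np.array([0, 1])

tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
sequences = pad_sequences(tokenizer.texts_to_sequences(corpus), maxlen=6)
vocab_size = len(tokenizer.word_index) + 1  # +1 for the padding index

# A small predictive model: the Embedding layer learns one vector per word
model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=8),  # 8-dim embeddings (arbitrary choice)
    GlobalAveragePooling1D(),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(sequences, labels, epochs=10, verbose=0)

# The trained weights of the Embedding layer ARE the word vectors
embedding_matrix = model.layers[0].get_weights()[0]  # shape: (vocab_size, 8)
print(embedding_matrix[tokenizer.word_index["cat"]])
```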
Does the corpus used as the training source have a big impact on how well those tools perform? E.g., two different dictionaries vs. the Oxford encyclopedia?
Thanks for the demo!!