Investigating a neural language model’s replicability of psycholinguistic experiments: A case study of NPI licensing
- Keywords BERT, grammatical illusion, licensing strength, negative polarity items, neural language model, NPI licensing, psycholinguistics, scale of negativity
- Record (URI) https://www.scopus.com/record/display.uri?eid=2-s2.0-85149812180&origin=resultslist&sort=plf-f&src=s&sid=e47d8018855f114da747d1ab24420dae&sot=a&sdt=a&sl=23&s=EID%282-s2.0-85149812180%29&relpos=0&citeCnt=0&searchTerm=
- Record (URI) https://www.webofscience.com/wos/woscc/full-record/WOS:000953477100001
- Indexed in SSCI, SCOPUS
- Publisher Frontiers Media S.A.
- Publication year 2023
- URI http://www.dcollection.net/handler/ewha/000000204059
- Language English
- Published As https://doi.org/10.3389/fpsyg.2023.937656
- Copyright Ewha Womans University theses are protected by copyright.
Abstract
The recent success of deep learning neural language models such as Bidirectional Encoder Representations from Transformers (BERT) has brought innovations to computational language research. The present study explores the possibility of using a language model to investigate human language processing, based on a case study of negative polarity items (NPIs). We first conducted an experiment with BERT to examine whether the model successfully captures the hierarchical structural relationship between an NPI and its licensor, and whether it is prone to an error analogous to the grammatical illusion observed in psycholinguistic experiments (Experiment 1). We also investigated whether the language model can capture the fine-grained semantic properties of NPI licensors and discriminate their subtle differences on a scale of licensing strength (Experiment 2). The results of the two experiments suggest that, overall, the neural language model is highly sensitive to both syntactic and semantic constraints in NPI processing. The model’s processing patterns and sensitivities are very close to those of humans, suggesting the role of such models as research tools or objects in the study of language.
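The kind of probing described above can be sketched in a few lines. This is a minimal illustration using the Hugging Face `transformers` fill-mask pipeline, not the paper's actual stimuli or scoring procedure: it compares the probability BERT assigns to the NPI "ever" under a negative licensor ("no one") versus an unlicensed context ("someone"). The example sentences are hypothetical.

```python
# Minimal sketch (illustrative only): probe BERT's sensitivity to NPI
# licensing by comparing masked-token probabilities for the NPI "ever"
# in a licensed vs. an unlicensed context.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def prob_of(sentence_with_mask: str, target: str) -> float:
    """Probability BERT assigns to `target` at the [MASK] position."""
    # Restricting `targets` scores only the word we care about.
    result = fill(sentence_with_mask, targets=[target])
    return result[0]["score"]

licensed = prob_of("No one has [MASK] been to that island.", "ever")
unlicensed = prob_of("Someone has [MASK] been to that island.", "ever")

# If the model tracks NPI licensing, "ever" should be more probable
# under the negative licensor than under "someone".
print(licensed, unlicensed)
```

A larger study of this shape would aggregate such contrasts over many minimal pairs and licensor types, which is the logic behind comparing licensors on a scale of licensing strength.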