LMFingerprints: Visual Explanations of Language Model Embedding Spaces through Layerwise Contextualization Scores
dc.contributor.author | Sevastjanova, Rita | en_US |
dc.contributor.author | Kalouli, Aikaterini-Lida | en_US |
dc.contributor.author | Beck, Christin | en_US |
dc.contributor.author | Hauptmann, Hanna | en_US |
dc.contributor.author | El-Assady, Mennatallah | en_US |
dc.contributor.editor | Borgo, Rita | en_US |
dc.contributor.editor | Marai, G. Elisabeta | en_US |
dc.contributor.editor | Schreck, Tobias | en_US |
dc.date.accessioned | 2022-06-03T06:06:12Z | |
dc.date.available | 2022-06-03T06:06:12Z | |
dc.date.issued | 2022 | |
dc.description.abstract | Language models, such as BERT, construct multiple, contextualized embeddings for each word occurrence in a corpus. Understanding how the contextualization propagates through the model's layers is crucial for deciding which layers to use for a specific analysis task. Currently, most embedding spaces are explained by probing classifiers; however, some findings remain inconclusive. In this paper, we present LMFingerprints, a novel scoring-based technique for the explanation of contextualized word embeddings. We introduce two categories of scoring functions, which measure (1) the degree of contextualization, i.e., the layerwise changes in the embedding vectors, and (2) the type of contextualization, i.e., the captured context information. We integrate these scores into an interactive explanation workspace. By combining visual and verbal elements, we provide an overview of contextualization in six popular transformer-based language models. We evaluate hypotheses from the domain of computational linguistics, and our results not only confirm findings from related work but also reveal new aspects of the information captured in the embedding spaces. For instance, we show that while numbers are poorly contextualized, stopwords show unexpectedly high contextualization in the models' upper layers, where their neighborhoods shift from tokens with similar functionality to tokens that contribute to the meaning of the surrounding sentences. | en_US |
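To illustrate the abstract's notion of a "degree of contextualization" score, the following is a minimal sketch, not the paper's actual scoring functions: it measures, for each token, the cosine distance between its representations in consecutive BERT layers using the Hugging Face transformers API. The model name, example sentence, and the choice of cosine distance as the change measure are assumptions for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's exact scores): quantify how much
# each layer changes a token's embedding, as a proxy for layerwise contextualization.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any transformer with hidden states works similarly

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

sentence = "The river bank was flooded after the storm."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple of (num_layers + 1) tensors of shape [1, seq_len, hidden_dim];
# index 0 holds the non-contextual input embeddings.
hidden_states = outputs.hidden_states
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

for layer in range(1, len(hidden_states)):
    prev, curr = hidden_states[layer - 1][0], hidden_states[layer][0]
    # Cosine distance per token between consecutive layers: higher values mean
    # the layer changed (contextualized) that token's representation more.
    change = 1 - torch.nn.functional.cosine_similarity(prev, curr, dim=-1)
    scored = ", ".join(f"{t}:{c:.2f}" for t, c in zip(tokens, change))
    print(f"layer {layer:2d}: {scored}")
```

Aggregating such per-token, per-layer scores over a corpus would yield the kind of layerwise profile that the paper visualizes, though the actual score definitions and the handling of the six models are given in the paper itself.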
dc.description.number | 3 | |
dc.description.sectionheaders | Text and Music | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 41 | |
dc.identifier.doi | 10.1111/cgf.14541 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 295-307 | |
dc.identifier.pages | 13 pages | |
dc.identifier.uri | https://doi.org/10.1111/cgf.14541 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf14541 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | |
dc.subject | CCS Concepts: Human-centered computing --> Visual analytics; Information visualization | |
dc.subject | Human-centered computing | |
dc.subject | Visual analytics | |
dc.subject | Information visualization | |
dc.title | LMFingerprints: Visual Explanations of Language Model Embedding Spaces through Layerwise Contextualization Scores | en_US |