Linguistic Interpretability and Composition of Abstract Meaning Representations
Blodgett, Austin J
Many Natural Language Processing (NLP) and Natural Language Understanding (NLU) tasks require some implicit representation of meaning. Abstract Meaning Representation (AMR; Banarescu et al., 2013) aims to be a scalable way of including explicit representations of meaning, in the form of semantic graphs. This work takes on the goal of making AMR semantic graphs linguistically interpretable, thereby increasing the interpretability they add to a model. I pursue this goal through two avenues of research.

First, I improve the analyzability of AMR via a novel, structurally comprehensive, and linguistically enriched set of AMR-to-text alignments. I present this new formulation of AMR alignment, which addresses a wide variety of linguistic phenomena; a corpus of automatically generated alignments for English sentences; and a probabilistic, structure-aware alignment algorithm that produces alignments without supervision and with higher coverage, accuracy, and variety than alignments from existing AMR aligners.

Second, I improve the compositionality of AMR via an extension of Combinatory Categorial Grammar (CCG) that supports AMR semantics and the compositional derivation of AMR graphs. This formulation of AMR as graph semantics in CCG, together with the accompanying combinatory rules, allows a full AMR graph to be derived in an interpretable way. Lastly, I conduct an empirical analysis of the compatibility and structural similarity/dissimilarity of AMR with automatically generated CCG parse data, and identify linguistic sources of complexity for the benefit of future research.
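As a concrete illustration of the semantic graphs in question, the canonical AMR for "The boy wants to go" (from Banarescu et al., 2013) can be sketched as a set of triples. This is an illustrative assumption for exposition, not code from the thesis: the variable names (w, b, g) and the triple encoding are conventional but hypothetical here.

```python
# Illustrative sketch (not from the thesis): the AMR for
# "The boy wants to go" encoded as (source, role, target) triples.
# ":instance" triples map a variable to its concept.
amr_triples = [
    ("w", ":instance", "want-01"),
    ("b", ":instance", "boy"),
    ("g", ":instance", "go-02"),
    ("w", ":ARG0", "b"),   # the boy is the wanter
    ("w", ":ARG1", "g"),   # the going event is what is wanted
    ("g", ":ARG0", "b"),   # reentrancy: the boy is also the goer
]

def reentrant_variables(triples):
    """Return variables that are the target of more than one
    non-instance edge -- the reentrancies that make AMRs graphs
    rather than trees."""
    counts = {}
    for src, role, tgt in triples:
        if role != ":instance":
            counts[tgt] = counts.get(tgt, 0) + 1
    return {v for v, n in counts.items() if n > 1}

print(reentrant_variables(amr_triples))  # {'b'}
```

Reentrancies like the shared variable `b` above are one reason aligning AMR subgraphs to token spans, and deriving them compositionally, is nontrivial.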