Dataset schema (one record per example below):
  id: string (179 distinct values)
  question: string (lengths from 8.75k to 85.9k characters)
  answer: dict
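Each record pairs a long question string (a paper plus its task prompt) with an answer dict like the ones shown below. As a minimal sketch, assuming the records are available as parsed JSON with exactly these fields, the gold answer of a "disordered_section" record can be turned into the required output string and checked order-insensitively; the helper names here are illustrative, not part of the dataset:

```python
import json

def expected_output(answer: dict) -> str:
    # The "references" field in the records below holds a single string such as
    # "Abstract, Introduction" naming the two swapped sections.
    return f"Swapped sections: {answer['references'][0]}"

def is_correct(prediction: str, answer: dict) -> bool:
    # Compare prediction and gold as an unordered pair of section names.
    gold = {s.strip().lower() for s in answer["references"][0].split(",")}
    pred = prediction.replace("Swapped sections:", "").strip()
    return {s.strip().lower() for s in pred.split(",")} == gold

answer = json.loads('{"references": ["Abstract, Introduction"], "type": "disordered_section"}')
print(expected_output(answer))                                         # Swapped sections: Abstract, Introduction
print(is_correct("Swapped sections: Introduction, Abstract", answer))  # True
```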
1911.09483
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning <<<Abstract>>> In sequence to sequence learning, the self-attention mechanism proves to be highly effective, and achieves significant improvements in many tasks. However, the self-attention mechanism is not without its own flaws. Although self-attention can model extremely long dependencies, the attention in deep layers tends to overconcentrate on a single token, leading to insufficient use of local information and difficultly in representing long sequences. In this work, we explore parallel multi-scale representation learning on sequence data, striving to capture both long-range and short-range language structures. To this end, we propose the Parallel MUlti-Scale attEntion (MUSE) and MUSE-simple. MUSE-simple contains the basic idea of parallel multi-scale sequence representation learning, and it encodes the sequence in parallel, in terms of different scales with the help from self-attention, and pointwise transformation. MUSE builds on MUSE-simple and explores combining convolution and self-attention for learning sequence representations from more different scales. We focus on machine translation and the proposed approach achieves substantial performance improvements over Transformer, especially on long sequences. More importantly, we find that although conceptually simple, its success in practice requires intricate considerations, and the multi-scale attention must build on unified semantic space. Under common setting, the proposed model achieves substantial performance and outperforms all previous models on three main machine translation tasks. In addition, MUSE has potential for accelerating inference due to its parallelism. Code will be available at this https URL <<</Abstract>>> <<<Introduction>>> In recent years, Transformer has been remarkably adept at sequence learning tasks like machine translation BIBREF0, BIBREF1, text classification BIBREF2, BIBREF3, language modeling BIBREF4, BIBREF5, etc. It is solely based on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely. The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations. However, recent research BIBREF6 has shown that the Transformer has surprising shortcomings in long sequence learning, exactly because of its use of self-attention. As shown in Figure 1 (a), in the task of machine translation, the performance of Transformer drops with the increase of the source sentence length, especially for long sequences. The reason is that the attention can be over-concentrated and disperse, as shown in Figure 1 (b), and only a small number of tokens are represented by attention. It may work fine for shorter sequences, but for longer sequences, it causes insufficient representation of information and brings difficulty for the model to comprehend the source information intactly. In recent work, local attention that constrains the attention to focus on only part of the sequences BIBREF7, BIBREF8 is used to address this problem. 
However, it costs self-attention the ability to capture long-range dependencies and also does not demonstrate effectiveness in sequence to sequence learning tasks. To build a module with the inductive biases of both local and global context modelling in sequence to sequence learning, we hybridize self-attention with convolution and present parallel multi-scale attention, called MUSE. It encodes inputs into hidden representations and then applies self-attention and depth-wise separable convolution transformations in parallel. The convolution compensates for the insufficient use of local information, while the self-attention focuses on capturing long-range dependencies. Moreover, this parallel structure is highly extensible: new transformations can be easily introduced as new parallel branches, and it is also favourable to parallel computation. The main contributions are summarized as follows: We find that the attention mechanism alone suffers from dispersed weights and is not suitable for long sequence representation learning. The proposed method tries to address this problem and achieves much better performance on generating long sequences. We propose parallel multi-scale attention and explore a simple but efficient method to successfully combine convolution with self-attention in one module. MUSE outperforms all previous models with the same training data and comparable model size, with state-of-the-art BLEU scores on three main machine translation tasks. MUSE-simple introduces parallel representation learning and brings extensibility and parallelism. Experiments show that the inference speed can be increased by 31% on GPUs. <<</Introduction>>> <<<MUSE: Parallel Multi-Scale Attention>>> Like other sequence-to-sequence models, MUSE adopts an encoder-decoder framework. The encoder takes a sequence of word embeddings $(x_1, \cdots , x_n)$ as input, where $n$ is the input length. It transforms the word embeddings into a sequence of hidden representations ${z} = (z_1, \cdots , z_n)$. Given ${z}$, the decoder is responsible for generating a sequence of text $(y_1, \cdots , y_m)$ token by token. The encoder is a stack of $N$ MUSE modules. A residual mechanism and layer normalization are used to connect adjacent layers. The decoder is similar to the encoder, except that each MUSE module in the decoder not only captures features from the generated text representations but also performs attention over the output of the encoder stack through additional context attention. A residual mechanism and layer normalization are also used to connect modules and adjacent layers. The key part of the proposed model is the MUSE module, which contains three main parts: self-attention for capturing global features, depth-wise separable convolution for capturing local features, and a position-wise feed-forward network for capturing token features. The module takes the output of the $(i-1)$-th layer as input and generates the output representation by fusing the three transformations, where “Attention” refers to self-attention, “Conv” refers to dynamic convolution, and “Pointwise” refers to a position-wise feed-forward network. The following subsections describe each part in detail. We also propose MUSE-simple, a simple version of MUSE, which generates the output representation in the same way as MUSE except that it does not include the convolution operation. <<<Attention Mechanism for Global Context Representation>>> Self-attention is responsible for learning representations of global context.
For a given input sequence $X$, it first projects $X$ into three representations: key $K$, query $Q$, and value $V$. Then, it uses a self-attention mechanism to get the output representation, where $W^O$, $W^Q$, $W^K$, and $W^V$ are projection parameters. The self-attention operation $\sigma$ is the dot-product attention between key, query, and value pairs. Note that we conduct a projecting operation over the value in our self-attention mechanism, $V_1=VW^V$. <<</Attention Mechanism for Global Context Representation>>> <<<Convolution for Local Context Modeling>>> We introduce convolution operations into MUSE to capture local context. To learn contextual sequence representations in the same hidden space, we choose depth-wise convolution BIBREF9 (denoted DepthConv in the experiments) as the convolution operation because it includes two separate transformations, namely, a point-wise projecting transformation and a contextual transformation. This is because the original convolution operator is not separable, whereas DepthConv can share the same point-wise projecting transformation with the self-attention mechanism. We choose dynamic convolution BIBREF10, the best variant of DepthConv, as our implementation. Each convolution sub-module contains multiple cells with different kernel sizes, which are used for capturing features of different ranges. The output of the convolution cell with kernel size $k$ is parameterized by $W^{V}$ and $W^{out}$, where $W^{V}$ is a point-wise projecting transformation matrix and $Depth\_conv$ refers to the depth-wise convolution in the work of BIBREF10. For an input sequence $X$, the output $O$ is computed from the cell outputs, where $d$ is the hidden size. Note that the projecting operation applied to the input in our convolution mechanism, $V_2=XW^V$, is the same as the one in the self-attention mechanism. Shared projection To learn contextual sequence representations in the same hidden space, the projection in the self-attention mechanism, $V_1=VW^V$, and that in the convolution mechanism, $V_2=XW^V$, are shared, because the shared projection maps the input features into the same hidden space. If we instead conduct two independent projections, $V_1=VW_1^V$ and $V_2=XW_2^V$, where $W_1^V$ and $W_2^V$ are two parameter matrices, we call this separate projection. We will analyze the necessity of applying the shared projection instead of the separate projection. Dynamically Selected Convolution Kernels We introduce a gating mechanism to automatically select the weights of different convolution cells. <<</Convolution for Local Context Modeling>>> <<<Point-wise Feed-forward Network for Capturing Token Representations>>> To learn token-level representations, MUSE concatenates a self-attention network with a position-wise feed-forward network at each layer. Since the linear transformations are the same across different positions, the position-wise feed-forward network can be seen as a token feature extractor, where $W_1$, $b_1$, $W_2$, and $b_2$ are projection parameters. <<</Point-wise Feed-forward Network for Capturing Token Representations>>> <<</MUSE: Parallel Multi-Scale Attention>>> <<<Experiment>>> We evaluate MUSE on four machine translation tasks. This section describes the datasets, experimental settings, detailed results, and analysis. <<<Datasets>>> WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a large dataset to test our model. We use the standard split of development set and test set.
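Returning briefly to the MUSE module described above, a minimal PyTorch-style sketch may make the parallel fusion more concrete. It is an illustration under simplifying assumptions only: a single attention head instead of multi-head attention, a fixed-kernel depthwise convolution instead of dynamic convolution, and an additive fusion (the excerpt does not spell out the fusion equation); it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MUSEBlockSketch(nn.Module):
    """Parallel fusion of a global branch (self-attention), a local branch
    (depthwise convolution) and a token branch (point-wise FFN), with the
    value projection W^V shared by the attention and convolution branches."""
    def __init__(self, d_model=384, d_ff=768, kernel_size=7):  # MUSE-base sizes from the paper
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)      # shared projection
        self.w_o = nn.Linear(d_model, d_model)
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.conv_out = nn.Linear(d_model, d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                           # x: (batch, seq, d_model)
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        # Global branch: single-head scaled dot-product attention over the shared values v.
        scores = q @ k.transpose(1, 2) / (q.size(-1) ** 0.5)
        attn_out = self.w_o(F.softmax(scores, dim=-1) @ v)
        # Local branch: depthwise convolution over the same projected values v.
        conv_out = self.conv_out(self.depthwise(v.transpose(1, 2)).transpose(1, 2))
        # Token branch: position-wise feed-forward network.
        ffn_out = self.ffn(x)
        # Fuse the three scales (additive fusion assumed), then residual + layer norm.
        return self.norm(x + attn_out + conv_out + ffn_out)
```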
We use newstest2014 as the test set and use newstest2012 +newstest2013 as the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with the vocabulary size of $40K$. For medium dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset which consists of $4.5M$ sentence pairs, the BPE vocabulary size is set to $32K$. The test and validation datasets we used are the same as BIBREF0. IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE with other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with the vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens. The vocabulary size for English is $17.2K$, and the vocabulary size for the Vietnamese is $6.8K$. <<</Datasets>>> <<<Experimental Settings>>> <<<Model>>> For fair comparisons, we only compare models reported with the comparable model size and the same training data. We do not compare BIBREF12 because it is an ensemble method. We build MUSE-base and MUSE-large with the parameter size comparable to Transformer-base and Transformer-large. We adopt multi-head attention BIBREF0 as implementation of self-attention in MUSE module. The number of attention head is set to 4 for MUSE-base and 16 for MUSE-large. We also add the network architecture built by MUSE-simple in the similar way into the comparison. MUSE consists of 12 residual blocks for encoder and 12 residual blocks for decoder, the dimension is set to 384 for MUSE-base and 768 for MUSE-large. The hidden dimension of non linear transformation is set to 768 for MUSE-base and 3072 for MUSE-large. The MUSE-large is trained on 4 Titan RTX GPUs while the MUSE-base is trained on a single NVIDIA RTX 2080Ti GPU. The batch size is calculated at the token level, which is called dynamic batching BIBREF0. We adopt dynamic convolution as the variant of depth-wise separable convolution. We tune the kernel size on the validation set. For convolution with a single kernel, we use the kernel size of 7 for all layers. In case of dynamic selected kernels, the kernel size is 3 for small kernels and 15 for large kernels for all layers. <<</Model>>> <<<Training>>> The training hyper-parameters are tuned on the validation set. MUSE-large For training MUSE-large, following BIBREF13, parameters are updated every 32 steps. We train the model for $80K$ updates with a batch size of 5120 for En-Fr, and train the model for ${30K}$ updates with a batch size of 3584 for En-De. The dropout rate is set to $0.1$ for En-Fr and ${0.3}$ for En-De. We borrow the setup of optimizer from BIBREF10 and use the cosine learning rate schedule with 10000 warmup steps. The max learning rate is set to $0.001$ on En-De translation and ${0.0007}$ on En-Fr translation. For checkpoint averaging, following BIBREF10, we tune the average checkpoints for En-De translation tasks. For En-Fr translation, we do not average checkpoint but use the final single checkpoint. MUSE-base We train and test MUSE-base on two small datasets, IWSLT 2014 De-En translation and IWSLT2015 En-Vi translation. Following BIBREF0, we use Adam optimizer with a learning rate of $0.001$. 
We use the warmup mechanism and invert the learning rate decay with warmup updates of $4K$. For the De-En dataset, we train the model for $20K$ steps with a batch size of $4K$. The parameters are updated every 4 steps. The dropout rate is set to $0.4$. For the En-Vi dataset, we train the model for $10K$ steps with a batch size of $4K$. The parameters are also updated every 4 steps. The dropout rate is set to $0.3$. We save checkpoints every epoch and average the last 10 checkpoints for inference. <<</Training>>> <<<Evaluation>>> During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, 1 for the two small datasets following the default setting of BIBREF14. We do not tune beam width and length penalty but use the setting reported in BIBREF0. The BLEU metric is adopted to evaluate the model performance during evaluation. <<</Evaluation>>> <<</Experimental Settings>>> <<<Results>>> As shown in Table TABREF24, MUSE outperforms all previously models on En-De and En-Fr translation, including both state-of-the-art models of stand alone self-attention BIBREF0, BIBREF13, and convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that either self-attention or convolution alone is not enough for sequence to sequence learning. The proposed parallel multi-scale attention improves over them both on En-De and En-Fr. Compared to Evolved Transformer BIBREF19 which is constructed by NAS and also mixes convolutions of different kernel size, MUSE achieves 2.2 BLEU gains in En-Fr translation. Relative position or local attention constraints bring improvements over origin self-attention model, but parallel multi-scale outperforms them. MUSE can also scale to small model and small datasets, as depicted in Table TABREF25, MUSE-base pushes the state-of-the-art from 35.7 to 36.3 on IWSLT De-En translation dataset. It is shown in Table TABREF24 and Table TABREF25 that MUSE-simple which contains the basic idea of parallel multi-scale attention achieves state-of-the-art performance on three major machine translation datasets. <<</Results>>> <<<How do we propose effective parallel multi-scale attention?>>> In this subsection we compare MUSE and its variants on IWSLT 2015 De-En translation to answer the question. Does concatenating self-attention with convolution certainly improve the model? To bridge the gap between point-wise transformation which learns token level representations and self-attention which learns representations of global context, we introduce convolution to enhance our multi-scale attention. As we can see from the first experiment group of Table TABREF27, convolution is important in the parallel multi-scale attention. However, it is not easy to combine convolution and self-attention in one module to build better representations on sequence to sequence tasks. As shown in the first line of both second and third group of Table TABREF27, simply learning local representations by using convolution or depth-wise separable convolution in parallel with self-attention harms the performance. Furthermore, combining depth-wise separable convolution (in this work we choose its best variant dynamic convolution as implementation) is even worse than combining convolution. Why do we choose DepthConv and what is the importance of sharing Projection of DepthConv and self-attention? 
We conjecture that convolution and self-attention both learn contextual sequence representations, so they should share the point-wise transformation and perform the contextual transformation in the same hidden space. We first project the input to a hidden representation and perform a variant of depth-wise convolution and self-attention transformations in parallel. The first two experiments in the third group of Table TABREF27 validate the utility of sharing the projection in parallel multi-scale attention: the shared projection gains 1.4 BLEU points over the separate projection and brings an improvement of 0.5 BLEU points over MUSE-simple (without DepthConv). How large should the kernel be? Comparative experiments show that a kernel that is too large harms performance for both DepthConv and convolution. Since self-attention and point-wise transformations are already present, simply applying the growing kernel size schedule proposed in SliceNet BIBREF15 does not work. Thus, we propose dynamically selected kernel sizes to let the learned network decide the kernel size for each layer. <<</How do we propose effective parallel multi-scale attention?>>> <<<Further Analysis>>> <<<Parallel multi-scale attention brings time efficiency on GPUs>>> The underlying parallel structure (compared to the sequential structure in each block of Transformer) allows MUSE to be efficiently computed on GPUs. For example, we can combine small matrices into large matrices; while this does not reduce the number of actual operations, it can be better parallelized by GPUs to speed up computation. Concretely, for each MUSE module, we first concatenate $W^Q,W^K,W^V$ of self-attention and $W_1$ of the point-wise feed-forward transformation into a single encoder matrix $W^{Enc}$, and then perform transformations such as self-attention, depth-wise separable convolution, and nonlinear transformation, in parallel, to learn multi-scale representations in the hidden layer. $W^O,W_2,W^{out}$ can also be combined into a single decoder matrix $W^{Dec}$. The decoder of the sequence to sequence architecture can be implemented similarly. In Table TABREF31, we conduct comparisons to show the speed gains with the aforementioned implementation; the batch size is set to one sample per batch to simulate an online inference environment. Under these settings, where the numbers of parameters are similar for MUSE and Transformer, about a 31% increase in inference speed can be obtained. The experiments use MUSE with 6 MUSE-simple modules and Transformer with 6 base blocks. The hidden size is set to 512. Parallel multi-scale attention generates much better long sequences As demonstrated in Figure FIGREF32, MUSE generates better sequences of various lengths than self-attention, and it is remarkably adept at generating long sequences, e.g., for sequences longer than 100, MUSE is two times better. Lower layers prefer local context and higher layers prefer more contextual representations MUSE contains multiple dynamic convolution cells, whose streams are fused by a gating mechanism. The weight for each dynamic cell is a scalar. Here we analyze the weights of different dynamic convolution cells in different layers. Figure FIGREF32 shows that as the layer depth increases, the weight of dynamic convolution cells with small kernel sizes gradually decreases. This demonstrates that lower layers prefer local features while higher layers prefer global features, which is consistent with the finding in BIBREF26.
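A small sketch of the matrix-fusion speed trick described above, assuming plain dense projections and illustrative sizes rather than the authors' code: the four projections are concatenated into one encoder matrix so that a single large matrix multiplication replaces four small ones, with identical arithmetic cost but better GPU utilisation.

```python
import torch

# Illustrative sizes: hidden size d = 512 (as in the speed experiment); d_ff = 2048 is assumed.
d, d_ff, batch, seq = 512, 2048, 1, 7
x = torch.randn(batch, seq, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
w_1 = torch.randn(d, d_ff)

# Sequential formulation: four separate small matrix multiplications.
q, k, v, h = x @ w_q, x @ w_k, x @ w_v, x @ w_1

# Fused formulation: one multiplication with W_enc = [W^Q | W^K | W^V | W_1],
# followed by a split. Same number of floating-point operations, but a single
# large GEMM schedules better on a GPU, especially with batch size 1 at inference.
w_enc = torch.cat([w_q, w_k, w_v, w_1], dim=1)              # (d, 3d + d_ff)
q2, k2, v2, h2 = (x @ w_enc).split([d, d, d, d_ff], dim=-1)

assert torch.allclose(q, q2, atol=1e-4) and torch.allclose(h, h2, atol=1e-4)
```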
MUSE not only improves BLEU scores, but also generates more reasonable sentences and increases translation quality. We conduct a case study on the De-En dataset, and the cases are shown in Table TABREF34 in the Appendix. In case 1, although the baseline Transformer translates many words of the source sentence correctly, the translated sentence is not fluent at all. This indicates that Transformer does not capture the relationship between some words and their neighbors, such as “right” and “clap”. By contrast, MUSE captures them well by combining local convolution with global self-attention. In case 2, the causal adverbial clause is correctly translated by MUSE, while Transformer misses the word “why” and fails to translate it. <<</Parallel multi-scale attention brings time efficiency on GPUs>>> <<</Further Analysis>>> <<</Experiment>>> <<<Related Work>>> Sequence to sequence learning is an important task in machine learning. It involves understanding and generating sequences. Machine translation is the touchstone of sequence to sequence learning. Traditional approaches usually adopt long short-term memory networks BIBREF27, BIBREF28 to learn the representation of sequences. However, these models either are built upon auto-regressive structures requiring longer encoding time or perform worse on real-world natural language processing tasks. Recent studies explore convolutional neural networks (CNNs) BIBREF11 or self-attention BIBREF0 to support highly parallel sequence modeling that does not require an auto-regressive structure during encoding, thus bringing large efficiency improvements. They are strong at capturing local or global dependencies. There are several studies on combining self-attention and convolution. However, they do not surpass both convolutional and self-attention mechanisms. BIBREF4 propose to augment convolution with self-attention by directly concatenating them in computer vision tasks. However, as demonstrated in Table TABREF27, their method does not work for sequence to sequence learning tasks. Moreover, state-of-the-art models on question answering tasks still rely on self-attention and do not adopt the ideas in QANet BIBREF29. Both self-attention BIBREF13 and convolution BIBREF10 outperform the Evolved Transformer by nearly 2 BLEU points on En-Fr translation. It seems that learning global and local context through stacking self-attention and convolution layers does not beat either self-attention or convolution models. In contrast, the proposed parallel multi-scale attention outperforms previous convolution or self-attention based models on the main translation tasks, showing its effectiveness for sequence to sequence learning. <<</Related Work>>> <<<Conclusion and Future work>>> Although the self-attention mechanism has been prevalent in sequence modeling, we find that attention suffers from dispersed weights, especially for long sequences, resulting from insufficient use of local information. To address this problem, we present Parallel Multi-scale Attention (MUSE) and MUSE-simple. MUSE-simple introduces the idea of parallel multi-scale attention into sequence to sequence learning, and MUSE fuses self-attention, convolution, and point-wise transformation together to explicitly learn global, local and token-level sequence representations. In particular, we find from empirical results that the shared projection plays an important part in its success and is essential for our multi-scale learning.
Beyond the inspiring new state-of-the-art results on three major machine translation datasets, detailed analysis and model variants also verify the effectiveness of MUSE. For future work, the parallel structure is highly extensible and provides many opportunities to improve these models. In addition, given the success of the shared projection, we would like to explore its detailed effects on contextual representation learning. Finally, we are excited about the future of parallel multi-scale attention and plan to apply this simple but effective idea to other tasks including image and speech. <<</Conclusion and Future work>>> <<</Title>>>
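As a side note on the dynamically selected convolution kernels discussed in the excerpt above, the following hypothetical sketch shows one way a learned gate can mix convolution cells with different kernel sizes (3 and 15 in the reported experimental settings). The softmax normalisation over per-cell scalar weights is an assumption, not a detail stated in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMultiKernelConv(nn.Module):
    """Mix the outputs of depthwise-convolution cells with different kernel
    sizes using one learned scalar weight per cell (softmax gate assumed)."""
    def __init__(self, d_model=384, kernel_sizes=(3, 15)):
        super().__init__()
        self.cells = nn.ModuleList(
            nn.Conv1d(d_model, d_model, k, padding=k // 2, groups=d_model)
            for k in kernel_sizes)
        self.gate = nn.Parameter(torch.zeros(len(kernel_sizes)))  # one scalar per cell

    def forward(self, x):                     # x: (batch, seq, d_model)
        weights = F.softmax(self.gate, dim=0)
        xt = x.transpose(1, 2)                # Conv1d expects (batch, channels, seq)
        mixed = sum(w * cell(xt) for w, cell in zip(weights, self.cells))
        return mixed.transpose(1, 2)
```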
{ "references": [ "Abstract, Introduction" ], "type": "disordered_section" }
1909.05358
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset <<<Abstract>>> A significant barrier to progress in data-driven approaches to building dialog systems is the lack of high quality, goal-oriented conversational data. To help satisfy this elementary requirement, we introduce the initial release of the Taskmaster-1 dataset which includes 13,215 task-based dialogs comprising six domains. Two procedures were used to create this collection, each with unique advantages. The first involves a two-person, spoken "Wizard of Oz" (WOz) approach in which trained agents and crowdsourced workers interact to complete the task while the second is "self-dialog" in which crowdsourced workers write the entire dialog themselves. We do not restrict the workers to detailed scripts or to a small knowledge base and hence we observe that our dataset contains more realistic and diverse conversations in comparison to existing datasets. We offer several baseline models including state of the art neural seq2seq architectures with benchmark performance as well as qualitative human evaluations. Dialogs are labeled with API calls and arguments, a simple and cost effective approach which avoids the requirement of complex annotation schema. The layer of abstraction between the dialog model and the service provider API allows for a given model to interact with multiple services that provide similar functionally. Finally, the dataset will evoke interest in written vs. spoken language, discourse patterns, error handling and other linguistic phenomena related to dialog system research, development and design. <<</Abstract>>> <<<Introduction>>> Voice-based “personal assistants" such as Apple's SIRI, Microsoft's Cortana, Amazon Alexa, and the Google Assistant have finally entered the mainstream. This development is generally attributed to major breakthroughs in speech recognition and text-to-speech (TTS) technologies aided by recent progress in deep learning BIBREF0, exponential gains in compute power BIBREF1, BIBREF2, and the ubiquity of powerful mobile devices. The accuracy of machine learned speech recognizers BIBREF3 and speech synthesizers BIBREF4 are good enough to be deployed in real-world products and this progress has been driven by publicly available labeled datasets. However, conspicuously absent from this list is equal progress in machine learned conversational natural language understanding (NLU) and generation (NLG). The NLU and NLG components of dialog systems starting from the early research work BIBREF5 to the present commercially available personal assistants largely rely on rule-based systems. The NLU and NLG systems are often carefully programmed for very narrow and specific cases BIBREF6, BIBREF7. General understanding of natural spoken behaviors across multiple dialog turns, even in single task-oriented situations, is by most accounts still a long way off. In this way, most of these products are very much hand crafted, with inherent constraints on what users can say, how the system responds and the order in which the various subtasks can be completed. They are high precision but relatively low coverage. 
Not only are such systems unscalable, but they lack the flexibility to engage in truly natural conversation. Yet none of this is surprising. Natural language is heavily context dependent and often ambiguous, especially in multi-turn conversations across multiple topics. It is full of subtle discourse cues and pragmatic signals whose patterns have yet to be thoroughly understood. Enabling an automated system to hold a coherent task-based conversation with a human remains one of computer science's most complex and intriguing unsolved problems BIBREF5. In contrast to more traditional NLP efforts, interest in statistical approaches to dialog understanding and generation aided by machine learning has grown considerably in the last couple of years BIBREF8, BIBREF9, BIBREF10. However, the dearth of high quality, goal-oriented dialog data is considered a major hindrance to more significant progress in this area BIBREF9, BIBREF11. To help solve the data problem we present Taskmaster-1, a dataset consisting of 13,215 dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations. For the spoken dialogs, we created a “Wizard of Oz” (WOz) system BIBREF12 to collect two-person, spoken conversations. Crowdsourced workers playing the “user" interacted with human operators playing the “digital assistant” using a web-based interface. In this way, users were led to believe they were interacting with an automated system while it was in fact a human, allowing them to express their turns in natural ways but in the context of an automated interface. We refer to this spoken dialog type as “two-person dialogs". For the written dialogs, we engaged crowdsourced workers to write the full conversation themselves based on scenarios outlined for each task, thereby playing roles of both the user and assistant. We refer to this written dialog type as “self-dialogs". In a departure from traditional annotation techniques BIBREF10, BIBREF8, BIBREF13, dialogs are labeled with simple API calls and arguments. This technique is much easier for annotators to learn and simpler to apply. As such it is more cost effective and, in addition, the same model can be used for multiple service providers. Taskmaster-1 has richer and more diverse language than the current popular benchmark in task-oriented dialog, MultiWOZ BIBREF13. Table TABREF2 shows that Taskmaster-1 has more unique words and is more difficult for language models to fit. We also find that Taskmaster-1 is more realistic than MultiWOZ. Specifically, the two-person dialogs in Taskmaster-1 involve more real-word entities than seen in MutliWOZ since we do not restrict conversations to a small knowledge base. Beyond the corpus and the methodologies used to create it, we present several baseline models including state-of-the-art neural seq2seq architectures together with perplexity and BLEU scores. We also provide qualitative human performance evaluations for these models and find that automatic evaluation metrics correlate well with human judgments. We will publicly release our corpus containing conversations, API call and argument annotations, and also the human judgments. <<</Introduction>>> <<<Related work>>> <<<Human-machine vs. 
human-human dialog>>> BIBREF14 discuss the major features and differences among the existing offerings in an exhaustive and detailed survey of available corpora for data driven learning of dialog systems. One important distinction covered is that of human-human vs. human-machine dialog data, each having its advantages and disadvantages. Many of the existing task-based datasets have been generated from deployed dialog systems such as the Let’s Go Bus Information System BIBREF15 and the various Dialog State Tracking Challenges (DSTCs) BIBREF16. However, it is doubtful that new data-driven systems built with this type of corpus would show much improvement since they would be biased by the existing system and likely mimic its limitations BIBREF17. Since the ultimate goal is to be able to handle complex human language behaviors, it would seem that human-human conversational data is the better choice for spoken dialog system development BIBREF13. However, learning from purely human-human based corpora presents challenges of its own. In particular, human conversation has a different distribution of understanding errors and exhibits turn-taking idiosyncrasies which may not be well suited for interaction with a dialog system BIBREF17, BIBREF14. <<</Human-machine vs. human-human dialog>>> <<<The Wizard of Oz (WOz) Approach and MultiWOZ>>> The WOz framework, first introduced by BIBREF12 as a methodology for iterative design of natural language interfaces, presents a more effective approach to human-human dialog collection. In this setup, users are led to believe they are interacting with an automated assistant but in fact it is a human behind the scenes that controls the system responses. Given the human-level natural language understanding, users quickly realize they can comfortably and naturally express their intent rather than having to modify behaviors as is normally the case with a fully automated assistant. At the same time, the machine-oriented context of the interaction, i.e. the use of TTS and slower turn taking cadence, prevents the conversation from becoming fully fledged, overly complex human discourse. This creates an idealized spoken environment, revealing how users would openly and candidly express themselves with an automated assistant that provided superior natural language understanding. Perhaps the most relevant work to consider here is the recently released MultiWOZ dataset BIBREF13, since it is similar in size, content and collection methodologies. MultiWOZ has roughly 10,000 dialogs which feature several domains and topics. The dialogs are annotated with both dialog states and dialog acts. MultiWOZ is an entirely written corpus and uses crowdsourced workers for both assistant and user roles. In contrast, Taskmaster-1 has roughly 13,000 dialogs spanning six domains and annotated with API arguments. The two-person spoken dialogs in Taskmaster-1 use crowdsourcing for the user role but trained agents for the assistant role. The assistant's speech is played to the user via TTS. The remaining 7,708 conversations in Taskmaster-1 are self-dialogs, in which crowdsourced workers write the entire conversation themselves. As BIBREF18, BIBREF19 show, self dialogs are surprisingly rich in content. <<</The Wizard of Oz (WOz) Approach and MultiWOZ>>> <<</Related work>>> <<<The Taskmaster Corpus>>> <<<Overview>>> There are several key attributes that make Taskmaster-1 both unique and effective for data-driven approaches to building dialog systems and for other research. 
Spoken and written dialogs: While the spoken sources more closely reflect conversational language BIBREF20, written dialogs are significantly cheaper and easier to gather. This allows for a significant increase in the size of the corpus and in speaker diversity. Goal-oriented dialogs: All dialogs are based on one of six tasks: ordering pizza, creating auto repair appointments, setting up rides for hire, ordering movie tickets, ordering coffee drinks and making restaurant reservations. Two collection methods: The two-person dialogs and self-dialogs each have pros and cons, revealing interesting contrasts. Multiple turns: The average number of utterances per dialog is about 23 which ensures context-rich language behaviors. API-based annotation: The dataset uses a simple annotation schema providing sufficient grounding for the data while making it easy for workers to apply labels consistently. Size: The total of 13,215 dialogs in this corpus is on par with similar, recently released datasets such as MultiWOZ BIBREF13. <<</Overview>>> <<<Two-person, spoken dataset>>> In order to replicate a two-participant, automated digital assistant experience, we built a WOz platform that pairs agents playing the digital assistant with crowdsourced workers playing the user in task-based conversational scenarios. An example dialog from this dataset is given in Figure FIGREF5. <<<WOz platform and data pipeline>>> While it is beyond the scope of this work to describe the entire system in detail, there are several platform features that help illustrate how the process works. Modality: The agents playing the assistant type their input which is in turn played to the user via text-to-speech (TTS) while the crowdsourced workers playing the user speak aloud to the assistant using their laptop and microphone. We use WebRTC to establish the audio channel. This setup creates a digital assistant-like communication style. Conversation and user quality control: Once the task is completed, the agents tag each conversation as either successful or problematic depending on whether the session had technical glitches or user behavioral issues. We are also then able to root out problematic users based on this logging. Agent quality control: Agents are required to login to the system which allows us to monitor performance including the number and length of each session as well as their averages. User queuing: When there are more users trying to connect to the system than available agents, a queuing mechanism indicates their place in line and connects them automatically once they move to the front of the queue. Transcription: Once complete, the user's audio-only portion of the dialog is transcribed by a second set of workers and then merged with the assistant's typed input to create a full text version of the dialog. Finally, these conversations are checked for transcription errors and typos and then annotated, as described in Section SECREF48. <<</WOz platform and data pipeline>>> <<<Agents, workers and training>>> Both agents and crowdsourced workers are given written instructions prior to the session. Examples of each are given in Figure FIGREF6 and Figure FIGREF23. The instructions continue to be displayed on screen to the crowdsourced workers while they interact with the assistant. Instructions are modified at times (for either participant or both) to ensure broader coverage of dialog scenarios that are likely to occur in actual user-assistant interactions. 
For example, in one case users were asked to change their mind after ordering their first item and in another agents were instructed to tell users that a given item was not available. Finally, in their instructions, crowdsourced workers playing the user are told they will be engaging in conversation with “a digital assistant”. However, it is plausible that some suspect human intervention due to the advanced level of natural language understanding from the assistant side. Agents playing the assistant role were hired from a pool of dialog analysts and given two hours of training on the system interface as well as on how to handle specific scenarios such as uncooperative users and technical glitches. Uncooperative users typically involve those who either ignored agent input or who rushed through the conversation with short phrases. Technical issues involved dropped sessions (e.g. WebRTC connections failed) or cases in which the user could not hear the agent or vice-versa. In addition, weekly meetings were held with the agents to answer questions and gather feedback on their experiences. Agents typically work four hours per day with dialog types changing every hour. Crowdsourced workers playing the user are accessed using Amazon Mechanical Turk. Payment for a completed dialog session lasting roughly five to seven minutes was typically in the range of $\$1.00$ to $\$1.30$. Problematic users are detected either by the agent involved in the specific dialog or by post-session assessment and removed from future requests. <<</Agents, workers and training>>> <<</Two-person, spoken dataset>>> <<<Self-dialogs (one-person written dataset)>>> While the two-person approach to data collection creates a realistic scenario for robust, spoken dialog data collection, this technique is time consuming, complex and expensive, requiring considerable technical implementation as well as administrative procedures to train and manage agents and crowdsourced workers. In order to extend the Taskmaster dataset at minimal cost, we use an alternative self-dialog approach in which crowdsourced workers write the full dialogs themselves (i.e. interpreting the roles of both user and assistant). <<<Task scenarios and instructions>>> Targeting the same six tasks used for the two-person dialogs, we again engaged the Amazon Mechanical Turk worker pool to create self-dialogs, this time as a written exercise. In this case, users are asked to pretend they have a personal assistant who can help them take care of various tasks in real time. They are told to imagine a scenario in which they are speaking to their assistant on the phone while the assistant accesses the services for one of the given tasks. They then write down the entire conversation. Figure FIGREF34 shows a sample set of instructions. <<</Task scenarios and instructions>>> <<<Pros and cons of self-dialogs>>> The self-dialog technique renders quality data and avoids some of the challenges seen with the two-person approach. To begin, since the same person is writing both sides of the conversation, we never see misunderstandings that lead to frustration as is sometimes experienced between interlocutors in the two-person approach. In addition, all the self-dialogs follow a reasonable path even when the user is constructing conversations that include understanding errors or other types of dialog glitches such as when a particular choice is not available. 
As it turns out, crowdsourced workers are quite effective at recreating various types of interactions, both error-free and those containing various forms of linguistic repair. The sample dialog in Figure FIGREF44 shows the result of a self-dialog exercise in which workers were told to write a conversation with various ticket availability issues that is ultimately unsuccessful. Two more benefits of the self-dialog approach are its efficiency and cost effectiveness. We were able to gather thousands of dialogs in just days without transcription or trained agents, and spent roughly six times less per dialog. Despite these advantages, the self-dialog written technique cannot recreate the disfluencies and other more complex error patterns that occur in the two-person spoken dialogs which are important for model accuracy and coverage. <<</Pros and cons of self-dialogs>>> <<</Self-dialogs (one-person written dataset)>>> <<<Annotation>>> We chose a highly simplified annotation approach for Taskmaster-1 as compared to traditional, detailed strategies which require robust agreement among workers and usually include dialog state and slot information, among other possible labels. Instead we focus solely on API arguments for each type of conversation, meaning just the variables required to execute the transaction. For example, in dialogs about setting up UBER rides, we label the “to" and “from" locations along with the car type (UberX, XL, Pool, etc). For movie tickets, we label the movie name, theater, time, number of tickets, and sometimes screening type (e.g. 3D vs. standard). A complete list of labels is included with the corpus release. As discussed in Section SECREF33, to encourage diversity, at times we explicitly ask users to change their mind in the middle of the conversation, and the agents to tell the user that the requested item is not available. This results in conversations having multiple instances of the same argument type. To handle this ambiguity, in addition to the labels mentioned above, the convention of either “accept” or “reject" was added to all labels used to execute the transaction, depending on whether or not that transaction was successful. In Figure FIGREF49, both the number of people and the time variables in the assistant utterance would have the “.accept" label indicating the transaction was completed successfully. If the utterance describing a transaction does not include the variables by name, the whole sentence is marked with the dialog type. For example, a statement such as The table has been booked for you would be labeled as reservation.accept. <<</Annotation>>> <<</The Taskmaster Corpus>>> <<<Dataset Analysis>>> <<<Self-dialogs vs MultiWOZ>>> We quantitatively compare our self-dialogs (Section SECREF45) with the MultiWOZ dataset in Table TABREF2. Compared to MultiWOZ, we do not ask the users and assistants to stick to detailed scripts and do not restrict them to have conversations surrounding a small knowledge base. Table TABREF2 shows that our dataset has more unique words, and has almost twice the number of utterances per dialog than the MultiWOZ corpus. Finally, when trained with the Transformer BIBREF21 model, we observe significantly higher perplexities and lower BLEU scores for our dataset compared to MultiWOZ suggesting that our dataset conversations are difficult to model. 
Finally, Table TABREF2 also shows that our dataset contains close to 10 times more real-world named entities than MultiWOZ and thus, could potentially serve as a realistic baseline when designing goal oriented dialog systems. MultiWOZ has only 1338 unique named entities and only 4510 unique values (including date, time etc.) in their datatset. <<</Self-dialogs vs MultiWOZ>>> <<<Self-dialogs vs Two-person>>> In this section, we quantitatively compare 5k conversations each of self-dialogs (Section SECREF45) and two-person (Section SECREF31). From Table TABREF50, we find that self-dialogs exhibit higher perplexity ( almost 3 times) compared to the two-person conversations suggesting that self-dialogs are more diverse and contains more non-conventional conversational flows which is inline with the observations in Section-SECREF47. While the number of unique words are higher in the case of self-dialogs, conversations are longer in the two-person conversations. We also report metrics by training a single model on both the datasets together. <<</Self-dialogs vs Two-person>>> <<<Baseline Experiments: Response Generation>>> We evaluate various seq2seq architectures BIBREF22 on our self-dialog corpus using both automatic evaluation metrics and human judgments. Following the recent line of work on generative dialog systems BIBREF23, we treat the problem of response generation given the dialog history as a conditional language modeling problem. Specifically we want to learn a conditional probability distribution $P_{\theta }(U_{t}|U_{1:t-1})$ where $U_{t}$ is the next response given dialog history $U_{1:t-1}$. Each utterance $U_i$ itself is comprised of a sequence of words $w_{i_1}, w_{i_2} \ldots w_{i_k}$. The overall conditional probability is factorized autoregressively as $P_{\theta }$, in this work, is parameterized by a recurrent, convolution or Transformer-based seq2seq model. n-gram: We consider 3-gram and 4-gram conditional language model baseline with interpolation. We use random grid search for the best coefficients for the interpolated model. Convolution: We use the fconv architecture BIBREF24 and default hyperparameters from the fairseq BIBREF25 framework. We train the network with ADAM optimizer BIBREF26 with learning rate of 0.25 and dropout probability set to 0.2. LSTM: We consider LSTM models BIBREF27 with and without attention BIBREF28 and use the tensor2tensor BIBREF29 framework for the LSTM baselines. We use a two-layer LSTM network for both the encoder and the decoder with 128 dimensional hidden vectors. Transformer: As with LSTMs, we use the tensor2tensor framework for the Transformer model. Our Transformer BIBREF21 model uses 256 dimensions for both input embedding and hidden state, 2 layers and 4 attention heads. For both LSTMs and Transformer, we train the model with ADAM optimizer ($\beta _{1} = 0.85$, $\beta _{2} = 0.997$) and dropout probability set to 0.2. GPT-2: Apart from supervised seq2seq models, we also include results from pre-trained GPT-2 BIBREF30 containing 117M parameters. We evaluate all the models with perplexity and BLEU scores (Table TABREF55). Additionally, we perform two kinds of human evaluation - Ranking and Rating (LIKERT scale) for the top-3 performing models - Convolution, LSTM-attention and Transformer. For the ranking task, we randomly show 500 partial dialogs and generated responses of the top-3 models from the test set to three different crowdsourced workers and ask them to rank the responses based on their relevance to the dialog history. 
For the rating task, we show the model responses individually to three different crowdsourced workers and ask them to rate the responses on a 1-5 LIKERT scale based on their appropriateness to the dialog history. From Table-TABREF56, we see that inter-annotator reliability scores (Krippendorf’s Alpha) are higher for the ranking task compared to the rating task. From Table TABREF55, we see that Transformer is the best performing model on automatic evaluation metrics. It is interesting to note that there is a strong correlation between BLEU score and human ranking judgments. <<</Baseline Experiments: Response Generation>>> <<<Baseline Experiments: Argument Prediction>>> Next, we discuss a set of baseline experiments for the task of argument prediction. API arguments are annotated as spans in the dialog (Section SECREF48). We formulate this problem as mapping text conversation to a sequence of output arguments. Apart from the seq2seq Transformer baseline, we consider an additional model - an enhanced Transformer seq2seq model where the decoder can choose to copy from the input or generate from the vocabulary BIBREF31, BIBREF32. Since all the API arguments are input spans, the copy model having the correct inductive bias achieves the best performance. <<</Baseline Experiments: Argument Prediction>>> <<</Dataset Analysis>>> <<<Conclusion>>> To address the lack of quality corpora for data-driven dialog system research and development, this paper introduces Taskmaster-1, a dataset that provides richer and more diverse language as compared to current benchmarks since it is based on unrestricted, task-oriented conversations involving more real-word entities. In addition, we present two data collection methodologies, both spoken and written, that ensure both speaker diversity and conversational accuracy. Our straightforward, API-oriented annotation technique is much easier for annotators to learn and simpler to apply. We give several baseline models including state-of-the-art neural seq2seq architectures, provide qualitative human performance evaluations for these models, and find that automatic evaluation metrics correlate well with human judgments. <<</Conclusion>>> <<</Title>>>
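Relating to the response-generation baselines described in the Taskmaster-1 excerpt above, the sketch below illustrates the conditional language-modelling view $P_{\theta }(U_{t}|U_{1:t-1})$ as a tiny decoder-only Transformer over the concatenated dialog history, together with the token-level perplexity used for evaluation. This is a simplification for illustration (the paper's baselines are seq2seq models built with fairseq and tensor2tensor); the sizes follow the reported Transformer baseline (256 dimensions, 2 layers, 4 heads, dropout 0.2), but the code is not the authors'.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DialogLMSketch(nn.Module):
    """Decoder-only Transformer predicting the next token of a dialog, i.e. an
    autoregressive factorisation of P(U_t | U_1..t-1) over the token stream."""
    def __init__(self, vocab_size, d_model=256, n_heads=4, n_layers=2,
                 dropout=0.2, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)        # learned positions (an assumption)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           dropout=dropout, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                           # tokens: (batch, seq)
        seq_len = tokens.size(1)
        # Causal mask so position t only attends to positions <= t.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        pos_ids = torch.arange(seq_len, device=tokens.device)
        h = self.blocks(self.embed(tokens) + self.pos(pos_ids), mask=mask)
        return self.out(h)                               # next-token logits

def perplexity(logits, targets, pad_id=0):
    """exp of the mean token-level cross-entropy, ignoring padding."""
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1), ignore_index=pad_id)
    return float(torch.exp(loss))
```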
{ "references": [ "Conclusion, Related work" ], "type": "disordered_section" }
2004.03744
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations <<<Abstract>>> The recently proposed SNLI-VE corpus for recognising visual-textual entailment is a large, real-world dataset for fine-grained multimodal reasoning. However, the automatic way in which SNLI-VE has been assembled (via combining parts of two related datasets) gives rise to a large number of errors in the labels of this corpus. In this paper, we first present a data collection effort to correct the class with the highest error rate in SNLI-VE. Secondly, we re-evaluate an existing model on the corrected corpus, which we call SNLI-VE-2.0, and provide a quantitative comparison with its performance on the non-corrected corpus. Thirdly, we introduce e-SNLI-VE-2.0, which appends human-written natural language explanations to SNLI-VE-2.0. Finally, we train models that learn from these explanations at training time, and output such explanations at testing time. <<</Abstract>>> <<<Introduction>>> Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people. Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\sim }31\%$ errors in this class, and ${\sim }1\%$ for the contradiction and entailment classes. Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs. In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. 
Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0. Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time. <<</Introduction>>> <<<SNLI-VE-2.0>>> The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels: Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true. Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false. Neutral: if neither of the earlier two are true. The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2 and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3). However, in practice, a sensible proportion of labels are wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\sim }31\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors. Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv <<<Re-annotation details>>> In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave for future work re-annotation of the training set, which would likely lead to training better VTE models. We also chose not to re-annotate entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3). The main question that we want our dataset to answer is: “What is the relationship between the image premise and the sentence hypothesis?”. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. 
As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that it is likely that requiring an explanation at the same time as requiring a label has a positive effect on the correctness of the label, since having to justify in writing the picked label may make workers pay an increased attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least 90% previous approval rate. First, we noticed that some instances in SNLI-VE are ambiguous. We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity: mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., “people enjoy talking”, “angry people”, “sad woman”. Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43). personal taste, e.g., “the sign is ugly”. lack of consensus on terms such as “many people” or “crowded”. To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets. To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we only allow workers to submit their answers for each set of 10 instances only if the trusted pair was correctly labelled. Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41. After collecting new labels for the neutral instances in the validation and testing sets, we randomly select and annotate 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. 
However, we leave it as future work, and we proceed in this work using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE. Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class. <<</Re-annotation details>>> <<<Re-evaluation of Visual-Textual Entailment>>> Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets. <<<Model.>>> To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment”), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation was not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1. BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by FasterRCNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512-GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer perceptron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral. We use the implementation from https://github.com/claudiogreco/coling18-gte. We use the original training set from SNLI-VE. To see the impact of correcting the validation and test sets, we do the following three experiments: (1) model selection as well as testing are done on the original, uncorrected SNLI-VE; (2) model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set; (3) model selection as well as testing are done on the corrected SNLI-VE-2.0. Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64. The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy. <<</Model.>>> <<<Results.>>> The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system which was meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%. 
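To make the balanced accuracy used in these results concrete, the following minimal Python sketch (our illustration, not the authors' implementation; the label names and the toy predictions are assumptions) averages the per-class accuracies over the three classes:

import numpy as np

def balanced_accuracy(y_true, y_pred, labels=("entailment", "neutral", "contradiction")):
    # Average of the per-class accuracies (recall on each gold class).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = []
    for label in labels:
        mask = y_true == label                      # gold instances of this class
        per_class.append(float((y_pred[mask] == label).mean()))
    return float(np.mean(per_class))

# Toy usage (illustrative only, not real SNLI-VE-2.0 predictions):
gold = ["entailment", "neutral", "contradiction", "neutral"]
pred = ["entailment", "contradiction", "contradiction", "neutral"]
print(balanced_accuracy(gold, pred))  # 0.833...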
The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant. Lastly, we recall that the training set has not been re-annotated, and hence approximately 31% of image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model. <<</Results.>>> <<</Re-evaluation of Visual-Textual Entailment>>> <<</SNLI-VE-2.0>>> <<<Visual-Textual Entailment with Natural Language Explanations>>> In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time. <<<e-SNLI-VE-2.0>>> e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets. We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it brings information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% of the explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time. To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40. <<</e-SNLI-VE-2.0>>> <<<Collecting Explanations>>> As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. 
We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21. To check the success of our crowdsourcing, we manually assessed the relevance of explanations among a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k$/$n$ when $k$ required attributes were given in an explanation out of $n$. We report an 83.5% relevance of explanations from workers. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions to the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) such that models can learn to output convincing explanations for the task at hand. However, we leave this as future work, and we show in this work the results that one obtains when using the explanations from e-SNLI-VE-2.0. <<</Collecting Explanations>>> <<<VTE Models with Natural Language Explanations>>> This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation. <<<Predict and Explain>>> PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24. <<<Loss.>>> The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross entropy: $\mathcal {L} = \alpha \mathcal {L}_{label} + (1-\alpha ) \mathcal {L}_{explanation} \; \textrm {;} \; \alpha \in [0,1]$. <<</Loss.>>> <<<Model selection.>>> In this experiment, we are first interested in examining if a neural network can generate explanations at no cost for label accuracy. Therefore, only balanced accuracy on label is used for the model selection criterion. However, future work can investigate other selection criteria involving a combination between the label and explanation performances. We performed hyperparameter search on $\alpha $, considering values between 0.2 and 0.8 with a step of 0.2. We found $\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy. <<</Model selection.>>> <<</Predict and Explain>>> <<<Explain Then Predict>>> When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. 
For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32). <<</Explain Then Predict>>> <<<Qualitative Analysis of Generated Explanations>>> We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations. Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset. Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification. <<</Qualitative Analysis of Generated Explanations>>> <<</VTE Models with Natural Language Explanations>>> <<</Visual-Textual Entailment with Natural Language Explanations>>> <<<Conclusion>>> In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time, and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point for both the identification and correction of SNLI-VE, as well as in the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, SNLI-VE-2.0" ], "type": "disordered_section" }
1911.12579
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> A New Corpus for Low-Resourced Sindhi Language with Word Embeddings <<<Abstract>>> Representing words and phrases as dense vectors of real numbers which encode semantic and syntactic properties is a vital constituent in natural language processing (NLP). The success of neural network (NN) models in NLP largely relies on such dense word representations learned on large unlabeled corpora. Sindhi, a morphologically rich language spoken by a large population in Pakistan and India, lacks corpora, which play an essential role as a test-bed for generating word embeddings and developing language-independent NLP systems. In this paper, a large corpus of more than 61 million words is developed for the low-resourced Sindhi language for training neural word embeddings. The corpus is acquired from multiple web resources using web-scrappy. Due to the unavailability of open source preprocessing tools for Sindhi, the preprocessing of such a large corpus becomes a challenging problem, especially the cleaning of noisy data extracted from web resources. Therefore, a preprocessing pipeline is employed for the filtration of noisy text. Afterwards, the cleaned vocabulary is utilized for training Sindhi word embeddings with the state-of-the-art GloVe, Skip-Gram (SG), and Continuous Bag of Words (CBoW) word2vec algorithms. The intrinsic evaluation approaches of the cosine similarity matrix and WordSim-353 are employed for the evaluation of the generated Sindhi word embeddings. Moreover, we compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText) word representations. Our intrinsic evaluation results demonstrate the high quality of our generated Sindhi word embeddings using SG, CBoW, and GloVe as compared to the SdfastText word representations. <<</Abstract>>> <<<Introduction>>> Sindhi is a morphologically rich, multi-script, and multi-dialectal language. It belongs to the Indo-Aryan language family BIBREF0, with a significant cultural and historical background. Presently, it is recognized as an official language BIBREF1 in the Sindh province of Pakistan and is taught as a compulsory subject in schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the largest Indian regions of native Sindhi speakers. It is also spoken in countries other than Pakistan and India to which native Sindhi speakers have migrated, such as America, Canada, Hong Kong, Britain, Singapore, Tanzania, the Philippines, Kenya, Uganda, and South and East Africa. Sindhi has a rich morphological structure BIBREF2 due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. Persian-Arabic is the standard script of Sindhi, which was officially accepted in 1852 by the British government. However, Sindhi-Devanagari is also a popular writing system in India, written from left to right like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. 
Even though Sindhi has a great historical and literary background and is presently spoken by nearly 75 million people BIBREF1, research on SNLP only began in 2002, and it grabbed research attention after the development of the Sindhi Unicode system BIBREF3. Still, Sindhi stands among the low-resourced languages due to the scarcity of core language processing resources such as raw and annotated corpora, which can be utilized for training robust word embeddings or applying machine learning algorithms, since the development of annotated datasets requires time and human resources. Language Resources (LRs) are fundamental elements for the development of high quality NLP systems based on automatic or NN based approaches. The LRs include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. The development of such resources has received great research interest for the digitization of human languages BIBREF4. Many world languages are rich in such language processing resources, integrated in their software tools, including English BIBREF5 BIBREF6, Chinese BIBREF7 and other languages BIBREF8 BIBREF9. The Sindhi language lacks the basic computational resource BIBREF10 of a large text corpus, which can be utilized for training robust word embeddings and developing language-independent NLP applications including semantic analysis, sentiment analysis, parts-of-speech tagging, named entity recognition, machine translation BIBREF11, and multitasking BIBREF12, BIBREF13. Presently, Sindhi Persian-Arabic is frequently used for online communication, newspapers, and public institutions in Pakistan and India BIBREF1. But little work has been carried out on the development of LRs such as raw corpora BIBREF14, BIBREF15 and annotated corpora BIBREF16, BIBREF17, BIBREF1, BIBREF18. To the best of our knowledge, Sindhi lacks a large unlabelled corpus which can be utilized for generating and evaluating word embeddings for Statistical Sindhi Language Processing (SSLP). One way to break out of this loop is to learn word embeddings from unlabelled corpora, which can be utilized to bootstrap other downstream NLP tasks. Word embedding is a newer term for semantic vector space BIBREF19, distributed representations BIBREF20, and distributed semantic models. It is a language modeling approach BIBREF21 used for mapping words and phrases into $n$-dimensional dense vectors of real numbers that effectively capture the semantic and syntactic relationships with neighboring words in a geometric way BIBREF22 BIBREF23. For example, “Einstein” and “Scientist” would have greater similarity than “Einstein” and “doctor.” In this way, word embeddings realize the important linguistic concept that “a word is characterized by the company it keeps". More recently, NN based models have yielded state-of-the-art performance in multiple NLP tasks BIBREF24 BIBREF25 with the help of word embeddings. One of the advantages of such techniques is that they use unsupervised approaches for learning representations and do not require an annotated corpus, which is rare for the low-resourced Sindhi language. Such representations can be trained on large unannotated corpora, and the generated representations can then be used in NLP tasks which use a small amount of labelled data. In this paper, we address the problem of corpus construction by collecting a large corpus of more than 61 million words from multiple web resources using the web-scrappy framework. 
After the collection of the corpus, we carefully preprocessed it for the filtration of noisy text, e.g., HTML tags and vocabulary of the English language. A statistical analysis is also presented for letter and word frequencies and the identification of stop words. Finally, the corpus is utilized to generate Sindhi word embeddings using the state-of-the-art GloVe BIBREF26, SG, and CBoW BIBREF27 BIBREF20 BIBREF24 algorithms. The popular intrinsic evaluation methods BIBREF20 BIBREF28 BIBREF29 of calculating cosine similarity between word vectors and WordSim353 BIBREF30 are employed to measure the performance of the learned Sindhi word embeddings. We translated the English WordSim353 word pairs into Sindhi using a bilingual English-to-Sindhi dictionary. The intrinsic approach typically involves a pre-selected set of query terms BIBREF23 and semantically related target words, which we refer to as query words. Furthermore, we also compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText) BIBREF25 word representations. To the best of our knowledge, this is the first comprehensive work on the development of a large corpus and the generation of word embeddings along with a systematic evaluation for low-resourced Sindhi Persian-Arabic. The synopsis of our novel contributions is as follows: we present a large corpus of more than 61 million words obtained from multiple web resources and reveal a list of Sindhi stop words; we develop a text cleaning pipeline for the preprocessing of the raw corpus; we generate word embeddings using the GloVe, CBoW, and SG word2vec algorithms and evaluate and compare them using the intrinsic evaluation approaches of the cosine similarity matrix and WordSim353; and we are the first to evaluate SdfastText word representations and compare them with our proposed Sindhi word embeddings. The remaining sections of the paper are organized as follows: Section SECREF2 presents the literature survey regarding computational resources, Sindhi corpus construction, and word embedding models. Afterwards, Section SECREF3 presents the employed methodology, Section SECREF4 consists of a statistical analysis of the developed corpus, and Section SECREF5 presents the experimental setup. The intrinsic evaluation results along with a comparison are given in Section SECREF6. The discussion and future work are given in Section SECREF7, and lastly, Section SECREF8 presents the conclusion. <<</Introduction>>> <<<Related work>>> Natural language resources refer to a set of language data and descriptions BIBREF31 in machine-readable form, used for building, improving, and evaluating NLP algorithms or software. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources, integrated in software tools including NLTK for English BIBREF5, Stanford CoreNLP BIBREF6, LTP for Chinese BIBREF7, TectoMT for German, Russian, and Arabic BIBREF8, and a multilingual toolkit BIBREF9. But the Sindhi language is at an early stage of the development of such resources and software tools. Corpus construction for NLP mainly involves the important steps of acquisition, preprocessing, and tokenization. Initially, BIBREF14 discussed the morphological structure and challenges concerned with corpus development along with orthographical and morphological features in the Persian-Arabic script. 
The raw and annotated corpus BIBREF1 for Sindhi Persian-Arabic is a good supplement towards the development of resources, including raw and annotated datasets for parts-of-speech tagging, morphological analysis, transliteration between Sindhi Persian-Arabic and Sindhi-Devanagari, and a machine translation system. But the corpus is acquired only from Wikipedia dumps. A survey-based study BIBREF4 reviews all the progress made in Sindhi Natural Language Processing (SNLP) with a complete gist of adopted techniques, developed tools, and available resources, which shows that work on resource development for Sindhi needs more sophisticated efforts. The raw corpus is utilized for word segmentation BIBREF32 of Sindhi Persian-Arabic. More recently, an initiative towards the development of resources was taken BIBREF16 by open-sourcing an annotated dataset of Sindhi Persian-Arabic obtained from news and social blogs. The existing and proposed work on corpus development, word segmentation, and word embeddings is presented in Table TABREF9. The power of word embeddings in NLP was empirically demonstrated by the proposal of a neural language model BIBREF21 and multitask learning BIBREF12, but recently the usage of word embeddings in deep neural algorithms has become an integral element BIBREF33 for performance acceleration in deep NLP applications. The popular CBoW and SG BIBREF27 BIBREF20 word2vec neural architectures yielded high-quality vector representations at lower computational cost in terms of semantic and syntactic word similarity, and were later extended BIBREF33 BIBREF24 with the integration of character-level learning on large corpora. Both approaches produce state-of-the-art accuracy with fast training performance, better representations of less frequent words, and efficient representation of phrases as well. BIBREF34 proposed a NN based approach for generating morphemic-level word embeddings, which surpassed all the existing embedding models in intrinsic evaluation. A count-based GloVe model BIBREF26 also yielded state-of-the-art results in intrinsic evaluation and downstream NLP tasks. The performance of word embeddings can be measured with intrinsic BIBREF23 BIBREF29 and extrinsic BIBREF28 evaluation approaches. The intrinsic approach is used to measure the internal quality of word embeddings, such as querying nearest neighboring words and calculating the semantic or syntactic similarity between similar word pairs. A method of direct comparison for intrinsic evaluation of word embeddings measures the neighborhood of a query word in vector space. The key advantage of that method is to reduce bias and create insight for data-driven relevance judgment. An extrinsic evaluation approach is used to evaluate the performance in downstream NLP tasks, such as parts-of-speech tagging or named-entity recognition BIBREF23, but the Sindhi language lacks an annotated corpus for such type of evaluation. Moreover, extrinsic evaluation is time consuming and difficult to interpret. Therefore, we opt for the intrinsic evaluation method BIBREF28 to get a quick insight into the quality of the proposed Sindhi word embeddings by measuring the cosine distance between similar words and using the WordSim353 dataset. A study reveals that the choice of optimized hyper-parameters BIBREF35 has a greater impact on the quality of pretrained word embeddings than designing a novel algorithm. 
Therefore, we optimized the hyperparameters for generating robust Sindhi word embeddings using the CBoW, SG and GloVe models. Embedding visualization is also useful for inspecting the similarity of word clusters. Therefore, we use the t-SNE BIBREF36 dimensionality reduction algorithm together with PCA BIBREF37 for compressing high-dimensional embeddings into 2-dimensional $x$,$y$ coordinate pairs. PCA is useful to combine input features by dropping the least important features while retaining the most valuable ones. <<</Related work>>> <<<Methodology>>> This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings. <<<Task description>>> We initiate this work from scratch by collecting a large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with the state-of-the-art CBoW, SG, and GloVe algorithms. The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors and word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA for the comparison of the distance between similar words via visualization. <<</Task description>>> <<<Corpus acquisition>>> A corpus is a collection of human language text BIBREF31 built with a specific purpose. The statistical analysis of a corpus provides quantitative, reusable data and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language. In fact, realizing the necessity of a large text corpus for Sindhi, we started this research by collecting a raw corpus from multiple web resources using the web-scrappy framework: news columns of the daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from the Wichaar social blog, news from the Focus Word press blog, historical writings, novels, stories, and books from the Sindh Salamat literary websites, novels, history, and religious books from the Sindhi Adabi Board, and tweets regarding news and sports collected from Twitter. <<</Corpus acquisition>>> <<<Preprocessing>>> The preprocessing of a text corpus obtained from multiple web resources is a challenging task, and it becomes more complicated when working on a low-resourced language like Sindhi due to the lack of open-source preprocessing tools such as NLTK BIBREF5 for English. Therefore, we design a preprocessing pipeline, depicted in Figure FIGREF22, for the filtration of unwanted data and vocabulary of other languages such as English to prepare input for word embeddings. The involved preprocessing steps are described in detail below Figure FIGREF22. Moreover, we reveal a list of Sindhi stop words BIBREF38, whose construction is labor intensive and requires human judgment as well. Hence, the most frequent and least important words are classified as stop words with the help of a Sindhi linguistic expert. A partial list of Sindhi stop words is given in Table TABREF61. We use the Python programming language to design the preprocessing pipeline using regex and string functions. Input: The collected text documents were concatenated for the input in UTF-8 format. 
Replacement symbols: The punctuation marks of full stop, hyphen, apostrophe, comma, quotation, and exclamation are replaced with white space for authentic tokenization, because without replacing these symbols with white space the words were found joined with their next or previous corresponding words. Filtration of noisy data: The text acquired from web resources contains a huge amount of noisy data. Therefore, we filtered out unimportant data such as the rest of the punctuation marks, special characters, HTML tags, all types of numeric entities, and email and web addresses. Normalization: In this step, we tokenize the corpus and then normalize it to lower-case for the filtration of multiple white spaces, English vocabulary, and duplicate words. The stop words were filtered out only when preparing input for GloVe; the sub-sampling approach in CBoW and SG can discard most frequent or stop words automatically. <<</Preprocessing>>> <<<Word embedding models>>> NN based approaches have produced state-of-the-art performance in NLP with the usage of robust word embeddings generated from large unlabelled corpora. Therefore, word embeddings have become the main component for setting up new benchmarks in NLP using deep learning approaches. Most recently, the use cases of word embeddings are not limited to boosting statistical NLP applications but also extend to developing language resources, such as the automatic construction of WordNet BIBREF39 using an unsupervised approach. Word embedding can be precisely defined as the encoding of a vocabulary $V$ into an $N$-dimensional embedding space, where each word $w$ from $V$ is mapped to a vector $\overrightarrow{w}$. Word embeddings can be broadly categorized into predictive and count based methods, being generated by employing co-occurrence statistics, NN algorithms, and probabilistic models. The GloVe BIBREF26 algorithm treats each word as a single entity in the corpus and generates a vector for each word. However, CBoW and SG BIBREF27 BIBREF20, later extended BIBREF33 BIBREF24 and well known as word2vec, rely on a simple two-layered NN architecture which uses a linear activation function in the hidden layer and softmax in the output layer. The extended word2vec model treats each word as a bag of character n-grams. <<</Word embedding models>>> <<<GloVe>>> GloVe is a log-bilinear regression model BIBREF26 which combines the two methods of local context window and global matrix factorization for training word embeddings of a given vocabulary in an unsupervised way. It weights the contexts using a harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. The GloVe implementation represents each word $w \in V_{w}$ and context $c \in V_{c}$ as $D$-dimensional vectors $\overrightarrow{w}$ and $\overrightarrow{c}$ by optimizing $\overrightarrow{w} \cdot \overrightarrow{c} + b^{\overrightarrow{w}} + b^{\overrightarrow{c}} = \log \#(w,c)$, where $\#(w,c)$ denotes the co-occurrence count of $w$ and $c$, $b^{\overrightarrow{w}}$ is a row vector of size $\left|V_{w}\right|$ and $b^{\overrightarrow{c}}$ is a column vector of size $\left|V_{c}\right|$ containing the word and context biases. <<</GloVe>>> <<<Continuous bag-of-words>>> The standard CBoW is the inverse of the SG BIBREF27 model; it predicts the input word from its context. The length of the input in the CBoW model depends on the setting of the context window size, which determines the distance to the left and right of the target word. 
Hence the context is a window that contains neighboring words: given a sequence of $T$ words $w=\left\lbrace w_{1}, w_{2}, \dots , w_{T}\right\rbrace $, the objective of CBoW is to maximize the probability of predicting each word from its neighboring words, $\frac{1}{T}\sum _{t=1}^{T} \log p\left(w_{t} \mid c_{t}\right)$, where $c_{t}$ is the context of the $t^{\text{th}}$ word, for example the window $w_{t-c}, \ldots w_{t-1}, w_{t+1}, \ldots w_{t+c}$ of size $2 c$. <<</Continuous bag-of-words>>> <<<Skip gram>>> The SG model predicts the surrounding words given an input word BIBREF20, with the training objective of learning good word embeddings that efficiently predict the neighboring words. The goal of skip-gram is to maximize the average log-probability of the words $w=\left\lbrace w_{1}, w_{2}, \dots , w_{T}\right\rbrace $ across the entire training corpus, $\frac{1}{T}\sum _{t=1}^{T}\sum _{c \in c_{t}} \log p\left(w_{c} \mid w_{t}\right)$, where $c_{t}$ denotes the set of context-word indices nearby $w_{t}$ in the training corpus. <<</Skip gram>>> <<<Hyperparameters>>> <<<Sub-sampling>>> The sub-sampling BIBREF20 approach is useful to dilute most frequent or stop words; it also accelerates the learning rate and increases accuracy for learning rare word vectors. Numerous words in English, e.g., ‘the’, ‘you’, ’that’, do not carry much importance, but these words appear very frequently in the text. However, considering all the words equally would also lead to over-fitting of model parameters BIBREF24 on the frequent word embeddings and under-fitting on the rest. Therefore, it is useful to account for the imbalance between rare and repeated words. The sub-sampling technique randomly removes most frequent words using a threshold $t$, where each word $w_{i}$ with frequency $f(w_i)$ in the corpus is discarded during the training phase with probability $p(w_i)=1-\sqrt{\frac{t}{f(w_i)}}$, and $t>0$ is a parameter. <<</Sub-sampling>>> <<<Dynamic context window>>> The traditional word embedding models usually use a fixed size of context window. For instance, if the window size ws=6, then a context word six tokens away from the target word would be treated the same as an adjacent word. A weighting scheme is therefore used to assign more weight to closer words, as closer words are generally considered to be more important to the meaning of the target word. The CBoW, SG and GloVe models employ this weighting scheme. The GloVe model weights the contexts using a harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. The CBoW and SG implementations weight the contexts according to the distance from the target word relative to the ws, e.g. ws=6 will weigh its context words by $\frac{6}{6} \frac{5}{6} \frac{4}{6} \frac{3}{6} \frac{2}{6} \frac{1}{6}$. <<</Dynamic context window>>> <<<Sub-word model>>> The sub-word model BIBREF24 can learn the internal structure of words by sharing the character representations across words. In that way, the vector for each word is made of the sum of its character $n$-gram vectors. For example, the vector of the word “table” is a sum of $n$-gram vectors obtained by setting the letter $n$-gram size from $min=3$ to $max=6$ as $<ta, tab, tabl, table, table>, abl, able, able>, ble, ble>, le>$; in this way we get all sub-words of "table" with a minimum length of $minn=3$ and a maximum length of $maxn=6$. The $<$ and $>$ symbols are used to separate prefix and suffix sub-words from other character sequences. In this way, the sub-word model utilizes the principles of morphology, which improves the quality of infrequent word representations. 
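To illustrate the sub-word scheme above, here is a minimal Python sketch (our illustration under the stated n-gram bounds, not the authors' or fastText's implementation) that extracts boundary-marked character n-grams for a word:

def char_ngrams(word, minn=3, maxn=6):
    # Boundary-marked character n-grams of a word (fastText-style sub-words).
    marked = "<" + word + ">"
    grams = []
    for n in range(minn, maxn + 1):
        for i in range(len(marked) - n + 1):
            grams.append(marked[i:i + n])
    return grams

print(char_ngrams("table"))  # ['<ta', 'tab', 'abl', 'ble', 'le>', '<tab', 'tabl', ...]

The same function applies with the n-gram bounds optimized later in the paper ($minn=2$, $maxn=7$).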
In addition to the character $n$-grams, the input word $w$ itself is also included in its set of character $n$-grams, to learn the representation of each word. Given an input dictionary of $n$-grams with size $K$ and a word $w$ whose $n$-grams are $K_{w} \subset \lbrace 1, \ldots , K\rbrace $, a representation $Z_{k}$ is associated with each $n$-gram. Hence, each word is represented by the sum of its character $n$-gram representations, and the scoring function is $s(w, c)=\sum _{k \in K_{w}} Z_{k}^{\top } v_{c}$, where $v_{c}$ is the context vector. <<</Sub-word model>>> <<<Position-dependent weights>>> The position-dependent weighting approach BIBREF40 is used to avoid the direct encoding of representations for words and their positions, which can lead to over-fitting. The approach learns positional representations within contextual word representations and uses them to reweight the word embeddings. Thus, it captures good contextual representations at a lower computational cost, where each individual position $p$ in the context window is associated with a vector $d_{p}$. Afterwards, the context vector $v_{C}$ of $w_{t}$ is the average of the context words reweighted by their positional vectors, with $P$ being the set of relative positions in the context window. <<</Position-dependent weights>>> <<<Shifted point-wise mutual information>>> The use of a sparse Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41 word-context matrix in learning word representations improves results on two word similarity tasks. CBoW and SG have a hyperparameter $k$ (the number of negatives) BIBREF27 BIBREF20, which affects the value that both models try to optimize for each $(w, c)$: $PMI(w, c)-\log k$. Parameter $k$ has two functions: a better estimation of the distribution of negative examples, and acting as a prior on the probability of observing a positive example (an actual occurrence of $(w,c)$). <<</Shifted point-wise mutual information>>> <<<Deleting rare words>>> Before creating a context window, the automatic deletion of rare words also leads to a performance gain in the CBoW, SG and GloVe models, as it further increases the actual size of the context windows. <<</Deleting rare words>>> <<</Hyperparameters>>> <<<Evaluation methods>>> The intrinsic evaluation is based on semantic similarity BIBREF23 in word embeddings. The word similarity measure approach states BIBREF35 that words are similar if they appear in a similar context. We measure the word similarity of the proposed Sindhi word embeddings using the dot product method and WordSim353. <<<Cosine similarity>>> The cosine similarity between two non-zero vectors is a popular measure that calculates the cosine of the angle between them, which can be derived using the Euclidean dot product. The dot product is the multiplication of corresponding components of both vectors added together; its result is not another vector but a single scalar value. For two vectors $\overrightarrow{a}=\left(a_{1}, a_{2}, a_{3}, \dots , a_{n}\right)$ and $\overrightarrow{b}=\left({b}_{1}, {b}_{2}, {b}_{3}, \ldots , {b}_{n}\right)$, where $a_{i}$ and $b_{i}$ are the components of the vectors and $n$ is their dimension, the dot product is defined as $\overrightarrow{a} \cdot \overrightarrow{b}=\sum _{i=1}^{n} a_{i} b_{i}$. The cosine of two non-zero vectors can then be derived from the Euclidean dot product formula: the cosine similarity $\cos ({\theta })$ is represented using the dot product and magnitudes as $\cos ({\theta })=\frac{\overrightarrow{a} \cdot \overrightarrow{b}}{\Vert \overrightarrow{a}\Vert \, \Vert \overrightarrow{b}\Vert }=\frac{\sum _{i=1}^{n} a_{i} b_{i}}{\sqrt{\sum _{i=1}^{n} a_{i}^{2}} \sqrt{\sum _{i=1}^{n} b_{i}^{2}}}$. 
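As a minimal illustration of the dot product and cosine similarity defined above (our sketch, not the evaluation code used in the paper), in Python with NumPy:

import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two non-zero vectors via the Euclidean dot product.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional example: parallel vectors have cosine similarity 1.0
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))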
<<</Cosine similarity>>> <<<WordSim353>>> The WordSim353 dataset BIBREF42 is popular for the evaluation of lexical similarity and relatedness. Similarity scores were assigned by 13 to 16 human subjects according to semantic relations BIBREF30 for 353 English noun pairs. Due to the lack of annotated datasets in the Sindhi language, we translated WordSim353 using an English-to-Sindhi bilingual dictionary for the evaluation of our proposed Sindhi word embeddings and SdfastText. We use the Spearman correlation coefficient for the semantic and syntactic similarity comparison, which is used to discover the strength of linear or nonlinear relationships if there are no repeated data values. A perfect Spearman correlation of $+1$ or $-1$ indicates the strength of a link between two sets of data (word pairs) when observations are monotonically increasing or decreasing functions of each other, computed as $r_{s}=1-\frac{6 \sum _{i} d_{i}^{2}}{n\left(n^{2}-1\right)}$, where $r_s$ is the rank correlation coefficient, $n$ denotes the number of observations, and $d_i$ is the rank difference between the $i^{th}$ observations. <<</WordSim353>>> <<</Evaluation methods>>> <<</Methodology>>> <<<Statistical analysis of corpus>>> The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of the collected corpus (see Table TABREF52) with the number of sentences, words and unique tokens. <<<Letter occurrences>>> The frequency of letter occurrences in human language is not arbitrarily organized but follows some specific rules which enable us to describe some linguistic regularities. Zipf’s law BIBREF43 suggests that if the frequencies of letter or word occurrences are ranked in descending order, they follow $F_{r}=\frac{a}{r^{b}}$, where $F_{r}$ is the letter frequency of the $r$th rank and $a$ and $b$ are parameters of the input text. The comparative letter frequency in the corpus is the total number of occurrences of a letter divided by the total number of letters present in the corpus. The letter frequencies in our developed corpus are depicted in Figure FIGREF55; the corpus contains 187,620,276 characters in total. The Sindhi Persian-Arabic alphabet consists of 52 letters, but 59 letters are detected in the vocabulary; the additional seven letters are modified uni-grams and standalone honorific symbols. <<</Letter occurrences>>> <<<Letter n-grams frequency>>> We denote the combination of letter occurrences in a word as n-grams, where each letter is a gram in a word. The letter n-gram frequency is carefully analyzed in order to find the length of words, which is essential for developing NLP systems, including the learning of word embeddings, such as choosing the minimum or maximum length of sub-words for character-level representation learning BIBREF24. We calculate the letter n-grams in words along with their percentages in the developed corpus (see Table TABREF57). The bi-gram words are most frequent, mostly consisting of stop words, and secondly, 4-gram words have a higher frequency. <<</Letter n-grams frequency>>> <<<Word Frequencies>>> The word frequency count is an observation of word occurrences in the text. The commonly used words are considered to have higher frequency, such as the word “the" in English. Similarly, the frequency of rarely used words is lower. Such frequencies can be calculated at the character or word level. We calculate word frequencies by counting the occurrences of a word $w$ in the corpus $c$ as $f(w)=\sum _{k \in c} \mathbb {1}\lbrace k=w\rbrace $, i.e., the frequency of $w$ is the sum over every occurrence $k$ of $w$ in $c$. 
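The word-frequency computation above, and the frequency-based selection of stop-word candidates described in the next subsection, can be sketched in Python with collections.Counter (our illustration; the toy token list and the top-k cut-off are assumptions, and the real corpus is Sindhi text read from the preprocessed files):

from collections import Counter

def word_frequencies(tokens):
    # Count the occurrences of each word in a tokenized corpus.
    return Counter(tokens)

def frequent_word_candidates(freqs, top_k=500):
    # Top-k most frequent words, to be reviewed by a linguistic expert as stop-word candidates.
    return [word for word, _ in freqs.most_common(top_k)]

# Toy usage on an English token list (illustrative only):
tokens = "the corpus of the sindhi language and the corpus statistics".split()
freqs = word_frequencies(tokens)
print(freqs.most_common(3))                 # [('the', 3), ('corpus', 2), ('of', 1)]
print(frequent_word_candidates(freqs, top_k=3))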
<<</Word Frequencies>>> <<<Stop words>>> The most frequent and least important words in NLP are often classified as stop words. The removal of such words can boost the performance of NLP models BIBREF38, such as sentiment analysis and text classification. But the construction of such a word list is time consuming and requires human decisions. Firstly, we determined Sindhi stop words by counting their term frequencies using Eq. DISPLAY_FORM59, and secondly, by analysing their grammatical status with the help of a Sindhi linguistic expert, because not all frequent words are stop words (see Figure FIGREF62). After determining the importance of such words with the help of human judgment, we placed them in the list of stop words. The total number of detected stop words in our developed corpus is 340. A partial list of the most frequent Sindhi stop words is depicted in Table TABREF61 along with their frequencies. The filtration of stop words is an essential preprocessing step for learning GloVe BIBREF26 word embeddings; therefore, we filtered out stop words when preparing the input for the GloVe model. However, the sub-sampling approach BIBREF33 BIBREF24 is used to discard such most frequent words in the CBoW and SG models. <<</Stop words>>> <<</Statistical analysis of corpus>>> <<<Experiments and results>>> Hyperparameter optimization BIBREF23 is more important than designing a novel algorithm. We carefully choose to optimize the dictionary and algorithm-based parameters of the CBoW, SG and GloVe algorithms. Hence, we conducted a large number of experiments for training and evaluation until the optimization of the most suitable hyperparameters, depicted in Table TABREF64 and discussed in Section SECREF63. The choice of optimized hyperparameters is based on the high cosine similarity score in retrieving nearest neighboring words, the semantic and syntactic similarity between word pairs, WordSim353, and the visualization of the distance between the twenty nearest neighbours using t-SNE. All the experiments are conducted on a GTX 1080-TITAN GPU. <<<Hyperparameter optimization>>> The state-of-the-art SG, CBoW BIBREF27 BIBREF33 BIBREF20 BIBREF24 and GloVe BIBREF26 word embedding algorithms are evaluated by parameter tuning for the development of Sindhi word embeddings. These parameters can be categorized into dictionary-based and algorithm-based, respectively. The integration of character n-grams in learning word representations is an ideal method, especially for morphologically rich languages, because this approach has the ability to compute representations for rare and misspelled words. Sindhi is also a morphologically rich language; therefore, more robust embeddings became possible to train with the hyperparameter optimization of the SG, CBoW and GloVe algorithms. We tuned and evaluated the hyperparameters of the three algorithms individually, as discussed in the following: Number of Epochs: Generally, more epochs on the corpus often produce better results, but more epochs take longer training time. Therefore, we evaluate 10, 20, 30 and 40 epochs for each word embedding model, and 40 epochs consistently produce good results. Learning rate (lr): We tried lr values of $0.05$, $0.1$, and $0.25$; the optimal lr $(0.25)$ gives the best results for training all the embedding models. Dimensions ($D$): We evaluate and compare the quality of $100-D$, $200-D$, and $300-D$ embeddings using WordSim353 on different $ws$, and the optimal $300-D$ embeddings are evaluated with the cosine similarity matrix for querying nearest neighboring words and calculating the similarity between word pairs. 
The embedding dimensions have little effect on the quality of the intrinsic evaluation process. However, the selection of embedding dimensions might have more impact on the accuracy in certain downstream NLP applications. Lower embedding dimensions are faster to train and evaluate. Character n-grams: The selection of the minimum (minn) and maximum (maxn) length of character $n$-grams is an important parameter for learning character-level representations of words in the CBoW and SG models. Therefore, n-grams from $3-9$ were tested to analyse the impact on the accuracy of the embeddings. We optimized the length of character n-grams to $minn=2$ and $maxn=7$ by keeping in view the word frequencies depicted in Table TABREF57. Window size (ws): A large ws means considering more context words, and similarly a small ws limits the number of context words. By changing the size of the dynamic context window, we tried ws of 3, 5, and 7; the optimal ws=7 yields consistently better performance. Negative Sampling (NS): More negative examples yield better results, but more negatives take longer training time. We tried 10, 20, and 30 negative examples for CBoW and SG. The best setting of 20 negative examples for CBoW and SG yields significantly better performance at a reasonable training time. Minimum word count (minw): We evaluated the range of minimum word counts from 1 to 8 and observed that the size of the input vocabulary decreases at a large scale by ignoring more words, and similarly the vocabulary size increases by considering rare words. Therefore, ignoring words with a frequency of less than 4 in CBoW, SG, and GloVe consistently yields better results, with a vocabulary of 200,000 words. Loss function (ls): We use hierarchical softmax (hs) for CBoW, negative sampling (ns) for SG, and the default loss function for GloVe BIBREF26. The recommended verbosity level, number of buckets, sampling threshold, and number of threads are used for training CBoW, SG BIBREF24, and GloVe BIBREF26. <<</Hyperparameter optimization>>> <<</Experiments and results>>> <<<Word similarity comparison of Word Embeddings>>> <<<Nearest neighboring words>>> The cosine similarity matrix BIBREF35 is a popular approach to compute the relationship between word vectors and their distinct relevance to a query word. Words with a similar context get a high cosine similarity and geometrical relatedness in terms of Euclidean distance, which is a common and primary method to measure the distance between a set of words and their nearest neighbors. For each query word, the most similar top eight nearest neighboring words are determined by the highest cosine similarity score using Eq. DISPLAY_FORM48. We present the English translation of both query and retrieved words and also discuss their English meanings for ease of relevance judgment between the query and retrieved words. To take a closer look at the semantic and syntactic relationships captured in the proposed word embeddings, Table TABREF74 shows the top eight nearest neighboring words of five different query words Friday, Spring, Cricket, Red, and Scientist taken from the vocabulary. The first query word Friday returns the names of days Saturday, Sunday, Monday, Tuesday, Wednesday, and Thursday in an unordered sequence. The SdfastText model returns five names of days: Sunday, Thursday, Monday, Tuesday and Wednesday. The GloVe model also returns five names of days. 
However, CBoW and SG give six names of days (all except Wednesday) along with different written forms of the query word Friday in the Sindhi language, which shows that CBoW and SG return more relevant words compared to SdfastText and GloVe. CBoW returned Add and GloVe returned Honorary, words which are only slightly similar to the query word, but SdfastText returned two irrelevant words: Kameeso (N), which is a name (N) of a person in Sindhi, and Phrase, which is a combination of three Sindhi words that are not tokenized properly. Similarly, the nearest neighbors of the second query word Spring are retrieved accurately as names and seasons, semantically related to the query word Spring, by CBoW, SG and GloVe, but SdfastText returned four irrelevant words out of eight: Dilbahar (N), Phrase, Ashbahar (N) and Farzana (N). The third query word is Cricket, the name of a popular game. The first retrieved word in CBoW is Kabadi (N), which is a popular national game in Pakistan. Including Kabadi (N), all the words returned by CBoW, SG and GloVe are related to the Cricket game or are names of other games. But the first word retrieved by SdfastText contains a punctuation mark, Gone.Cricket, i.e., two words joined with a punctuation mark (.), which shows a tokenization error in the preprocessing step; the sixth retrieved word Misspelled is a combination of three words not related to the query word, and Played and Being played are also irrelevant and stop words. Moreover, the fourth query word Red gave results that contain names closely related to the query word and different forms of the query word written in the Sindhi language. The last word Unknown returned by SdfastText is irrelevant and not found in the Sindhi dictionary for translation. The last query word Scientist also retrieves semantically related words with CBoW, SG, and GloVe, but the first word given by SdfastText belongs to the Urdu language, which means that the vocabulary may also contain words of other languages. Another unknown word returned by SdfastText does not have any meaning in the Sindhi dictionary. More interesting observations in the presented results are the diacritized words retrieved from our proposed word embeddings and the authentic tokenization in the preprocessing step presented in Figure FIGREF22. However, SdfastText returned tri-gram Phrase words for the query words Friday and Spring, and a Misspelled word for the Cricket and Scientist query words. Hence, the overall performance of our proposed SG, CBoW, and GloVe embeddings demonstrates high semantic relatedness in retrieving the top eight nearest neighbor words. <<</Nearest neighboring words>>> <<<Word pair relationship>>> Generally, closer words are considered more important to a word’s meaning. Word embedding models have the ability to capture the lexical relations between words. Identifying such relationships that connect words is important in NLP applications. We measure that semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. A high cosine similarity score denotes closer words in the embedding matrix, while a lower cosine similarity score means a higher distance between word pairs. We present the cosine similarity scores of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with their English translations, which shows the average similarity of 0.632, 0.650, and 0.591 yielded by CBoW, SG and GloVe respectively. The SG model achieved a high average similarity score of 0.650, followed by CBoW with a 0.632 average similarity score. 
GloVe also achieved a considerable average score of 0.591. However, the average similarity score of SdfastText is only 0.388, and the word pair Microsoft-Bill Gates is not available in the SdfastText vocabulary. This shows that, along with its lower performance, the vocabulary of SdfastText is also limited compared to our proposed word embeddings. Moreover, the average semantic relatedness score between countries and their capitals is shown in Table TABREF78 with English translation, where SG again yields the best average score of 0.663, followed by CBoW with a 0.611 similarity score. GloVe also yields good semantic relatedness of 0.576, while SdfastText yields an average score of 0.391. The first query pair China-Beijing is not available in the vocabulary of SdfastText. However, the similarity score for Afghanistan-Kabul is lower in our proposed CBoW, SG and GloVe models because the word Kabul is the name of the capital of Afghanistan but also frequently appears as an adjective in Sindhi text, where it means able. <<</Word pair relationship>>> <<<Comparison with WordSim353>>> We evaluate the performance of our proposed word embeddings using the WordSim353 dataset by translating its English word pairs into Sindhi. Due to vocabulary differences between English and Sindhi, we were unable to find authentic translations for six terms, so we left them untranslated; our final Sindhi WordSim353 therefore consists of 347 word pairs. Table TABREF80 shows the Spearman correlation results, computed using Eq. DISPLAY_FORM51, for embeddings of different dimensionalities on the translated WordSim353. Table TABREF80 presents complete results for the different window sizes (ws) of CBoW, SG and GloVe, in which ws=7 yields better performance than ws=3 and ws=5, respectively. The SG model outperforms CBoW and GloVe in semantic and syntactic similarity by achieving a correlation of 0.629 with ws=7. In comparison, for English, BIBREF27 achieved average semantic and syntactic similarities of 0.637 and 0.656 with CBoW and SG, respectively. Therefore, despite the challenges of translating from English to Sindhi, our proposed Sindhi word embeddings have efficiently captured the semantic and syntactic relationships. <<</Comparison with WordSim353>>> <<<Visualization>>> We use the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF36 dimensionality reduction algorithm with PCA BIBREF37 for exploratory analysis of the embeddings in a 2-dimensional map. t-SNE is a non-linear dimensionality reduction algorithm for the visualization of high-dimensional datasets. It computes the probability of similar word clusters in the high-dimensional space and the corresponding probability of similar points in the low-dimensional space. The purpose of using t-SNE for the visualization of word embeddings is to keep similar words close together in 2-dimensional $x,y$ coordinate pairs while maximizing the distance between dissimilar words. t-SNE has a tunable perplexity (PPL) parameter used to balance attention between the local and global structure of the data. We visualize the embeddings using PPL=20 and 5000 iterations on the 300-D models. We use the same query words (see Table TABREF74) and retrieve the top 20 nearest neighboring word clusters for a better understanding of the distance between similar words. Every query word has a distinct color for clear visualization of its group of similar words. Closer word clusters indicate high similarity between the query and the retrieved word clusters.
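The visualization setup just described can be approximated with scikit-learn and matplotlib. The sketch below is illustrative only: the model file and query words are placeholders, and it is not the authors' exact pipeline.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("sindhi_sg.vec")  # hypothetical 300-D model
queries = ["Friday", "Spring", "Cricket", "Red", "Scientist"]  # placeholders for the Sindhi query words

words, labels = [], []
for q in queries:
    cluster = [q] + [w for w, _ in kv.most_similar(q, topn=20)]  # query word plus its top-20 neighbours
    words.extend(cluster)
    labels.extend([q] * len(cluster))

X = np.stack([kv[w] for w in words])
# PCA initialisation, perplexity 20 and 5000 iterations, mirroring the settings described above
# (newer scikit-learn releases name the iteration argument max_iter instead of n_iter)
emb2d = TSNE(n_components=2, perplexity=20, init="pca", n_iter=5000, random_state=0).fit_transform(X)

for q in queries:  # one colour per query word, so each cluster is visually distinct
    idx = [i for i, lab in enumerate(labels) if lab == q]
    plt.scatter(emb2d[idx, 0], emb2d[idx, 1], label=q, s=12)
plt.legend()
plt.savefig("tsne_clusters.png")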
The word clusters in SG (see Fig. FIGREF83) are closer to their groups of semantically related words. Secondly, the CBoW model depicted in Fig. FIGREF82 and GloVe in Fig. FIGREF84 also show better cluster formation of words than SdfastText in Fig. FIGREF85, respectively. <<</Visualization>>> <<</Word similarity comparison of Word Embeddings>>> <<<Discussion and future work>>> In this information age, the existence of LRs plays a vital role in the digital survival of natural languages, because NLP tools are used to process a flow of unstructured data from disparate sources. It is imperative to mention that, at present, Sindhi Persian-Arabic is frequently used in online communication, newspapers and public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. However, little work has been carried out on resource development, which is not sufficient for designing language-independent or machine learning algorithms. The present work is a first comprehensive initiative on resource development along with its evaluation for statistical Sindhi language processing. More recently, NN based approaches have produced state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from large unlabelled corpora. Such word embeddings have also motivated work on low-resourced languages. Our work mainly consists of novel contributions to resource development along with a comprehensive evaluation for the utilization of NN based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using the SG, CBoW and GloVe models. The intrinsic evaluation along with comparative results demonstrates that the proposed Sindhi word embeddings have captured the semantic information more accurately than the recently released SdfastText word vectors. SG yields the best results in nearest neighbors, word pair relationship and semantic similarity. The performance of CBoW is also close to SG in all the evaluation metrics. GloVe also yields good word representations; however, the SG and CBoW models surpass the GloVe model in all evaluation metrics. Hyperparameter optimization is as important as designing a new algorithm. The choice of optimal parameters is a key aspect of the performance gain in learning robust word embeddings. Moreover, we observed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. From an algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall the window size, learning rate and number of epochs are the core parameters that largely influence the performance of word embedding models. Ultimately, the new corpus of the low-resourced Sindhi language, the list of stop words and the pretrained word embeddings, along with the empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as part-of-speech tagging and named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks, and an extrinsic evaluation approach will be employed for the performance analysis of the proposed word embeddings. Moreover, we will also utilize the corpus with the Bidirectional Encoder Representations from Transformers (BERT) BIBREF13 model for learning deep contextualized Sindhi word representations.
Furthermore, the generated word embeddings will be utilized for the automatic construction of a Sindhi WordNet. <<</Discussion and future work>>> <<<Conclusion>>> In this paper, we present three novel contributions. First, we develop a large corpus containing a vocabulary of more than 61 million tokens and 908,456 unique words. Secondly, a list of Sindhi stop words is constructed by identifying their high frequency and low importance with the help of a Sindhi linguistic expert. Thirdly, unsupervised Sindhi word embeddings are generated using the state-of-the-art CBoW, SG and GloVe algorithms and evaluated using the popular intrinsic evaluation approaches of the cosine similarity matrix and WordSim353, for the first time in Sindhi language processing. We translate the English WordSim353 using an English-Sindhi bilingual dictionary, which will also be a good resource for the evaluation of Sindhi word embeddings. Moreover, the proposed word embeddings are compared with the recently released SdfastText word representations. Our empirical results demonstrate that our proposed Sindhi word embeddings capture high semantic relatedness in nearest neighboring words, word pair relationships, country-capital pairs and WordSim353. The SG model yields the best performance, followed by CBoW and then GloVe. However, the performance of GloVe is lower on the same vocabulary because of the character-level learning of word representations and the sub-sampling approaches in SG and CBoW. Our proposed Sindhi word embeddings surpass SdfastText on the intrinsic evaluation metrics. Also, the vocabulary of SdfastText is limited because it was trained on a small Wikipedia corpus of Sindhi Persian-Arabic. We will further investigate the extrinsic performance of the proposed word embeddings on a Sindhi text classification task in the future. The proposed resources, along with the systematic evaluation, will be a useful addition to the computational resources for statistical Sindhi language processing. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Experiments and results, Conclusion" ], "type": "disordered_section" }
2004.02929
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> An Annotated Corpus of Emerging Anglicisms in Spanish Newspaper Headlines <<<Abstract>>> The extraction of anglicisms (lexical borrowings from English) is relevant both for lexicographic purposes and for NLP downstream tasks. We introduce a corpus of European Spanish newspaper headlines annotated with anglicisms and a baseline model for anglicism extraction. In this paper we present: (1) a corpus of 21,570 newspaper headlines written in European Spanish annotated with emergent anglicisms and (2) a conditional random field baseline model with handcrafted features for anglicism extraction. We present the newspaper headlines corpus, describe the annotation tagset and guidelines and introduce a CRF model that can serve as baseline for the task of detecting anglicisms. The presented work is a first step towards the creation of an anglicism extractor for Spanish newswire. <<</Abstract>>> <<<Introduction>>> The study of English influence in the Spanish language has been a hot topic in Hispanic linguistics for decades, particularly concerning lexical borrowing or anglicisms BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Lexical borrowing is a phenomenon that affects all languages and constitutes a productive mechanism for word-formation, especially in the press. chesleypaulapredicting2010 estimated that a reader of French newspapers encountered a new lexical borrowing for every 1,000 words. In Chilean newspapers, lexical borrowings account for approximately 30% of neologisms, 80% of those corresponding to English loanwords BIBREF7. Detecting lexical borrowings is relevant both for lexicographic purposes and for NLP downstream tasks BIBREF8, BIBREF9. However, strategies to track and register lexical borrowings have traditionally relied on manual review of corpora. In this paper we present: (1) a corpus of newspaper headlines in European Spanish annotated with emerging anglicisms and (2) a CRF baseline model for anglicism automatic extraction in Spanish newswire. <<</Introduction>>> <<<Related Work>>> Corpus-based studies of English borrowings in Spanish media have traditionally relied on manual evaluation of either previously compiled general corpora such as CREA BIBREF10, BIBREF11, BIBREF12, BIBREF13, either new tailor-made corpora designed to analyze specific genres, varieties or phenomena BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. 
andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a Machine Learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism candidates from a corpus of tweets in US Spanish. Work within the code-switching community has also dealt with language identification on multilingual corpora. Due to the nature of code-switching, these models have primarily focused on oral corpora and social media datasets BIBREF22, BIBREF23, BIBREF24. In the last shared task of language identification in code-switched data BIBREF23, approaches to English-Spanish included CRF models BIBREF25, BIBREF26, BIBREF27, BIBREF28, logistic regression BIBREF29 and LSTM models BIBREF30, BIBREF31. The scope and nature of lexical borrowing is, however, somewhat different to that of code-switching. In fact, applying code-switching models to lexical borrowing detection has previously proved to be unsuccessful, as they tend to overestimate the number of anglicisms BIBREF32. In the next section we address the differences between both phenomena and set the scope of this project. <<</Related Work>>> <<<Anglicism: Scope of the Phenomenon>>> Linguistic borrowing can be defined as the transference of linguistic elements between two languages. Borrowing and code-switching have frequently been described as a continuum BIBREF33, with a fuzzy frontier between the two. As a result, a precise definition of what borrowing is remains elusive BIBREF34 and some authors prefer to talk about code-mixing in general BIBREF35 or “lone other-language incorporations” BIBREF36. Lexical borrowing in particular involves the incorporation of single lexical units from one language into another language and is usually accompanied by morphological and phonological modification to conform with the patterns of the recipient language BIBREF37, BIBREF38. By definition, code-switches are not integrated into a recipient language, unlike established loanwords BIBREF39. While code-switches are usually fluent multiword interferences that normally comply with grammatical restrictions in both languages and that are produced by bilingual speakers in bilingual discourses, lexical borrowings are words used by monolingual individuals that eventually become lexicalized and assimilated as part of the recipient language lexicon until the knowledge of “foreign” origin disappears BIBREF40. In terms of approaching the problem, automatic code-switching identification has been framed as a sequence modeling problem where every token receives a language ID label (as in a POS-tagging task).
Borrowing detection, on the other hand, while it can also be transformed into a sequence labeling problem, is an extraction task, where only certain spans of texts will be labeled (in the fashion of a NER task). Various typologies have been proposed that aim to classify borrowings according to different criteria, both with a cross-linguistic perspective and also specifically aimed to characterize English inclusions in Spanish BIBREF34, BIBREF41, BIBREF42, BIBREF5. In this work, we will be focusing on unassimilated lexical borrowings (sometimes called foreignisms), i.e. words from English origin that are introduced into Spanish without any morphological or orthographic adaptation. <<</Anglicism: Scope of the Phenomenon>>> <<<Corpus description and annotation>>> <<<Corpus description>>> In this subsection we describe the characteristics of the corpus. We first introduce the main corpus, with the usual train/development/test split that was used to train, tune and evaluate the model. We then present an additional test set that was designed to assess the performance of the model on more naturalistic data. <<<Main Corpus>>> The main corpus consists of a collection of monolingual newspaper headlines written in European Spanish. The corpus contains 16,553 headlines, which amounts to 244,114 tokens. Out of those 16,553 headlines, 1,109 contain at least one anglicism. The total number of anglicisms is 1,176 (most of them are a single word, although some of them were multiword expressions). The corpus was divided into training, development and test set. The proportions of headlines, tokens and anglicisms in each corpus split can be found in Table TABREF6. The headlines in this corpus come from the Spanish newspaper eldiario.es, a progressive online newspaper based in Spain. eldiario.es is one of the main national newspapers from Spain and, to the best of our knowledge, the only one that publishes its content under a Creative Commons license, which made it ideal for making the corpus publicly available. The headlines were extracted from the newspaper website through web scraping and range from September 2012 to January 2020. Only the following sections were included: economy, technology, lifestyle, music, TV and opinion. These sections were chosen as they were the most likely to contain anglicisms. The proportion of headlines with anglicisms per section can be found in Table TABREF7. Using headlines (instead of full articles) was beneficial for several reasons. First of all, annotating a headline is faster and easier than annotating a full article; this helps ensure that a wider variety of topics will be covered in the corpus. Secondly, anglicisms are abundant in headlines, because they are frequently used as a way of calling the attention of the reader BIBREF43. Finally, borrowings that make it to the headline are likely to be particularly salient or relevant, and therefore are good candidates for being extracted and tracked. <<</Main Corpus>>> <<<Supplemental Test Set>>> In addition to the usual train/development/test split we have just presented, a supplemental test set of 5,017 headlines was collected. The headlines included in this additional test set also belong to eldiario.es. These headlines were retrieved daily through RSS during February 2020 and included all sections from the newspaper. The headlines in the supplemental corpus therefore do not overlap in time with the main corpus and include more sections. 
The number of headlines, tokens and anglicisms in the supplemental test set can be found in Table TABREF6. The motivation behind this supplemental test set is to assess the model performance on more naturalistic data, as the headlines in the supplemental corpus (1) belong to the future of the main corpus and (2) come from a less borrowing-dense sample. This supplemental test set better mimics the real scenario that an actual anglicism extractor would face and can be used to assess how well the model generalizes to detect anglicisms in any section of the daily news, which is ultimately the aim of this project. <<</Supplemental Test Set>>> <<</Corpus description>>> <<<Annotation guidelines>>> The term anglicism covers a wide range of linguistic phenomena. Following the typology proposed by gomez1997towards, we focused on direct, unadapted, emerging Anglicisms, i.e. lexical borrowings from the English language into Spanish that have recently been imported and that have still not been assimilated into Spanish. Other phenomena such as semantic calques, syntactic anglicisms, acronyms and proper names were considered beyond the scope of this annotation project. Lexical borrowings can be adapted (the spelling of the word is modified to comply with the phonological and orthographic patterns of the recipient language) or unadapted (the word preserves its original spelling). For this annotation task, adapted borrowings were ignored and only unadapted borrowings were annotated. Therefore, Spanish adaptations of anglicisms like fútbol (from football), mitin (from meeting) and such were not annotated as borrowings. Similarly, words derived from foreign lexemes that do not comply with Spanish orthotactics but that have been morphologically derived following the Spanish paradigm (hacktivista, hackear, shakespeariano) were not annotated either. However, pseudo-anglicisms (words that are formed as if they were English, but do not exist in English, such as footing or balconing) were annotated. Words that were not adapted but whose original spelling complies with graphophonological rules of Spanish (and are therefore unlikely to be ever adapted, such as web, internet, fan, club, videoclip) were annotated or not depending on how recent or emergent they were. After all, a word like club, that has been around in Spanish language for centuries, cannot be considered emergent anymore and, for this project, would not be as interesting to retrieve as real emerging anglicisms. The notion of emergent is, however, time-dependent and quite subjective: in order to determine which unadapted, graphophonologically acceptable borrowings were to be annotated, the online version of the Diccionario de la lengua española dle was consulted. This dictionary is compiled by the Royal Spanish Academy, a prescriptive institution on Spanish language. This decision was motivated by the fact that, if a borrowing was already registered by this dictionary (that has conservative approach to language change) and is considered assimilated (that is, the institution recommended no italics or quotation marks to write that word) then it could be inferred that the word was not emergent anymore. Although the previous guidelines covered most cases, they proved insufficient. Some anglicisms were unadapted (they preserved their original spelling), unacceptable according to the Spanish graphophonological rules, and yet did not satisfy the condition of being emergent. 
That was the case of words like jazz or whisky: words that do not comply with Spanish graphophonological rules but that were imported decades ago, cannot be considered emergent anymore and are unlikely to ever be adapted into the Spanish spelling system. To adjudicate on those cases, the criterion of pragmatic markedness proposed by winter2012proposing (that distinguishes between catachrestic and non-catachrestic borrowing) was applied: if a borrowing was not adapted (i.e. its form remained exactly as it came from English) but referred to a particular invention or innovation that came via the English language, that was not perceived as new anymore and that had never competed with a Spanish equivalent, then it was ignored. This criterion proved to be extremely useful for dealing with old unadapted anglicisms in the fields of music and food. Figure 1 summarizes the decision steps followed during the annotation process. The corpus was annotated by a native speaker of Spanish using Doccano doccano. The annotation tagset includes two labels: ENG, to annotate the English borrowings just described, and OTHER. The OTHER tag was used to tag lexical borrowings from languages other than English. After all, although English is today by far the most prevalent donor of borrowings, there are other languages that also provide new borrowings to Spanish. Furthermore, the OTHER tag makes it possible to annotate borrowings such as première or tempeh, borrowings that etymologically do not come from English but that have entered the Spanish language via English influence, even when their spelling is very different from that of English borrowings. In general, we considered that having such a tag could also help assess how successful a classifier is at detecting foreign borrowings in general in Spanish newswire (without having to create a label for every possible donor language, as the number of examples would be too sparse). In total, the training set contained 40 entities labeled as OTHER, the development set contained 14 and the test set contained 13. The supplemental test set contained 35 OTHER entities. <<</Annotation guidelines>>> <<</Corpus description and annotation>>> <<<Baseline Model>>> A baseline model for the automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of text will be labeled as anglicisms (in a similar way to an NER task). The chosen model was a conditional random field (CRF) model, which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24. The model was built using pycrfsuite korobov2014python, the Python wrapper for crfsuite CRFsuite that implements CRF for labeling sequential data. It also used the Token and Span utilities from the spaCy library honnibal2017spacy.
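To make the setup concrete, the sketch below shows how such a pycrfsuite baseline could be assembled; the exact handcrafted feature set used in the paper is enumerated in the next paragraph, and only a subset of it is illustrated here. The training sentence, labels and file names are placeholders, so this is a hedged illustration rather than the authors' implementation.

import pycrfsuite

def token_features(sent, i):
    # Illustrative subset of the handcrafted features listed below: token, casing, suffix and character trigrams
    tok = sent[i]
    feats = {
        "bias": 1.0,
        "token": tok.lower(),
        "is_upper": float(tok.isupper()),
        "is_title": float(tok.istitle()),
        "suffix3": tok[-3:],
    }
    feats.update({"tri_" + tok[j:j + 3]: 1.0 for j in range(max(len(tok) - 2, 1))})
    return feats

# Toy training data with multi-token BIO encoding; real training would use the annotated headline corpus
sents = [["Las", "fake", "news", "preocupan"]]
labels = [["O", "B-ENG", "I-ENG", "O"]]

trainer = pycrfsuite.Trainer(verbose=False)  # default training algorithm is L-BFGS
for s, y in zip(sents, labels):
    trainer.append([token_features(s, i) for i in range(len(s))], y)
trainer.set_params({"c1": 0.05, "c2": 0.01})  # the L1/L2 coefficients reported as best in the paper
trainer.train("anglicism_baseline.crfsuite")

tagger = pycrfsuite.Tagger()
tagger.open("anglicism_baseline.crfsuite")
print(tagger.tag([token_features(sents[0], i) for i in range(len(sents[0]))]))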
The following handcrafted features were used for the model: a bias feature; the token itself; an uppercase feature (y/n); a titlecase feature (y/n); a character trigram feature; a quotation feature (y/n); a word suffix feature (the last three characters); the POS tag (provided by spaCy utilities); the word shape (provided by spaCy utilities); and a word embedding (see Table TABREF26). Given that anglicisms can be multiword expressions (such as best seller, big data) and that those units should be treated as one borrowing and not as two independent borrowings, we used multi-token BIO encoding to denote the boundaries of each span BIBREF44. A window of two tokens in each direction was set for the feature extractor. The algorithm used was gradient descent with the L-BFGS method. The model was tuned on the development set using grid search; the hyperparameters considered were c1 (L1 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), c2 (L2 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), embedding scaling ($0.5$, $1.0$, $2.0$, $4.0$), and embedding type bojanowski2017enriching,josecanete20193255001,cardellinoSBWCE,grave2018learning,honnibal2017spacy,perezfasttext,perezglove (see Table TABREF26). The best results were obtained with c1 = $0.05$, c2 = $0.01$, scaling = $0.5$ and the word2vec Spanish embeddings by cardellinoSBWCE. The threshold for the stopping criterion delta was selected by observing the loss during preliminary experiments (delta = $1\mathrm {e}-3$). In order to assess the significance of the handcrafted features, a feature ablation study was done on the tuned model, ablating one feature at a time and testing on the development set. Due to the scarcity of spans labeled with the OTHER tag on the development set (only 14) and given that the main purpose of the model is to detect anglicisms, the baseline model was run ignoring the OTHER tag both during tuning and the feature ablation experiments. Table TABREF27 displays the results on the development set with all features and for the different feature ablation runs. The results show that all features proposed for the baseline model contribute to the results, with the character trigram feature being the one that has the biggest impact in the feature ablation study. <<</Baseline Model>>> <<<Results>>> The baseline model was then run on the test set and the supplemental test set with the set of features and hyperparameters mentioned in Section SECREF5. Table TABREF28 displays the results obtained. The model was run both with and without the OTHER tag. The metrics for ENG display the results obtained only for the spans labeled as anglicisms; the metrics for OTHER display the results obtained for any borrowing other than anglicisms. The metrics for BORROWING disregard the type of label and consider correct any labeled span that has correct boundaries, regardless of the label type (so any type of borrowing, whether ENG or OTHER). In all cases, only full matches were considered correct and no credit was given to partial matching, i.e. if only fake in fake news was retrieved, it was considered wrong and no partial score was given. Results on all sets show an important difference between precision and recall, precision being significantly higher than recall. There is also a significant difference between the results obtained on the development and test sets (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49).
The time difference between the supplemental test set and the development and test sets (the headlines from the supplemental test set being from a different time period than the training set) can probably explain these differences. Comparing the results with and without the OTHER tag, it seems that including it on the development and test set produces worse results (or they remain roughly the same, at best). However, the best precision result on the supplemental test set was obtained when including the OTHER tag and considering both ENG and OTHER spans as BORROWING (precision = 87.62). This is caused by the fact that, while the development and test set were compiled from anglicism-rich newspaper sections (similar to the training set), the supplemental test set contained headlines from all the sections in the newspaper, and therefore included borrowings from other languages such as Catalan, Basque or French. When running the model without the OTHER tag on the supplemental test set, these non-English borrowings were labeled as anglicisms by the model (after all, their spelling does not resemble Spanish spelling), damaging the precision score. When the OTHER tag was included, these non-English borrowings were correctly labeled as OTHER, improving the precision score. This shows that, although the OTHER tag might be irrelevant or even damaging when testing on the development or test set, it can be useful when testing on more naturalistic data, such as that in the supplemental test set. Concerning errors, two types of errors were recurrent among all sets: long titles of songs, films or series written in English were a source of false positives, as the model tended to mistake some of the uncapitalized words in the title for anglicisms (for example, it darker in “`You want it darker', la oscura y brillante despedida de Leonard Cohen”). On the other hand, anglicisms that appear in the first position of the sentence (and were, therefore, capitalized) were consistently ignored (as the model probably assumed they were named entities) and produced a high number of false negatives (for example, vamping in “Vamping: la recurrente leyenda urbana de la luz azul `asesina'”). The results in Table TABREF28 cannot, however, be compared to the ones reported by previous work: the metric that we report is span F-measure, as the evaluation was done at the span level (instead of the token level) and credit was only given to full matches. Secondly, there was no Spanish tag assigned to non-borrowings, which means that no credit was given if a Spanish token was identified as such. <<</Results>>> <<<Future Work>>> This is an on-going project. The corpus we have just presented is a first step towards the development of an extractor of emerging anglicisms in the Spanish press. Future work includes: assessing whether to keep the OTHER tag, improving the baseline model (particularly to improve recall), assessing the suitability and contribution of different sets of features and exploring different models. In terms of corpus development, the training set is now closed and stable, but the test set could potentially be increased in order to have more, and more diverse, anglicisms. <<</Future Work>>> <<<Conclusions>>> In this paper we have presented a new corpus of 21,570 newspaper headlines written in European Spanish. The corpus is annotated with emergent anglicisms and, to the best of our knowledge, is the first corpus of this type to be released publicly.
We have presented the annotation scope, tagset and guidelines, and we have introduced a CRF baseline model for anglicism extraction trained with the described corpus. The results obtained show that the corpus and baseline model are appropriate for automatic anglicism extraction. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Conclusions, Related Work" ], "type": "disordered_section" }
1910.00825
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Abstractive Dialog Summarization with Semantic Scaffolds <<<Abstract>>> The demand for abstractive dialog summary is growing in real-world applications. For example, customer service center or hospitals would like to summarize customer service interaction and doctor-patient interaction. However, few researchers explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet)to utilize the existing annotation on speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics. <<</Abstract>>> <<<Introduction>>> Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track. There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them to a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly study extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form summary. Because dialogs are highly dependant on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization dataset like CNN/Daily Mail BIBREF3 is on news documents. AMI meeting corpus BIBREF4 is the common benchmark, but it only has extractive summary. In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news document. 
However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder to attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics. <<</Introduction>>> <<<Related Work>>> BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15. Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization. Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work. 
<<</Related Work>>> <<<Proposed Method>>> As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose the Scaffold Pointer Network (SPNet), based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain. <<<Background>>> We first introduce Pointer-Generator BIBREF5. It is a hybrid of the typical Seq2Seq attention model BIBREF29 and the pointer network BIBREF11. The Seq2Seq framework encodes the source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ at each encoding step. The decoder receives the word embedding of the previous word and generates a distribution to decide the target element at this step, retaining decoder hidden states $s_t$. In Pointer-Generator, the attention distribution $a^t$ is computed as in BIBREF9: $e_i^t = v^{T} \tanh (W_h h_i + W_s s_t + b_{attn})$ and $a^t = \mathrm{softmax}(e^t)$, where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters. With the attention distribution $a^t$, the context vector $h_t^*$ is computed as the weighted sum of the encoder's hidden states and is regarded as the attentional information from the source text: $h_t^* = \sum _i a_i^t h_i$. Pointer-Generator differs from the typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. The generation probability $p_{gen}$ is calculated as “a soft switch” to choose between copy and generation: $p_{gen} = \sigma (w_{h^*}^{T} h_t^* + w_s^{T} s_t + w_x^{T} x_t + b_{ptr})$, where $x_t$ is the decoder input and $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\sigma$ is the sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$. The ability to select between copy and generation corresponds to a dynamic vocabulary. The pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary (OOV) words that appear in the source text. The final probability distribution $P(w)$ over the extended vocabulary is computed as follows: $P(w) = p_{gen} P_{vocab}(w) + (1 - p_{gen}) \sum _{i: w_i = w} a_i^t$, with $P_{vocab} = \mathrm{softmax}(V^{\prime }(V[s_t, h_t^*] + b) + b^{\prime })$, where $P_{vocab}$ is the distribution over the original vocabulary and $V^{\prime }$, $V$, $b$ and $b^{\prime }$ are learnable parameters used to calculate this distribution. <<</Background>>> <<<Scaffold Pointer Network (SPNet)>>> Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating a semantic slot scaffold and incorporating a dialog domain scaffold. <<<Speaker Role Scaffold>>> Our encoder-decoder framework employs separate encoding for the different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain the encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$. The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized from a combination of the two encoders' outputs. The pointing mechanism in our model follows Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$ by combining the attentional information from both encoders. <<</Speaker Role Scaffold>>> <<<Semantic Slot Scaffold>>> We integrate the semantic slot scaffold by performing delexicalization on the original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with their semantic slot names (e.g. replacing 18:00 with [time]).
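As a toy illustration of this delexicalization step (not the authors' code), assume a mapping from annotated slot values to slot names, such as the one provided by MultiWOZ-style belief spans; the values below are made up for the example.

slot_values = {"18:00": "[time]", "Pizza Hut Fen Ditton": "[restaurant_name]", "Sunday": "[day]"}

def delexicalize(utterance, slot_values):
    # Replace annotated slot values with their slot names; longest values first to avoid partial overlaps
    for value in sorted(slot_values, key=len, reverse=True):
        utterance = utterance.replace(value, slot_values[value])
    return utterance

print(delexicalize("I booked Pizza Hut Fen Ditton for Sunday at 18:00.", slot_values))
# -> "I booked [restaurant_name] for [day] at [time]."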
It is easier for a language model to process delexicalized text, as it has a reduced vocabulary size, but the generated sentences lack semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed a single delexicalized utterance BIBREF31 as the generated response. We propose to perform delexicalization in dialog summarization, since delexicalized utterances can simplify dialog modeling. We then fill the slots in the generated templates using the copy and pointing mechanism. We first train the model with the delexicalized utterances. The attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values; here $w_{slot}$ denotes the tokens that represent slot names (e.g. [hotel_place], [time]). The decoder directly copies the lexicalized value $value(w_i)$ conditioned on the attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as in Equation DISPLAY_FORM5. <<</Semantic Slot Scaffold>>> <<<Dialog Domain Scaffold>>> We integrate the dialog domain scaffold through a multi-task framework. The dialog domain indicates the content of the conversation task, for example booking a hotel, restaurant or taxi in the MultiWOZ dataset. Generally, the content in different domains varies, so multi-domain summarization is more difficult than single-domain summarization. We include domain classification as an auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain-specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers (with trainable parameters $U$, $U^{\prime }$, $b_{d}$ and $b_{d}^{\prime }$), producing the domain probability $d$. The $i^{th}$ element $d_i$ of $d$ represents the probability of the $i^{th}$ domain. We denote the loss function of summarization as $loss_1$ and that of domain classification as $loss_2$. Assuming the target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence: $loss_1 = -\frac{1}{T} \sum _{t=1}^{T} \log P(w_t^{*})$, where $T$ is the length of the target sequence. The domain classification task is a multi-label binary classification problem. We use the binary cross entropy loss between the $i^{th}$ domain label $\hat{d_i}$ and the predicted probability $d_i$ for this task: $loss_2 = -\frac{1}{|D|} \sum _{i=1}^{|D|} \left( \hat{d_i} \log d_i + (1 - \hat{d_i}) \log (1 - d_i) \right)$, where $|D|$ is the number of domains. Finally, we reweight the classification loss with the hyperparameter $\lambda$ and the objective function is $loss = loss_1 + \lambda \, loss_2$. <<</Dialog Domain Scaffold>>> <<</Scaffold Pointer Network (SPNet)>>> <<</Proposed Method>>> <<<Experimental Settings>>> <<<Dataset>>> We validate SPNet on the MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and an information center clerk on various booking tasks or domains, such as booking restaurants, hotels and taxis. There are 10,438 dialogs, spanning seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instructions are provided for crowd workers to perform the task. We use these instructions as the dialog summary, and an example is shown in Table TABREF25. Dialog domain labels are extracted from the existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing dialogs.
<<</Dataset>>> <<<Evaluation Metrics>>> ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human-written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L, which measure the word overlap, bigram overlap, and longest common subsequence between the reference summary and the generated summary, respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations: Reference: You are going to [restaurant_name] at [time]. Summary: You are going to [restaurant_name] at. In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, as it leaves out one of the most critical pieces of information: [time]. ROUGE treats each word equally when computing n-gram overlap, while informativeness actually varies: common words or phrases (e.g. “You are going to”) contribute significantly to the ROUGE score and readability, but they are almost irrelevant to the essential content. The semantic slot values (e.g. [restaurant_name], [time]) are more essential than other words in the summary. However, ROUGE does not take this into consideration. To address this drawback of ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows: $\mathrm{CIC} = \frac{\sum _{v \in V} Count_{match}(v)}{m}$, where $V$ stands for the set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to reflect the overall performance. CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain. <<</Evaluation Metrics>>> <<<Implementation Details>>> We implemented our baselines with the OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information in different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and those of the unidirectional LSTM decoder to 512 dimensions. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\beta _1=0.9$, $\beta _2=0.999$. We halve the learning rate to avoid overfitting when the validation loss increases. We set the hyperparameter $\lambda$ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameters. Our model with and without multi-task learning takes about 15 and seven epochs to converge, respectively.
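Before moving to the results, here is a minimal sketch of the CIC recall defined in the Evaluation Metrics subsection above. Slot matching is simplified to substring containment and the slot values are placeholders, so this is an illustration rather than the authors' implementation; in the paper, the score is additionally averaged over dialog domains.

def cic(candidate, reference_values):
    # Recall of critical information: the fraction of reference slot values that also appear in the candidate
    if not reference_values:
        return 1.0
    matched = sum(1 for v in reference_values if v in candidate)
    return matched / len(reference_values)

reference_values = {"Pizza Hut Fen Ditton", "18:45", "Sunday"}  # placeholder slot values from a reference summary
print(cic("You are going to Pizza Hut Fen Ditton at 18:45.", reference_values))  # 2/3, since "Sunday" is missing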
<<</Implementation Details>>> <<</Experimental Settings>>> <<<Results and Discussions>>> <<<Automatic Evaluation Results>>> To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation. We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics. <<</Automatic Evaluation Results>>> <<<Human Evaluation Results>>> We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed). We present human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator in all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similar in conciseness. Ground truth is still perceived as more relevant and readable than SPNet results. However, ground truth does not get a high absolute score. From the feedback of the evaluators, we found that they think that the ground truth has not covered all the necessary information in the conversation, and the description is not so natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results in the ranking evaluation show more differences between different summaries. SPNet outperforms Pointer-Generator with a large margin. Its performance is relatively close to the ground truth summary. <<</Human Evaluation Results>>> <<<Case study>>> Table TABREF25 shows an example summary from all models along with ground truth summary. 
We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). The missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi” several times and has conflicting requirements on wifi. This is because dialogs have information redundancy, but a single-speaker model ignores this dialog property. Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred to in the source. This occurs because the ground truth summary does not cover them in the training data. As a supervised method, it is hard for SPNet to generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize content not covered in the reference summary (see Table TABREF31 in the Appendix). Furthermore, although our SPNet achieves a much-improved performance, applying SPNet still requires extra annotations for the semantic scaffolds. For a dialog dataset, the speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpora have domain annotations. For texts such as news, topic categories such as sports or entertainment can be used as the domain annotation. We find that the semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team names in sports news or professional terminology in a technical meeting. <<</Case study>>> <<</Results and Discussions>>> <<<Conclusion and Future Work>>> We adapt a dialog generation dataset, MultiWOZ, to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric, CIC, that considers semantic slot relevance to serve as a complementary metric to ROUGE. SPNet outperforms baseline methods in both automatic and human evaluation metrics. This suggests that incorporating semantic scaffolds effectively improves abstractive summarization quality in the dialog setting. Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply the semantic slot scaffold to news summarization. Specifically, we can annotate critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Proposed Method, Introduction" ], "type": "disordered_section" }
1910.00825
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Abstractive Dialog Summarization with Semantic Scaffolds <<<Abstract>>> The demand for abstractive dialog summary is growing in real-world applications. For example, customer service center or hospitals would like to summarize customer service interaction and doctor-patient interaction. However, few researchers explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet)to utilize the existing annotation on speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics. <<</Abstract>>> <<<Introduction>>> Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track. There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them to a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly study extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form summary. Because dialogs are highly dependant on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization dataset like CNN/Daily Mail BIBREF3 is on news documents. AMI meeting corpus BIBREF4 is the common benchmark, but it only has extractive summary. In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news document. 
However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder to attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics. <<</Introduction>>> <<<Related Work>>> BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15. Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization. Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work. 
<<</Related Work>>> <<<Proposed Method>>> As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose the Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain. <<<Background>>> We first introduce Pointer-Generator BIBREF5. It is a hybrid of the typical Seq2Seq attention model BIBREF29 and the pointer network BIBREF11. The Seq2Seq framework encodes the source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives the word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. In Pointer-Generator, the attention distribution $a^t$ is computed as in BIBREF9: where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters. With the attention distribution $a^t$, the context vector $h_t^*$ is computed as the weighted sum of the encoder's hidden states. The context vector is regarded as the attentional information in the source text: Pointer-Generator differs from the typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. The generation probability $p_{gen}$ is calculated as “a soft switch” to choose between copying and generating: where $x_t$ is the decoder input, and $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\sigma $ is the sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$. The ability to select between copying and generating corresponds to a dynamic vocabulary. The pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary (OOV) words that appear in the source text. The final probability distribution $P(w)$ over the extended vocabulary is computed as follows: where $P_{vocab}$ is the distribution over the original vocabulary, and $V^{\prime }$, $V$, $b$ and $b^{\prime }$ are learnable parameters used to calculate this distribution. <<</Background>>> <<<Scaffold Pointer Network (SPNet)>>> Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating the semantic slot scaffold, and incorporating the dialog domain scaffold. <<<Speaker Role Scaffold>>> Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain the encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$. The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as: The pointing mechanism in our model follows Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$: <<</Speaker Role Scaffold>>> <<<Semantic Slot Scaffold>>> We integrate the semantic slot scaffold by performing delexicalization on the original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces slot values with their semantic slot names (e.g., replacing 18:00 with [time]).
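As a concrete illustration of this pre-processing step, the following minimal Python sketch delexicalizes an utterance given a value-to-slot mapping (an illustrative simplification, not the authors' code; the helper name and the slot inventory are hypothetical):

    # Minimal delexicalization sketch. `slot_values` maps annotated surface values
    # to semantic slot names, e.g. {"18:00": "time", "Pizza Hut": "restaurant_name"}.
    def delexicalize(utterance, slot_values):
        # Replace longer values first so that substrings are not clobbered.
        for value in sorted(slot_values, key=len, reverse=True):
            utterance = utterance.replace(value, "[" + slot_values[value] + "]")
        return utterance

    # Example: delexicalize("i want to book a table at Pizza Hut at 18:00",
    #                       {"18:00": "time", "Pizza Hut": "restaurant_name"})
    # -> "i want to book a table at [restaurant_name] at [time]"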
It is easier for the language model to process delexicalized texts, as they have a reduced vocabulary size. However, the generated sentences lack semantic information due to delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed a single delexicalized utterance BIBREF31 as the generated response. We propose to perform delexicalization in dialog summarization, since delexicalized utterances can simplify dialog modeling. We fill the slots in the generated templates using the copy and pointing mechanism. We first train the model with the delexicalized utterances. The attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values: Note that $w_{slot}$ specifies the tokens that represent the slot names (e.g., [hotel_place], [time]). The decoder directly copies the lexicalized value $value(w_i)$ conditioned on the attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as in Equation DISPLAY_FORM5. <<</Semantic Slot Scaffold>>> <<<Dialog Domain Scaffold>>> We integrate the dialog domain scaffold through a multi-task framework. The dialog domain indicates different conversation task content, for example, booking a hotel, restaurant or taxi in the MultiWOZ dataset. Generally, the content in different domains varies, so multi-domain summarization is more difficult than single-domain summarization. We include domain classification as an auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain-specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing the domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain: where $U$, $U^{\prime }$, $b_{d}$ and $b_{d}^{\prime }$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and that of domain classification as $loss_2$. Assuming the target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence: The domain classification task is a multi-label binary classification problem. We use the binary cross entropy loss between the $i^{th}$ domain label $\hat{d_i}$ and the predicted probability $d_i$ for this task: where $|D|$ is the number of domains. Finally, we reweight the classification loss with the hyperparameter $\lambda $, and the objective function is: <<</Dialog Domain Scaffold>>> <<</Scaffold Pointer Network (SPNet)>>> <<</Proposed Method>>> <<<Experimental Settings>>> <<<Dataset>>> We validate SPNet on the MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and an information center clerk on various booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning over seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, an instruction is provided for crowd workers to perform the task. We use the instructions as the dialog summary, and an example is shown in Table TABREF25. Dialog domain labels are extracted from the existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing.
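For reference, the following PyTorch-style sketch illustrates the overall training objective described in the Dialog Domain Scaffold subsection above (an illustrative simplification, not the authors' code; the function name, tensor shapes, the use of logits before the sigmoid, and the default weight value are assumptions):

    import torch
    import torch.nn.functional as F

    def spnet_loss(step_log_probs, target_ids, domain_logits, domain_labels, lam=0.5):
        # step_log_probs: (T, V') log P(w) over the extended vocabulary per decoding step.
        # target_ids:     (T,)   indices of the reference summary tokens.
        # domain_logits:  (|D|,) domain classifier outputs before the sigmoid.
        # domain_labels:  (|D|,) binary labels for the domains present in the dialog.
        loss1 = -step_log_probs.gather(1, target_ids.unsqueeze(1)).mean()   # mean NLL of the reference tokens
        loss2 = F.binary_cross_entropy_with_logits(domain_logits, domain_labels.float())
        return loss1 + lam * loss2                                          # loss1 + lambda * loss2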
<<</Dataset>>> <<<Evaluation Metrics>>> ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human-written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word overlap, bigram overlap, and longest common subsequence between the reference summary and the generated summary, respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations: Reference: You are going to [restaurant_name] at [time]. Summary: You are going to [restaurant_name] at. In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, because it leaves out one of the most critical pieces of information: [time]. ROUGE treats each word equally in computing n-gram overlap, while informativeness actually varies: common words or phrases (e.g., “You are going to”) significantly contribute to the ROUGE score and readability, but they are almost irrelevant to the essential content. The semantic slot values (e.g., [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE does not take this into consideration. To address this drawback of ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows: where $V$ stands for the set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to reflect the overall performance. CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization, the proper nouns are the critical information to retain. <<</Evaluation Metrics>>> <<<Implementation Details>>> We implemented our baselines with the OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g., time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimensions. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\beta _1=0.9$, $\beta _2=0.999$. We halve the learning rate when the validation loss increases to avoid overfitting. We set the hyperparameter $\lambda $ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameters. Our model with and without multi-task learning takes about 15 epochs and seven epochs to converge, respectively.
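For reference, here is a minimal re-implementation sketch of the CIC metric defined in the Evaluation Metrics subsection above (this is not the authors' code; the handling of a domain with no critical values is an assumption):

    def cic(candidate_summary, reference_values):
        # reference_values: the set V of slot values that appear in the reference
        # summary for one dialog domain; m = len(reference_values).
        if not reference_values:
            return 1.0  # assumption: a domain with no critical values is trivially complete
        matched = sum(1 for v in reference_values if v in candidate_summary)
        return matched / len(reference_values)

    def cic_mean_over_domains(candidate_summary, values_per_domain):
        # Arithmetic mean of per-domain CIC, as used in the experiments above.
        scores = [cic(candidate_summary, values) for values in values_per_domain.values()]
        return sum(scores) / len(scores)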
<<</Implementation Details>>> <<</Experimental Settings>>> <<<Results and Discussions>>> <<<Automatic Evaluation Results>>> To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use the length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation. We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores but relatively low CIC scores. This suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but has a higher cost in training time and computing resources. We observe that SPNet outperforms the other methods in all the automatic evaluation metrics by a large margin, as it incorporates all three semantic scaffolds. The semantic slot contributes the most to SPNet's increased performance, bringing the largest increase in all automatic evaluation metrics. <<</Automatic Evaluation Results>>> <<<Human Evaluation Results>>> We also perform human evaluation to verify whether our method's increased performance on automatic evaluation metrics entails better human-perceived quality. We randomly select 100 test samples from the MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, the reference summary, as well as the summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and to rank the summary pair (tie allowed). We present human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator in all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similarly in conciseness. Ground truth is still perceived as more relevant and readable than SPNet results. However, ground truth does not get a high absolute score. From the evaluators' feedback, we found that they think the ground truth does not cover all the necessary information in the conversation and that its description is not natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results in the ranking evaluation show more differences between the summaries. SPNet outperforms Pointer-Generator by a large margin. Its performance is relatively close to the ground truth summary. <<</Human Evaluation Results>>> <<<Case study>>> Table TABREF25 shows an example summary from all models along with the ground truth summary.
We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi” several times and has conflicting requirements on wifi. This is because dialogs have information redundancy, but a single-speaker model ignores this dialog property. Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred to in the source. This occurs because the ground truth summary does not cover them in the training data. As a supervised method, it is hard for SPNet to generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize content not covered in the reference summary (see Table TABREF31 in the Appendix). Furthermore, although our SPNet achieves a much-improved performance, applying SPNet still requires extra annotations for the semantic scaffolds. For a dialog dataset, the speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpora have domain annotation. For other texts, for example news, topic categories such as sports or entertainment can be used as the domain annotation. We find that the semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team names in sports news or professional terminology in a technical meeting. <<</Case study>>> <<</Results and Discussions>>> <<<Conclusion and Future Work>>> We adapt a dialog generation dataset, MultiWOZ, to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric, CIC, that considers semantic slot relevance to serve as a complementary metric to ROUGE. SPNet outperforms baseline methods in both automatic and human evaluation metrics. This suggests that involving semantic scaffolds efficiently improves abstractive summarization quality in the dialog scene. Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply the semantic slot scaffold to news summarization. Specifically, we can annotate critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Abstract, Experimental Settings" ], "type": "disordered_section" }
1910.00458
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension <<<Abstract>>> Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart where answers are usually spans of text within given passages. Moreover, most existing MCQA datasets are small in size, making the learning task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset to help model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate MMM significantly advances the state-of-the-art on four representative MCQA datasets. <<</Abstract>>> <<<Introduction>>> Building a system that comprehends text and answers questions is challenging but fascinating, which can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2, and HotPotQA BIBREF3. 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4, and MCTest BIBREF5. In comparison to extractive/abstractive QA tasks, the answers of the MCQA datasets are in the form of open, natural language sentences and not restricted to spans in text. Various question types exist such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore it requires more advanced reading skills for the machine to perform well on this task. Table TABREF1 shows one example from one of MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. 
As a result, the performance of machine readers on these tasks can more accurately gauge comprehension ability of a model. Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11. We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset). <<</Introduction>>> <<<Methods>>> In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. A MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$. <<<Model Architecture>>> Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, question and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence will be encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is then projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into the probability vector through a softmax layer. We choose the option with highest logit value $p$ as the answer. Cross entropy loss is used as the loss function. We used the pre-trained bidirectional transformer encoder, i.e., BERT and RoBERTa as the sentence encoder. 
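As a rough sketch of this scoring pipeline (an illustrative pseudo-implementation, not the authors' released code; `encoder`, `classifier` and `tokenize` are placeholder callables):

    import torch
    import torch.nn.functional as F

    def score_options(encoder, classifier, tokenize, passage, question, options):
        # Concatenate the passage, question and each option into one sequence,
        # encode it, and project the encoding to a single logit per option.
        logits = []
        for option in options:
            token_ids = tokenize(passage, question, option)      # LongTensor of length l
            hidden = encoder(token_ids.unsqueeze(0))              # representation H
            logits.append(classifier(hidden).squeeze())           # scalar p = C(H)
        logits = torch.stack(logits)                               # shape (n,)
        return F.softmax(logits, dim=0)                            # probability over options

    # Training minimizes cross entropy between this distribution and the gold option;
    # at inference the option with the highest logit is chosen.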
The top-level classifier will be detailed in the next subsection. <<</Model Architecture>>> <<<Multi-step Attention Network>>> For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer fully-connected neural network (FCNN), which consists of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for downstream classification tasks and performs very well BIBREF8. Inspired by the success of the attention network widely used in the span-based QA task BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via multi-step reasoning. The MAN classifier works as follows. A question and an answer option together are considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\in \mathbb {R}^{d\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\in \mathbb {R}^{d\times q}$. Alternatively, we can also encode the passage and the (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them as a pair performs better. We then perform $K$-step reasoning over the memory to output the final prediction. The initial state $\mathbf {s}^0$ in step 0 is the summary of $H^P$ via self-attention: $\mathbf {s}^0=\sum _i \alpha _i H_i^P$, where $\alpha _i=\frac{exp(w_1^TH_i^P)}{\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \in \lbrace 1,2,...,K-1\rbrace $, the state is calculated by: where $\mathbf {x}^k=\sum _i\beta _iH_i^{QO}$ and $\beta _i=\frac{exp(w_2^T[\mathbf {s}^{k-1};H_i^{QO}])}{\sum _j exp(w_2^T[\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is the concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state: Basically, the MAN classifier calculates the attention scores between the passage and the (question, option) pair step by step dynamically, such that the attention can refine itself through several steps of deliberation. The attention mechanism can help filter out irrelevant information in the passage with respect to the (question, option) pair. <<</Multi-step Attention Network>>> <<<Two Stage Training>>> We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets, as shown in Figure FIGREF10. <<<Coarse-tuning Stage>>> We first fine-tune the sentence encoder of our model with natural language inference (NLI) tasks. For exploration, we have also tried to fine-tune the sentence encoder on other types of tasks such as sentiment analysis, paraphrasing, and span-based question answering at this stage. However, we found that only the NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details. <<</Coarse-tuning Stage>>> <<<Multi-task Learning Stage>>> After the coarse-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset together via multi-task learning.
We share all model parameters including the sentence encoder as well as the top-level classifier for these two datasets. <<</Multi-task Learning Stage>>> <<</Two Stage Training>>> <<</Methods>>> <<<Experimental Setup>>> <<<Datasets>>> We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits. <<</Datasets>>> <<<Speaker Normalization>>> Passages in DREAM dataset are dialogues between two persons or more. Every utterance in a dialogue starts with the speaker name. For example, in utterance “m: How would he know?”, “m” is the abbreviation of “man” indicating that this utterance is from a man. More than 90% utterances have the speaker names as “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear for the model to learn which speaker the question is asking about, we used a speaker normalization strategy by replacing “w” or “f” with “woman” and “m” with “man” for the speaker names in the utterances. We found this simple strategy is quite effective, providing us with 1% improvement. We will always use this strategy for the DREAM dataset for our method unless explicitly mentioned. <<</Speaker Normalization>>> <<<Multi-task Learning>>> For the multi-task learning stage, at each training step, we randomly selected a dataset from the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps or the early stopping criterion has been met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17. <<</Multi-task Learning>>> <<<Training Details>>> We used a linear learning rate decay schedule with warm-up proportion of $0.1$. We set the dropout rate as $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for DREAM dataset and 0 for other datasets. The learning rate and number of training epochs vary for different datasets and encoder types, which are summarized in Section 1 of the Supplementary Material. More than 90% of passages have more than 512 words in the TOEFL dataset, which exceed the maximum sequence length that BERT supports, thus we cannot process the whole passage within one forward pass. To solve this issue, we propose the sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets and each snippet from the same passage will be assigned with the same label. In training phase, all snippets will be used for training, and in inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with highest logit value as the prediction. In experiments, we found the overlap of 256 words is the optimal, which can improve the BERT-Base model from accuracy of 50.0% to 53.2%. We adopted this sliding window strategy only for the TOEFL dataset. 
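A minimal sketch of this sliding-window splitting (an illustrative simplification, not the authors' code; aggregation by summation at inference is an assumption, since the text only says the snippet logits are aggregated):

    def split_into_snippets(token_ids, max_len=512, overlap=256):
        # Split a long passage into overlapping snippets of at most max_len tokens.
        stride = max_len - overlap
        snippets = []
        for start in range(0, len(token_ids), stride):
            snippets.append(token_ids[start:start + max_len])
            if start + max_len >= len(token_ids):
                break
        return snippets

    # Training: every snippet of a passage is paired with the same label.
    # Inference: the logit vectors of all snippets from one passage are aggregated
    # (e.g., summed) and the option with the highest aggregated logit is picked.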
<<</Training Details>>> <<</Experimental Setup>>> <<<Results>>> We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models on the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for the BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report the performance of models that use our full proposed method, MMM (MAN classifier + speaker normalization + two-stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder, marked in parentheses, from which we can see that the performance gain is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., a test accuracy of 88.9%, which exceeds the previous best by 16.9%. We also test our method on three other MCQA datasets: MCTest including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method can improve the BERT-Large model by at least 10%. For both the MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9, with the only difference being that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than on MC500. We think the reason is that the data size of MC160 is not enough to properly fine-tune the large models, which have a huge amount of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of the BERT and RoBERTa models on the small datasets, so that the best performance on MC160 can even surpass that on MC500. This demonstrates the effectiveness of our method. To better understand why MMM is successful, we conducted an ablation study by removing one feature at a time on the BERT-Base model. The results are shown in Table TABREF18. We see that the removal of the second-stage multi-task learning part hurts our method most significantly, indicating that the majority of the improvement comes from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, as it provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we see a 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\sim $1% improvement. <<</Results>>> <<<Discussion>>> <<<Why does natural language inference help?>>> As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA.
We conjecture one of the reasons is that, in order to pick the correct answer, we need to rely on the language inference capability in many cases. As an example, in Table TABREF1, the utterance highlighted in bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment of the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the (question, answer) pair as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that is best entailed by the premise. In this sense, part of the MCQA task can be deemed an NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and that it can help support other tasks that require a higher level of language processing ability BIBREF21. We provide several more examples that require language inference reading skills in Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data in the coarse-tuning stage. <<</Why does natural language inference help?>>> <<<Can other tasks help with MCQA?>>> By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something, and in some cases the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks, such as sentiment classification and paraphrasing, also help with MCQA problems? To answer this question, we select several representative datasets for five categories as the upstream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments where we first train the BERT-Base models on each of the five categories and then further fine-tune our models on the target datasets: DREAM and MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k train examples) and the Yelp dataset (around 430k train examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For span-based QA, we use SQuAD 1.1, SQuAD 2.0, and MRQA, which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that sentiment analysis datasets do not help much with our target MCQA datasets, but the paraphrase datasets do bring some improvements for MCQA. For span-based QA, only SQuAD 2.0 helps to improve the performance on the target dataset. Interestingly, although MRQA is much larger than the other QA datasets (at least six times larger), it leads to the worst performance. This suggests that span-based QA might not be an appropriate source task for transfer learning for MCQA.
We hypothesize this could be due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive), while all answers are extractive in the span-based QA datasets. For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QNLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI denoted as “NLI”, and all five datasets combined together, denoted as “GLUE-NLI”. As the results in Table TABREF23 show, NLI and GLUE-NLI are comparable and both can improve performance on the target dataset by a large margin. Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on the RACE dataset, helps boost the performance most. This result agrees with the intuition that an in-domain dataset can be the ideal data for transfer learning. In conclusion, we find that among out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of MCQA systems. Besides, a larger in-domain dataset, i.e., another MCQA dataset, can also be very useful. <<</Can other tasks help with MCQA?>>> <<<NLI dataset helps with convergence>>> The first stage of coarse-tuning with NLI data can not only improve the accuracy but also help the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models, which have a much larger amount of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets, convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning can make the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning, the model does not converge at all in the first several epochs, which can be completely resolved with the help of NLI data. <<</NLI dataset helps with convergence>>> <<<Multi-stage or Multi-task>>> In a typical scenario where we have one source and one target dataset, we naturally face the question of whether we should simultaneously train a model on them via multi-task learning or first train on the source dataset and then on the target dataset sequentially. Many previous works adopted the latter approach BIBREF19, BIBREF20, BIBREF23, and BIBREF20 demonstrated that the sequential fine-tuning approach outperforms the multi-task learning setting in their experiments. However, we had contradictory observations in our experiments. Specifically, we conducted a pair of control experiments: in one, we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune it on the target dataset, and in the other, we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that compared with sequential fine-tuning, multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some information or knowledge learned from the source dataset may be lost since the model is no longer exposed to the source dataset in this stage.
In comparison, this information can be kept in the multi-task learning setting and thus can better help improve performance on the target dataset. Now that the multi-task learning approach outperforms the sequential fine-tuning setting, we naturally arrive at another question: what if we merged the coarse-tuning and multi-task learning stages together? That is, what if we simultaneously trained on the NLI, source, and target datasets altogether under the multi-task learning framework? We also conducted a pair of control experiments for investigation. The results in Table TABREF27 show that casting the fine-tuning process on three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework with coarse-tuning on out-of-domain datasets and fine-tuning on in-domain datasets. <<</Multi-stage or Multi-task>>> <<<Multi-steps reasoning is important>>> Previous results show that the MAN classifier improves over the FCNN classifier, but we are also interested in how the performance changes while varying the number of reasoning steps $K$, as shown in Figure FIGREF29. $K=0$ means that we do not use MAN but FCNN as the classifier. We observe that there is a gradual improvement as we increase $K$ from 1 to 5, but after 5 steps the improvement saturates. This verifies that an appropriate number of reasoning steps is important for the memory network to reflect its benefits. <<</Multi-steps reasoning is important>>> <<<Could the source dataset be benefited?>>> So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets to help improve the targets. We also want to see whether our proposed techniques can benefit the source dataset itself. Table TABREF31 summarizes the results of the BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques bring improvements over the baseline model for the source dataset RACE, among which the NLI coarse-tuning stage helps elevate the scores most. Since we found that all parts of MMM work well for the source dataset, we tried to use them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we listed the officially reported scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained with the RoBERTa-Large encoder. <<</Could the source dataset be benefited?>>> <<<Error Analysis>>> In order to investigate how well our model performs for different types of questions, we did an error analysis by first randomly selecting 150 samples from the development set of the DREAM dataset that were wrongly predicted by the BERT-Base baseline model. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy for each question type in the last column of Table TABREF34.
We find that our best model can improve upon every question type significantly, especially for the matching problems, and most surprisingly, our best model can even greatly improve its ability to solve arithmetic problems, achieving an accuracy of 73.7%. However, could our model really do math? To investigate this question, we sampled some arithmetic questions that are correctly predicted by our model, made small alterations to the passage or question, and then checked whether our model can still make correct choices. We found our model is very fragile to these minor alterations, indicating that the model is actually not that good at arithmetic problems. We provide one interesting example in Section 3 of the Supplementary Material. <<</Error Analysis>>> <<</Discussion>>> <<<Related Work>>> There is increasing interest in machine reading comprehension (MRC) for question answering (QA). The extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free-text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers are still extractive in these datasets. The multi-choice QA datasets are collected either via crowdsourcing or from examinations designed by educational experts BIBREF7. In this type of QA dataset, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5. Progress of research for MRC first relies on the breakthrough of the sentence encoder, from the basic LSTM to the pre-trained transformer-based model BIBREF8, which has elevated the performance of all MRC models by a large margin. Besides, attention mechanisms between the context and the query can empower neural models with higher performance BIBREF11. In addition, some techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can also be helpful. Transfer learning has been widely proved to be effective across many domains in NLP. In the QA domain, the most well-known example of transfer learning would be fine-tuning a pre-trained language model such as BERT on downstream QA datasets such as SQuAD BIBREF8. Besides, multi-task learning can also be deemed a type of transfer learning, since during training on multiple datasets from different domains for different tasks, knowledge is shared and transferred from each task to the others, which has been used to build a generalized QA model BIBREF30. However, no previous work has investigated whether knowledge from NLI datasets can also be transferred to improve the MCQA task. <<</Related Work>>> <<<Conclusions>>> We propose MMM, a multi-stage multi-task transfer learning method for multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieve significant improvements for MCQA. We also perform a detailed analysis to explore the importance of both our training strategies as well as different kinds of in-domain and out-of-domain datasets. We hope our work here can also shed light on new directions for other NLP domains. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Abstract, Introduction" ], "type": "disordered_section" }
1910.00458
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension <<<Abstract>>> Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart where answers are usually spans of text within given passages. Moreover, most existing MCQA datasets are small in size, making the learning task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset to help model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate MMM significantly advances the state-of-the-art on four representative MCQA datasets. <<</Abstract>>> <<<Introduction>>> Building a system that comprehends text and answers questions is challenging but fascinating, which can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2, and HotPotQA BIBREF3. 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4, and MCTest BIBREF5. In comparison to extractive/abstractive QA tasks, the answers of the MCQA datasets are in the form of open, natural language sentences and not restricted to spans in text. Various question types exist such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore it requires more advanced reading skills for the machine to perform well on this task. Table TABREF1 shows one example from one of MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. 
As a result, the performance of machine readers on these tasks can more accurately gauge comprehension ability of a model. Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11. We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset). <<</Introduction>>> <<<Methods>>> In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. A MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$. <<<Model Architecture>>> Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, question and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence will be encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is then projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into the probability vector through a softmax layer. We choose the option with highest logit value $p$ as the answer. Cross entropy loss is used as the loss function. We used the pre-trained bidirectional transformer encoder, i.e., BERT and RoBERTa as the sentence encoder. 
The top-level classifier will be detailed in the next subsection. <<</Model Architecture>>> <<<Multi-step Attention Network>>> For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer full-connected neural network (FCNN), which consist of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for the down-streaming classification tasks and performs very well BIBREF8. Inspired from the success of the attention network widely used in the span-based QA task BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via the multi-step reasoning. The MAN classifier works as follows. A pair of question and answer option together is considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\in \mathbb {R}^{d\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\in \mathbb {R}^{d\times q}$. Alternatively, we can also encode the passage and (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them in a pair performs better. We then perform $K$-step reasoning over the memory to output the final prediction. Initially, the initial state $\mathbf {s}^0$ in step 0 is the summary of $H^P$ via self-attention: $\mathbf {s}^0=\sum _i \alpha _i H_i^P$, where $\alpha _i=\frac{exp(w_1^TH_i^P)}{\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \in {1,2,...,K-1}$, the state is calculated by: where $\mathbf {x}^k=\sum _i\beta _iH_i^{QO}$ and $\beta _i=\frac{exp(w_2^T[\mathbf {s}^{k-1};H_i^{QO}])}{\sum _j exp(w_2^T[\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state: Basically, the MAN classifier calculates the attention scores between the passage and (question, option) pair step by step dynamically such that the attention can refine itself through several steps of deliberation. The attention mechanism can help filter out irrelevant information in the passage against (question, option) pair. <<</Multi-step Attention Network>>> <<<Two Stage Training>>> We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10. <<<Coarse-tuning Stage>>> We first fine-tune the sentence encoder of our model with natural language inference (NLI) tasks. For exploration, we have also tried to fine-tune the sentence encoder on other types of tasks such as sentiment analysis, paraphrasing, and span-based question answering at this stage. However, we found that only NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details. <<</Coarse-tuning Stage>>> <<<Multi-task Learning Stage>>> After corase-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset together via multi-task learning. 
We share all model parameters including the sentence encoder as well as the top-level classifier for these two datasets. <<</Multi-task Learning Stage>>> <<</Two Stage Training>>> <<</Methods>>> <<<Experimental Setup>>> <<<Datasets>>> We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits. <<</Datasets>>> <<<Speaker Normalization>>> Passages in DREAM dataset are dialogues between two persons or more. Every utterance in a dialogue starts with the speaker name. For example, in utterance “m: How would he know?”, “m” is the abbreviation of “man” indicating that this utterance is from a man. More than 90% utterances have the speaker names as “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear for the model to learn which speaker the question is asking about, we used a speaker normalization strategy by replacing “w” or “f” with “woman” and “m” with “man” for the speaker names in the utterances. We found this simple strategy is quite effective, providing us with 1% improvement. We will always use this strategy for the DREAM dataset for our method unless explicitly mentioned. <<</Speaker Normalization>>> <<<Multi-task Learning>>> For the multi-task learning stage, at each training step, we randomly selected a dataset from the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps or the early stopping criterion has been met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17. <<</Multi-task Learning>>> <<<Training Details>>> We used a linear learning rate decay schedule with warm-up proportion of $0.1$. We set the dropout rate as $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for DREAM dataset and 0 for other datasets. The learning rate and number of training epochs vary for different datasets and encoder types, which are summarized in Section 1 of the Supplementary Material. More than 90% of passages have more than 512 words in the TOEFL dataset, which exceed the maximum sequence length that BERT supports, thus we cannot process the whole passage within one forward pass. To solve this issue, we propose the sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets and each snippet from the same passage will be assigned with the same label. In training phase, all snippets will be used for training, and in inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with highest logit value as the prediction. In experiments, we found the overlap of 256 words is the optimal, which can improve the BERT-Base model from accuracy of 50.0% to 53.2%. We adopted this sliding window strategy only for the TOEFL dataset. 
<<</Training Details>>> <<</Experimental Setup>>> <<<Results>>> We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%. We also test our method on three other MCQA datasets: MCTest including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method can improve the BERT-Large model by at least 10%. For both MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9 with only one difference that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than MC500. We think the reason is that the data size of MC160 is not enough to well fine-tune the large models with a huge amount of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of BERT and RoBERTa models on the small datasets so that the best performance of MC160 can even surpass that of MC500. This demonstrates the effectiveness of our method. To better understand why MMM can be successful, we conducted an ablation study be removing one feature at a time on the BERT-Base model. The results are shown in Table TABREF18. We see that the removal of the second stage multi-task learning part hurts our method most significantly, indicating that the majority of improvement is coming from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, which provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we have 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\sim $1% improvement. <<</Results>>> <<<Discussion>>> <<<Why does natural language inference help?>>> As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA. 
We conjecture one of the reasons is that, in order to pick the correct answer, we need to rely on the language inference capability in many cases. As an example in Table TABREF1, the utterance highlighted in the bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment to the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the pair of (question, answer) as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that can best entail the premise. In this sense, the part of MCQA task can be deemed as a NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and it can help support other tasks that require higher level of language processing abilities BIBREF21. We provided several more examples that require language inference reading skills in the Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data with the coarse-tuning stage. <<</Why does natural language inference help?>>> <<<Can other tasks help with MCQA?>>> By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something and in some cases, the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks such as sentiment classification, paraphrasing also help with MCQA problems? To answer this question, we select several representative datasets for five categories as the up-stream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments where we first train the BERT-Base models on each of the five categories and then further fine-tune our models on the target dataset: DREAM and MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k train examples) and the Yelp dataset (around 430k train examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For the span-based QA, we use the SQuAD 1.1, SQuAD 2.0 , and MRQA which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that sentiment analysis datasets do not help much with our target MCQA datasets. But the paraphrase datasets do bring some improvements for MCQA. For span-based QA, only SQuAD 2.0 helps to improve the performance of the target dataset. Interestingly, although MRQA is much larger than other QA datasets (at least six times larger), it makes the performance worst. This suggests that span-based QA might not the appropriate source tasks for transfer learning for MCQA. 
We hypothesis this could due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive) while all answers are extractive in the span-based QA datasets. For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI denoted as “NLI”, and combining all five datasets together, denoted as “GLUE-NLI”. As the results shown in Table TABREF23, NLI and GLUE-NLI are comparable and both can improve the target dataset by a large margin. Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on RACE dataset, can help boost the performance, most. This result agrees with the intuition that the in-domain dataset can be the most ideal data for transfer learning. In conclusion, we find that for out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of the MCQA systems. Besides, a larger in-domain dataset, i.e. another MCQA dataset, can also be very useful. <<</Can other tasks help with MCQA?>>> <<<NLI dataset helps with convergence>>> The first stage of coarse-tuning with NLI data can not only improve the accuracy but also help the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models that have much larger amount of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets , convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning can make the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning, the model does not converge at all at the first several epochs, which can be completely resolved by the help of NLI data. <<</NLI dataset helps with convergence>>> <<<Multi-stage or Multi-task>>> In a typical scenario where we have one source and one target dataset, we naturally have a question about whether we should simultaneously train a model on them via multi-task learning or first train on the source dataset then on the target sequentially. Many previous works adopted the latter way BIBREF19, BIBREF20, BIBREF23 and BIBREF20 demonstrated that the sequential fine-tuning approach outperforms the multi-task learning setting in their experiments. However, we had contradictory observations in our experiments. Specifically, we conducted a pair of control experiments: one is that we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune on the target dataset, and the other is that we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that compared with sequential fine-tuning, the multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some information or knowledge learned from the source dataset may be lost since the model is no longer exposed to the source dataset in this stage. 
In comparison, this information can be kept in the multi-task learning setting and thus can better help improve the target dataset. Now that the multi-task learning approach outperforms the sequential fine-tuning setting, we naturally arrive at another question: what if we merged the coarse-tuning and multi-task learning stages together? That is, what if we simultaneously trained the NLI, source, and target datasets altogether under the multi-task learning framework? We also conducted a pair of control experiments for investigation. The results in Table TABREF27, show that casting the fine-tuning process on three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework with coarse-tuning on out-of-domain datasets and fine-tuning on in-domain datesets. <<</Multi-stage or Multi-task>>> <<<Multi-steps reasoning is important>>> Previous results show that the MAN classifier shows improvement compared with the FCNN classifier, but we are also interested in how the performance change while varying the number of reasoning steps $K$ as shown in Figure FIGREF29. $K=0$ means that we do not use MAN but FCNN as the classifier. We observe that there is a gradual improvement as we increase $K=1$ to $K=5$, but after 5 steps the improvements have saturated. This verifies that an appropriate number of steps of reasoning is important for the memory network to reflect its benefits. <<</Multi-steps reasoning is important>>> <<<Could the source dataset be benefited?>>> So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets to help improve the targets. We also want to see whether our proposed techniques can also benefit the source dataset itself. Table TABREF31 summarizes the results of BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques can bring in improvements over the baseline model for the source dataset RACE, among which NLI coarse-tuning stage can help elevate the scores most. Since we found all parts of MMM can work well for the source dataset, we tried to use them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we listed the official report of scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained by the RoBERTa-Large encoder. <<</Could the source dataset be benefited?>>> <<<Error Analysis>>> In order to investigate how well our model performs for different types of questions, we did an error analysis by first randomly selecting 150 samples that had wrong predictions by the BERT-Base baseline model from the development set of DREAM dataset. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in the Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy of each question type in the last column of Table TABREF34. 
We find that our best model can improve upon every question type significantly especially for the matching problems, and most surprisingly, our best model can even greatly improve its ability on solving the arithmetic problems, achieving the accuracy of 73.7%. However, could our model really do math? To investigate this question, we sampled some arithmetic questions that are correctly predicted by our model, made small alterations to the passage or question, and then checked whether our model can still make correct choices. We found our model is very fragile to these minor alterations, implicating that the model is actually not that good at arithmetic problems. We provided one interesting example in the Section 3 of the Supplementary Material. <<</Error Analysis>>> <<</Discussion>>> <<<Related Work>>> There are increasing interests in machine reading comprehension (MRC) for question answering (QA). The extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers are still extractive in these datasets. The multi-choice QA datasets are collected either via crowd sourcing, or collected from examinations designed by educational experts BIBREF7. In this type of QA datasets, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5. Progress of research for MRC first relies on the breakthrough of the sentence encoder, from the basic LSTM to the pre-trained transformer based model BIBREF8, which has elevated the performance of all MRC models by a large margin. Besides, the attention mechanisms between the context and the query can empower the neural models with higher performance BIBREF11. In addition, some techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can be also helpful. Transfer learning has been widely proved to be effective across many domain in NLP. In the QA domain, the most well-known example of transfer learning would be fine-tuning the pre-trained language model such as BERT to the down-streaming QA datasets such as SQuAD BIBREF8. Besides, multi-task learning can also be deemed as a type of transfer learning, since during the training of multiple datasets from different domains for different tasks, knowledge will be shared and transferred from each task to others, which has been used to build a generalized QA model BIBREF30. However, no previous works have investigated that the knowledge from the NLI datasets can also be transferred to improve the MCQA task. <<</Related Work>>> <<<Conclusions>>> We propose MMM, a multi-stage multi-task transfer learning method on the multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also did detailed analysis to explore the importance of both our training strategies as well as different kinds of in-domain and out-of-domain datasets. We hope our work here can also shed light on new directions for other NLP domains. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Conclusions, Results" ], "type": "disordered_section" }
1910.00458
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension <<<Abstract>>> Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart where answers are usually spans of text within given passages. Moreover, most existing MCQA datasets are small in size, making the learning task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset to help model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate MMM significantly advances the state-of-the-art on four representative MCQA datasets. <<</Abstract>>> <<<Introduction>>> Building a system that comprehends text and answers questions is challenging but fascinating, which can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2, and HotPotQA BIBREF3. 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4, and MCTest BIBREF5. In comparison to extractive/abstractive QA tasks, the answers of the MCQA datasets are in the form of open, natural language sentences and not restricted to spans in text. Various question types exist such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore it requires more advanced reading skills for the machine to perform well on this task. Table TABREF1 shows one example from one of MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. 
As a result, the performance of machine readers on these tasks can more accurately gauge comprehension ability of a model. Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11. We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset). <<</Introduction>>> <<<Methods>>> In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. A MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$. <<<Model Architecture>>> Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, question and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence will be encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is then projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into the probability vector through a softmax layer. We choose the option with highest logit value $p$ as the answer. Cross entropy loss is used as the loss function. We used the pre-trained bidirectional transformer encoder, i.e., BERT and RoBERTa as the sentence encoder. 
The top-level classifier will be detailed in the next subsection. <<</Model Architecture>>> <<<Multi-step Attention Network>>> For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer full-connected neural network (FCNN), which consist of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for the down-streaming classification tasks and performs very well BIBREF8. Inspired from the success of the attention network widely used in the span-based QA task BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via the multi-step reasoning. The MAN classifier works as follows. A pair of question and answer option together is considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\in \mathbb {R}^{d\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\in \mathbb {R}^{d\times q}$. Alternatively, we can also encode the passage and (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them in a pair performs better. We then perform $K$-step reasoning over the memory to output the final prediction. Initially, the initial state $\mathbf {s}^0$ in step 0 is the summary of $H^P$ via self-attention: $\mathbf {s}^0=\sum _i \alpha _i H_i^P$, where $\alpha _i=\frac{exp(w_1^TH_i^P)}{\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \in {1,2,...,K-1}$, the state is calculated by: where $\mathbf {x}^k=\sum _i\beta _iH_i^{QO}$ and $\beta _i=\frac{exp(w_2^T[\mathbf {s}^{k-1};H_i^{QO}])}{\sum _j exp(w_2^T[\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state: Basically, the MAN classifier calculates the attention scores between the passage and (question, option) pair step by step dynamically such that the attention can refine itself through several steps of deliberation. The attention mechanism can help filter out irrelevant information in the passage against (question, option) pair. <<</Multi-step Attention Network>>> <<<Two Stage Training>>> We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10. <<<Coarse-tuning Stage>>> We first fine-tune the sentence encoder of our model with natural language inference (NLI) tasks. For exploration, we have also tried to fine-tune the sentence encoder on other types of tasks such as sentiment analysis, paraphrasing, and span-based question answering at this stage. However, we found that only NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details. <<</Coarse-tuning Stage>>> <<<Multi-task Learning Stage>>> After corase-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset together via multi-task learning. 
We share all model parameters including the sentence encoder as well as the top-level classifier for these two datasets. <<</Multi-task Learning Stage>>> <<</Two Stage Training>>> <<</Methods>>> <<<Experimental Setup>>> <<<Datasets>>> We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits. <<</Datasets>>> <<<Speaker Normalization>>> Passages in DREAM dataset are dialogues between two persons or more. Every utterance in a dialogue starts with the speaker name. For example, in utterance “m: How would he know?”, “m” is the abbreviation of “man” indicating that this utterance is from a man. More than 90% utterances have the speaker names as “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear for the model to learn which speaker the question is asking about, we used a speaker normalization strategy by replacing “w” or “f” with “woman” and “m” with “man” for the speaker names in the utterances. We found this simple strategy is quite effective, providing us with 1% improvement. We will always use this strategy for the DREAM dataset for our method unless explicitly mentioned. <<</Speaker Normalization>>> <<<Multi-task Learning>>> For the multi-task learning stage, at each training step, we randomly selected a dataset from the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps or the early stopping criterion has been met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17. <<</Multi-task Learning>>> <<<Training Details>>> We used a linear learning rate decay schedule with warm-up proportion of $0.1$. We set the dropout rate as $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for DREAM dataset and 0 for other datasets. The learning rate and number of training epochs vary for different datasets and encoder types, which are summarized in Section 1 of the Supplementary Material. More than 90% of passages have more than 512 words in the TOEFL dataset, which exceed the maximum sequence length that BERT supports, thus we cannot process the whole passage within one forward pass. To solve this issue, we propose the sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets and each snippet from the same passage will be assigned with the same label. In training phase, all snippets will be used for training, and in inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with highest logit value as the prediction. In experiments, we found the overlap of 256 words is the optimal, which can improve the BERT-Base model from accuracy of 50.0% to 53.2%. We adopted this sliding window strategy only for the TOEFL dataset. 
<<</Training Details>>> <<</Experimental Setup>>> <<<Results>>> We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%. We also test our method on three other MCQA datasets: MCTest including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method can improve the BERT-Large model by at least 10%. For both MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9 with only one difference that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than MC500. We think the reason is that the data size of MC160 is not enough to well fine-tune the large models with a huge amount of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of BERT and RoBERTa models on the small datasets so that the best performance of MC160 can even surpass that of MC500. This demonstrates the effectiveness of our method. To better understand why MMM can be successful, we conducted an ablation study be removing one feature at a time on the BERT-Base model. The results are shown in Table TABREF18. We see that the removal of the second stage multi-task learning part hurts our method most significantly, indicating that the majority of improvement is coming from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, which provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we have 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\sim $1% improvement. <<</Results>>> <<<Discussion>>> <<<Why does natural language inference help?>>> As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA. 
We conjecture one of the reasons is that, in order to pick the correct answer, we need to rely on the language inference capability in many cases. As an example in Table TABREF1, the utterance highlighted in the bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment to the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the pair of (question, answer) as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that can best entail the premise. In this sense, the part of MCQA task can be deemed as a NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and it can help support other tasks that require higher level of language processing abilities BIBREF21. We provided several more examples that require language inference reading skills in the Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data with the coarse-tuning stage. <<</Why does natural language inference help?>>> <<<Can other tasks help with MCQA?>>> By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something and in some cases, the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks such as sentiment classification, paraphrasing also help with MCQA problems? To answer this question, we select several representative datasets for five categories as the up-stream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments where we first train the BERT-Base models on each of the five categories and then further fine-tune our models on the target dataset: DREAM and MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k train examples) and the Yelp dataset (around 430k train examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For the span-based QA, we use the SQuAD 1.1, SQuAD 2.0 , and MRQA which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that sentiment analysis datasets do not help much with our target MCQA datasets. But the paraphrase datasets do bring some improvements for MCQA. For span-based QA, only SQuAD 2.0 helps to improve the performance of the target dataset. Interestingly, although MRQA is much larger than other QA datasets (at least six times larger), it makes the performance worst. This suggests that span-based QA might not the appropriate source tasks for transfer learning for MCQA. 
We hypothesis this could due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive) while all answers are extractive in the span-based QA datasets. For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI denoted as “NLI”, and combining all five datasets together, denoted as “GLUE-NLI”. As the results shown in Table TABREF23, NLI and GLUE-NLI are comparable and both can improve the target dataset by a large margin. Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on RACE dataset, can help boost the performance, most. This result agrees with the intuition that the in-domain dataset can be the most ideal data for transfer learning. In conclusion, we find that for out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of the MCQA systems. Besides, a larger in-domain dataset, i.e. another MCQA dataset, can also be very useful. <<</Can other tasks help with MCQA?>>> <<<NLI dataset helps with convergence>>> The first stage of coarse-tuning with NLI data can not only improve the accuracy but also help the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models that have much larger amount of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets , convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning can make the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning, the model does not converge at all at the first several epochs, which can be completely resolved by the help of NLI data. <<</NLI dataset helps with convergence>>> <<<Multi-stage or Multi-task>>> In a typical scenario where we have one source and one target dataset, we naturally have a question about whether we should simultaneously train a model on them via multi-task learning or first train on the source dataset then on the target sequentially. Many previous works adopted the latter way BIBREF19, BIBREF20, BIBREF23 and BIBREF20 demonstrated that the sequential fine-tuning approach outperforms the multi-task learning setting in their experiments. However, we had contradictory observations in our experiments. Specifically, we conducted a pair of control experiments: one is that we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune on the target dataset, and the other is that we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that compared with sequential fine-tuning, the multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some information or knowledge learned from the source dataset may be lost since the model is no longer exposed to the source dataset in this stage. 
In comparison, this information can be kept in the multi-task learning setting and thus can better help improve the target dataset. Now that the multi-task learning approach outperforms the sequential fine-tuning setting, we naturally arrive at another question: what if we merged the coarse-tuning and multi-task learning stages together? That is, what if we simultaneously trained the NLI, source, and target datasets altogether under the multi-task learning framework? We also conducted a pair of control experiments for investigation. The results in Table TABREF27, show that casting the fine-tuning process on three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework with coarse-tuning on out-of-domain datasets and fine-tuning on in-domain datesets. <<</Multi-stage or Multi-task>>> <<<Multi-steps reasoning is important>>> Previous results show that the MAN classifier shows improvement compared with the FCNN classifier, but we are also interested in how the performance change while varying the number of reasoning steps $K$ as shown in Figure FIGREF29. $K=0$ means that we do not use MAN but FCNN as the classifier. We observe that there is a gradual improvement as we increase $K=1$ to $K=5$, but after 5 steps the improvements have saturated. This verifies that an appropriate number of steps of reasoning is important for the memory network to reflect its benefits. <<</Multi-steps reasoning is important>>> <<<Could the source dataset be benefited?>>> So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets to help improve the targets. We also want to see whether our proposed techniques can also benefit the source dataset itself. Table TABREF31 summarizes the results of BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques can bring in improvements over the baseline model for the source dataset RACE, among which NLI coarse-tuning stage can help elevate the scores most. Since we found all parts of MMM can work well for the source dataset, we tried to use them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we listed the official report of scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained by the RoBERTa-Large encoder. <<</Could the source dataset be benefited?>>> <<<Error Analysis>>> In order to investigate how well our model performs for different types of questions, we did an error analysis by first randomly selecting 150 samples that had wrong predictions by the BERT-Base baseline model from the development set of DREAM dataset. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in the Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy of each question type in the last column of Table TABREF34. 
We find that our best model can improve upon every question type significantly especially for the matching problems, and most surprisingly, our best model can even greatly improve its ability on solving the arithmetic problems, achieving the accuracy of 73.7%. However, could our model really do math? To investigate this question, we sampled some arithmetic questions that are correctly predicted by our model, made small alterations to the passage or question, and then checked whether our model can still make correct choices. We found our model is very fragile to these minor alterations, implicating that the model is actually not that good at arithmetic problems. We provided one interesting example in the Section 3 of the Supplementary Material. <<</Error Analysis>>> <<</Discussion>>> <<<Related Work>>> There are increasing interests in machine reading comprehension (MRC) for question answering (QA). The extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers are still extractive in these datasets. The multi-choice QA datasets are collected either via crowd sourcing, or collected from examinations designed by educational experts BIBREF7. In this type of QA datasets, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5. Progress of research for MRC first relies on the breakthrough of the sentence encoder, from the basic LSTM to the pre-trained transformer based model BIBREF8, which has elevated the performance of all MRC models by a large margin. Besides, the attention mechanisms between the context and the query can empower the neural models with higher performance BIBREF11. In addition, some techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can be also helpful. Transfer learning has been widely proved to be effective across many domain in NLP. In the QA domain, the most well-known example of transfer learning would be fine-tuning the pre-trained language model such as BERT to the down-streaming QA datasets such as SQuAD BIBREF8. Besides, multi-task learning can also be deemed as a type of transfer learning, since during the training of multiple datasets from different domains for different tasks, knowledge will be shared and transferred from each task to others, which has been used to build a generalized QA model BIBREF30. However, no previous works have investigated that the knowledge from the NLI datasets can also be transferred to improve the MCQA task. <<</Related Work>>> <<<Conclusions>>> We propose MMM, a multi-stage multi-task transfer learning method on the multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also did detailed analysis to explore the importance of both our training strategies as well as different kinds of in-domain and out-of-domain datasets. We hope our work here can also shed light on new directions for other NLP domains. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Abstract, Introduction" ], "type": "disordered_section" }
2001.11268
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks <<<Abstract>>> This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework, but data extraction is not limited to these fields. Recent neural network architectures based on transformers show capacities for transfer learning and increased performance on downstream natural language processing tasks such as universal reading comprehension, brought forward by this architecture's use of contextualized word embeddings and self-attention mechanisms. This paper contributes to solving problems related to ambiguity in PICO sentence prediction tasks, as well as highlighting how annotations for training named entity recognition systems are used to train a high-performing, but nevertheless flexible architecture for question answering in systematic review automation. Additionally, it demonstrates how the problem of insufficient amounts of training annotations for PICO entity extraction is tackled by augmentation. All models in this paper were created with the aim to support systematic review (semi)automation. They achieve high F1 scores, and demonstrate the feasibility of applying transformer-based classification methods to support data mining in the biomedical literature. <<</Abstract>>> <<<INTRODUCTION>>> Systematic reviews (SR) of randomized controlled trials (RCTs) are regarded as the gold standard for providing information about the effects of interventions to healthcare practitioners, policy makers and members of the public. The quality of these reviews is ensured through a strict methodology that seeks to include all relevant information on the review topic BIBREF0. A SR, as produced by the quality standards of Cochrane, is conducted to appraise and synthesize all research for a specific research question, therefore providing access to the best available medical evidence where needed BIBREF1. The research question is specified using the PICO (population; intervention; comparator; outcomes) framework. The researchers conduct very broad literature searches in order to retrieve every piece of clinical evidence that meets their review's inclusion criteria, commonly all RCTs of a particular healthcare intervention in a specific population. In a search, no piece of relevant information should be missed. In other words, the aim is to achieve a recall score of one. This implies that the searches are broad BIBREF2, and authors are often left to screen a large number of abstracts manually in order to identify a small fraction of relevant publications for inclusion in the SR BIBREF3. The number of RCTs is increasing, and with it increases the potential number of reviews and the amount of workload that is implied for each. 
Research on the basis of PubMed entries shows that both the number of publications and the number of SRs increased rapidly in the last ten years BIBREF4, which is why acceleration of the systematic reviewing process is of interest in order to decrease working hours of highly trained researchers and to make the process more efficient. In this work, we focus on the detection and annotation of information about the PICO elements of RCTs described in English PubMed abstracts. In practice, the comparators involved in the C of PICO are just additional interventions, so we often refer to PIO (populations; interventions; outcomes) rather than PICO. Focus points for the investigation are the problems of ambiguity in labelled PIO data, integration of training data from different tasks and sources and assessing our model's capacity for transfer learning and domain adaptation. Recent advances in natural language processing (NLP) offer the potential to be able to automate or semi-automate the process of identifying information to be included in a SR. For example, an automated system might attempt to PICO-annotate large corpora of abstracts, such as RCTs indexed on PubMed, or assess the results retrieved in a literature search and predict which abstract or full text article fits the inclusion criteria of a review. Such systems need to be able to classify and extract data of interest. We show that transformer models perform well on complex data-extraction tasks. Language models are moving away from the semantic, but static representation of words as in Word2Vec BIBREF5, hence providing a richer and more flexible contextualized representation of input features within sentences or long sequences of text. The rest of this paper is organized as follows. The remainder of this section introduces related work and the contributions of our work. Section 2 describes the process of preparing training data, and introduces approaches to fine-tuning for sentence classification and question answering tasks. Results are presented in section 3, and section 4 includes a critical evaluation and implications for practice. <<<Tools for SR automation and PICO classification>>> The website systematicreviewtools.com BIBREF6 lists 36 software tools for study selection to date. Some tools are intended for organisational purposes and do not employ PICO classification, such as Covidence BIBREF7. The tool Rayyan uses support vector machines BIBREF8. RobotReviewer uses neural networks, word embeddings and recently also a transformer for named entity recognition (NER) BIBREF9. Question answering systems for PICO data extraction exist based on matching words from knowledge bases, hand-crafted rules and naïve Bayes classification, both on entity and sentence level BIBREF10, BIBREF11, but commonly focus on providing information to practicing clinicians rather than systematic reviewers BIBREF12. In the following we introduce models related to our sentence and entity classification tasks and the data on which our experiments are based. We made use of previously published training and testing data in order to ensure comparability between models. <<</Tools for SR automation and PICO classification>>> <<<Sentence classification data>>> In the context of systematic review (semi)automation, sentence classification can be used in the screening process, by highlighting relevant pieces of text. A long short-term memory (LSTM) neural network trained with sentences of structured abstracts from PubMed was published in 2018 BIBREF13. 
It uses a pre-trained Word2Vec embedding in order to represent each input word as a fixed vector. Due to the costs associated with labelling, its authors acquired sentence labels via automated annotation. Seven classes were assigned on the basis of structured headings within the text of each abstract. Table TABREF4 provides an overview of class abbreviations and their meaning. In the following we refer to it as the PubMed data. The LSTM itself yields impressive results, with F1 scores for annotation of up to 0.85 for PIO elements; it generalizes across domains and assigns one label per sentence. We were able to confirm these scores by replicating a local version of this model. <<</Sentence classification data>>> <<<Question answering data>>> <<<SQuAD>>> The Stanford Question Answering Dataset (SQuAD) is a reading-comprehension dataset for machine learning tasks. It contains question contexts, questions and answers and is available in two versions. The older version contains only questions that can be answered based on the given context. In its newer version, the dataset also contains questions which cannot be answered on the basis of the given context. The SQuAD creators provide an evaluation script, as well as a public leader board to compare model performances BIBREF14. <<</SQuAD>>> <<<Ebm-nlp>>> In the PICO domain, the potential of NER was shown by Nye and colleagues using transformers, as well as LSTMs and conditional random fields BIBREF15. In the following, we refer to these data as the ebm-nlp corpus. The ebm-nlp corpus provided us with 5000 tokenized and annotated RCT abstracts for training, and 190 expert-annotated abstracts for testing. Annotations in this corpus include PIO classes, as well as more detailed information such as age, gender or medical condition. We adapted the human-annotated ebm-nlp corpus of abstracts for training our QA-BERT question answering system. <<</Ebm-nlp>>> <<</Question answering data>>> <<<Introduction to transformers>>> In the following, the bidirectional encoder representations from transformers (BERT) architecture is introduced BIBREF16. This architecture's key strengths are rooted in both feature representation and training. A good feature representation is essential to ensure any model's performance, but often data sparsity in the unsupervised training of embedding mechanisms leads to losses in overall performance. By employing a word piece vocabulary, BERT eliminated the problem of previously unseen words. Any word that is not present in the initial vocabulary is split into sub-word units from this vocabulary. Especially in the biomedical domain this enables richer semantic representations of words describing rare chemical compounds or conditions. A relevant example is the phrase ’two drops of ketorolac tromethamine’, where the initial three words stay intact, while the last words are tokenized to ’ket’, ’#oro’, ’#lac’, ’tro’, ’#meth’, ’#amine’, hence enabling the following model to focus on relevant parts of the input sequence, such as syllables that indicate chemical compounds. When obtaining a numerical representation for its inputs, transformers apply a ’self-attention’ mechanism, which leads to a contextualized representation of each word with respect to its surrounding words. BERT's weights are pre-trained in an unsupervised manner, based on large corpora of unlabelled text and two pre-training objectives. To achieve bidirectionality, its first pre-training objective includes prediction of randomly masked words.
Secondly, a next-sentence prediction task trains the model to capture long-term dependencies. Pre-training is computationally expensive but needs to be carried out only once before sharing the weights together with the vocabulary. Fine-tuning to various downstream tasks can be carried out on the basis of comparatively small amounts of labelled data, by changing the upper layers of the neural network to classification layers for different tasks. SCIBERT is a model based on the BERT-base architecture, with further pre-trained weights based on texts from the Semantic Scholar search engine BIBREF17. We used these weights as one of our three starting points for fine-tuning a sentence classification architecture BIBREF18. Furthermore, BERT-base (uncased) and BERT multilingual (cased, base architecture) were included in the comparison BIBREF16. <<</Introduction to transformers>>> <<<Weaknesses in the previous sentence classification approach>>> In the following, we discuss weaknesses in the PubMed data and in LSTM models trained on this type of labelled data. LSTM architectures commonly employ a trimmed version of Word2Vec embeddings as their embedding layer. In our case, this leads to 20% of the input data being represented by generic `Unknown' tokens. These words are missing because they occur so rarely that no embedding vector was trained for them. Trimming means that the available embedding vocabulary is then further reduced to the known words of the training, development and testing data, in order to save memory and increase speed. The percentage of unknown tokens is likely to increase when predicting on previously unseen and unlabelled data. We tested our locally trained LSTM on 5000 abstracts from a study-based register BIBREF19 and found that 36% of all unique input features did not have a known representation. In the case of the labelled training and testing data itself, automatic annotation carries the risk of producing wrongly labelled data. But it also enables the training of neural networks in the first place, because manual gold standard annotations for a project on the scale of an LSTM are expensive and time-consuming to produce. As we show later, the automated annotation technique causes noise in the evaluation because, as the network learns, it can assign correct tags to wrongly labelled data. We also show that sentence labels are often ambiguous, and that the assignment of a single label limits the quality of the predictions for their use in real-world reviewing tasks. We acknowledge that the assignment of classes such as `Results' or `Conclusions' to sentences is potentially valuable for many use-cases. However, those sentences can contain additional information related to the PICO classes of interest. In the original LSTM-based model the A, M, R, and C data classes in Table TABREF4 are utilized for sequence optimization, which leads to increased classification scores. Their potential PICO content is neglected, although it represents crucial information in real-world reviewing tasks. A general weakness of predicting labels for whole sentences is the practical usability of the predictions. We will show sentence highlighting as a potential use-case for focusing readers' attention on passages of interest. However, the data obtained through this method are not fine-grained enough for use in data extraction, or in pipelines for automated evidence synthesis.
Therefore, we expand our experiments to include QA-BERT, a question-answering model that predicts the locations of PICO entities within sentences. <<</Weaknesses in the previous sentence classification approach>>> <<<Contributions of this research>>> In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data. In the second fine-tuning approach, we apply a question answering architecture to the task of data extraction. Previous models for PICO question answering relied on vast knowledge bases and hand-crafted rules. Our fine-tuning approach shows that an abstract as context, together with a combination of annotated PICO entities and SQuAD data can result in a system that outperforms contemporary entity recognition systems, while retaining general reading comprehension capabilities. <<</Contributions of this research>>> <<</INTRODUCTION>>> <<<METHODOLOGY>>> <<<Feature representation and advantages of contextualization>>> A language processing model's performance is limited by its capability of representing linguistic concepts numerically. In this preliminary experiment, we used the PubMed corpus for sentence classification to show the quality of PICO sentence embeddings retrieved from BERT. We mapped a random selection of 3000 population, intervention, and outcome sentences from the PubMed corpus to BERT-base uncased and SCIBERT. This resulted in each sentence being represented by a fixed length vector of 768 dimensions in each layer respectively, as defined by the model architecture's hidden size. These vectors can be obtained for each of the network's layers, and multiple layers can be represented together by concatenation and pooling. We used the t-distributed Stochastic Neighbour Embedding (t-SNE) algorithm to reduce each layer-embedding into two-dimensional space, and plotted the resulting values. Additionally, we computed adjusted rand scores in order to evaluate how well each layer (or concatenation thereof, always using reduce_mean pooling) represents our input sequence. The rand scores quantify the extent to which a naïve K-means (N=3) clustering algorithm in different layers alone led to correct grouping of the input sentences. <<</Feature representation and advantages of contextualization>>> <<<Sentence classification>>> <<<Preparation of the data>>> We used the PubMed corpus to fine-tune a sentence classification architecture. Class names and abbreviations are displayed in Table TABREF4. The corpus was supplied in pre-processed form, comprising 24,668 abstracts. 
For more information about the dataset we refer to its original publication BIBREF13. Because of the PICO framework, methods for systematic review (semi)automation commonly focus on P, I, and O detection. A, M, R, and C classes are an additional feature of this corpus. They were included in the following experiment because they represent important information in abstracts and they occur in the vast majority of published trial texts. Their exclusion can lead to false classification of sentences in full abstracts. In a preliminary experiment we summarized A, M, R, and C sentences as a generic class named ’Other’ in order to shift the model's focus to PIO classes. This resulted in high class imbalance, inferior classification scores and a loss of ability to predict these classes when supporting systematic reviewers during the screening process. In the following, abstracts that did not include a P, I, and O label were excluded. This left a total of 129,095 sentences for training, and 14,344 for testing (90:10 split). <<</Preparation of the data>>> <<<Fine-tuning>>> We carried out fine-tuning for sentence classification based on BERT-base (uncased), multilingual BERT (cased), and on SCIBERT. We changed the classification layer on top of the original BERT model. It remains a linear, fully connected layer but now employs the sigmoid cross-entropy with logits loss function for optimization. During training, this layer is optimised for predicting probabilities over all seven possible sentence labels. Therefore, this architecture enables multi-class, multi-label predictions. In comparison, the original BERT fine-tuning approach for sentence classification employed a softmax layer in order to obtain multi-class, single-label predictions of the most probable class only. During the training process the model then predicts class labels from Table 1 for each sentence. After each training step, backpropagation then adjusts the model's internal weights. To save GPU resources, a maximal sequence length of 64, batch size 32, learning rate of $2\times 10^{-5}$, a warm-up proportion of 0.1 and two epochs for training were used. <<</Fine-tuning>>> <<<Post-training assignment of classes>>> In the scope of the experiments for this paper, the model returns probabilities for the assignment of each class for every sentence. These probabilities were used to show the effects of different probability thresholds (or simply assignment to the most probable class) on recall, precision and F1 scores. The number of classes was set to 7, thereby making use of the full PubMed dataset. <<</Post-training assignment of classes>>> <<</Sentence classification>>> <<<Question answering>>> <<</Question answering>>> <<</METHODOLOGY>>> <<<RESULTS>>> <<<Feature representation and contextualization>>> Figure FIGREF23 shows the dimensionality-reduced vectors for 3000 sentences in BERT-base, along with the positions of three exemplary sentences. All three examples were labelled as 'P' in the gold standard. This visualization highlights overlaps between the sentence data and ambiguity or noise in the labels. Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflects the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network.
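To make the layer-wise comparison reported below easier to follow, here is a minimal sketch of how sentence embeddings, K-means clustering and adjusted rand scores of the kind described in the methodology could be computed. It assumes the HuggingFace transformers and scikit-learn packages; the model name, the toy sentences and gold labels, and the pooling choices are illustrative assumptions rather than the authors' released code.

import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics import adjusted_rand_score

# Toy stand-ins for the 3000 sampled P/I/O sentences and their gold class ids
sentences = [
    "Patients with type 2 diabetes were recruited from five clinics.",
    "The intervention group received 10 mg of the study drug daily.",
    "The primary outcome was change in HbA1c after 12 weeks.",
]
labels = [0, 1, 2]

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

with torch.no_grad():
    enc = tok(sentences, padding=True, truncation=True, max_length=64, return_tensors="pt")
    hidden_states = model(**enc).hidden_states  # embedding layer plus one 768-d tensor per layer

def layer_embedding(layer_indices):
    # Mean-pool over tokens for each selected layer, then concatenate layers (reduce_mean pooling)
    pooled = [hidden_states[i].mean(dim=1) for i in layer_indices]
    return torch.cat(pooled, dim=-1).numpy()

for layers in ([-1], [-2, -1]):  # a single layer vs. a concatenation of the last two layers
    emb = layer_embedding(layers)
    pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)
    print(layers, "adjusted rand score:", adjusted_rand_score(labels, pred))
    coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(emb)  # 2-D points for plotting

Swapping the model name for a SCIBERT checkpoint and varying the layer indices yields the kind of layer-wise comparison discussed in the following paragraph.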
Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base. <<</Feature representation and contextualization>>> <<</RESULTS>>> <<<DISCUSSION>>> In this work, we have shown possibilities for sentence classification and data extraction of PICO characteristics from abstracts of RCTs. For sentence classification, models based on transformers can predict multiple labels per sentence, even if trained on a corpus that assigns a single label only. Additionally, these architectures show a great level of flexibility with respect to adjusting precision and recall scores. Recall is an important metric in SR tasks, and the architectures proposed in this paper enable a post-classification trade-off setting that can be adjusted in the process of supporting reviewers in real-world reviewing tasks. However, tagging whole sentences with respect to populations, interventions and outcomes might not be an ideal method to advance systematic review automation. Identifying a sentence's tag could be helpful for highlighting abstracts from literature searches. This focuses the reader's attention on sentences, but is less helpful for automatically determining whether a specific entity (e.g. the drug aspirin) is mentioned. Our implementation of the question answering task has shown that a substantial number of PICO entities can be identified in abstracts on a token level. This is an important step towards reliable systematic review automation. With our provided code and data, the QA-BERT model can be swapped for more advanced transformer architectures, including XLM, XLNet, DistilBERT and ALBERT pre-trained models. More detailed investigations into multilingual predictions BIBREF26, pre-processing, and predicting more than one PICO per sentence are reserved for future work. <<<Limitations>>> Limitations in the automatically annotated PubMed training data mostly consist of incomplete detection or noisy P, I, and O entities due to the single labelling. We did not have access to annotated multilingual PICO corpora for testing, and therefore tested the model on German abstracts found on PubMed, as well as Chinese data provided by the Cochrane Schizophrenia Group. For the question answering, we limited the number of original SQuAD domains used to enrich our data. This was done in order to save computing resources, as an addition of 100 SQuAD domains resulted in training time increases of two hours, depending on various other parameter settings. Adjusted parameters include increased batch size, and decreased maximal context length in order to reduce training time. <<</Limitations>>> <<</DISCUSSION>>> <<<CONCLUSION>>> With this paper we aimed to explore state-of-the-art NLP methods to advance systematic review (semi)automation. Both of the presented fine-tuning approaches for transformers demonstrated flexibility and high performance. We contributed an approach to deal with ambiguity in whole-sentence predictions, and proposed the usage of a completely different approach to entity recognition in settings where training data are sparse. In conclusion we wish to emphasize our argument that for future applications, interoperability is important.
Instead of developing yet another stand-alone organizational interface with a machine learning classifier that works on limited data only, the focus should be to develop and train cross-domain and neural models that can be integrated into the backend of existing platforms. The performance of these models should be comparable on standardized datasets, evaluation scripts and leader boards. The logical next step, which remains less explored in the current literature because of its complexity, is the task of predicting an RCT's included or excluded status on the basis of PICOs identified in its text. For this task, more complex architectures that include drug or intervention ontologies could be integrated. Additionally, information from already completed reviews could be re-used as training data. <<</CONCLUSION>>> <<</Title>>>
{ "references": [ "INTRODUCTION, DISCUSSION" ], "type": "disordered_section" }
1909.08824
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder <<<Abstract>>> Understanding events and event-centered commonsense reasoning are crucial for natural language processing (NLP). Given an observed event, it is trivial for humans to infer its intents and effects, while this type of If-Then reasoning still remains challenging for NLP systems. To facilitate this, an If-Then commonsense reasoning dataset, Atomic, is proposed, together with an RNN-based Seq2Seq model to conduct such reasoning. However, two fundamental problems still need to be addressed: first, the intents of an event may be multiple, while the generations of RNN-based Seq2Seq models are always semantically close; second, external knowledge of the event background may be necessary for understanding events and conducting the If-Then reasoning. To address these issues, we propose a novel context-aware variational autoencoder that effectively learns event background information to guide the If-Then reasoning. Experimental results show that our approach improves the accuracy and diversity of inferences compared with state-of-the-art baseline methods. <<</Abstract>>> <<<Introduction>>> Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because understanding events is an important component of NLP. Given a daily-life event, humans can easily understand it and reason about its causes, effects, and so on. However, it still remains a challenging task for NLP systems. This is partly because most of them are trained on task-specific datasets or objectives, which results in models that are adept at finding task-specific underlying correlation patterns but have limited capability in simple and explainable commonsense reasoning BIBREF4. To facilitate this, BIBREF5 (BIBREF5) build the Event2Mind dataset and BIBREF4 (BIBREF4) present the Atomic dataset, which mainly focus on nine If-Then reasoning types to describe causes, effects, intents and participant characteristics of events. Together with these datasets, a simple RNN-based encoder-decoder framework is proposed to conduct the If-Then reasoning. However, there still remain two challenging problems. First, as illustrated in Figure FIGREF1, given an event “PersonX finds a new job”, the plausible feelings of PersonX about that event could be multiple (such as “needy/stressed out” and “relieved/joyful”). Previous work showed that for the one-to-many problem, conventional RNN-based encoder-decoder models tend to generate generic responses, rather than meaningful and specific answers BIBREF6, BIBREF7. Second, as a commonsense reasoning problem, rich background knowledge is necessary for generating reasonable inferences. For example, as shown in Figure FIGREF1, the feelings of PersonX upon the event “PersonX finds a new job” could be multiple. However, given the context “PersonX was fired”, the plausible inferences would be narrowed down to “needy” or “stressed out”. To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure.
Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generating diversified inferences BIBREF8, BIBREF9. In addition to the traditional VAE structure, we introduce an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (which consists of three narrative story corpora and contains rich event background knowledge) to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of the If-Then inferential target (e.g., intents, reactions, etc.). Experiments on the Event2Mind and Atomic datasets show that our proposed approach outperforms baseline methods in both the accuracy and diversity of inferences. The code is released at https://github.com/sjcfr/CWVAE. <<</Introduction>>> <<<Background>>> Before specifically describing the two datasets used in this paper, Event2Mind and Atomic, as well as the If-Then reasoning task, for clarity we define the following terminologies: Base event: the prerequisite event in If-Then reasoning, organized as a verb phrase with a predicate and its arguments, such as the event “PersonX finds a new job” shown in Figure FIGREF1. Inference dimension: a particular If-Then reasoning type, e.g., intents or effects of the base event. Details are shown in Table TABREF2 and Table TABREF3. Target: the inferential results. For example, as shown in Figure FIGREF1, given a base event “PersonX finds a new job” and one inference dimension “xReact”, the targets could be “relieved” or “needy”. Notice that each inference dimension can have multiple targets. Event2Mind Dataset contains 25K base events and 300K targets, annotated through crowdsourcing. Event2Mind is organized in a hierarchical form: each base event has three types of inference dimensions, and given a base event, under one of the inference dimensions, several targets may simultaneously exist. Table TABREF2 shows the (base event-inference dimension-target) hierarchical structure through an example from Event2Mind. Atomic Dataset Inspired by Event2Mind, the Atomic dataset shares the same hierarchical structure as Event2Mind, while scaling up the size of the dataset and expanding the scope to nine types of inference dimensions. Table TABREF3 shows the (base event-inference dimension-target) hierarchical structure through an example from Atomic. Though Atomic covers the inference dimensions of Event2Mind, the base event collection of Event2Mind is nonidentical to that of Atomic. Problem Definition The If-Then reasoning task could be formally defined as a conditional one-to-many generation problem: given a base event $x$ and one inference dimension $d$, the model is required to generate targets $y=f(x, d)$ as close to the ground truths as possible. Both $x$ and $y$ consist of sequences of words: $x=\lbrace x_1,\dots , x_{m}\rbrace $, and $y=\lbrace y_1,\dots , y_{n}\rbrace $, where $m$ and $n$ denote the lengths of $x$ and $y$, respectively. Conditional Variational Autoencoder The variational autoencoder (VAE) defines a generative framework suited for the one-to-many generation problem BIBREF10. The conditional variational autoencoder (CVAE) BIBREF11 is an extension of VAE to the conditional generation problem.
As shown in Figure FIGREF5 (a), CVAE characterizes the conditional one-to-many generation problem using three random variables: event $x$, target $y$ and a latent variable $z$, which is used for modeling the latent semantic distribution over targets given an event. Hence, under a certain inference dimension, with regard to the latent semantic variable $z$, the conditional generation problem could be expressed as $p(y|x)=\int p(y|x,z)p(z|x)dz$. CVAE models $p(y|x,z)$ and $p(z|x)$ using deep neural networks (parameterized by $\theta $) $p_{\theta }(y|x,z)$ and $p_{\theta }(z|x)$. Then as illustrated in Figure FIGREF5 (b), $y$ could be generated from $x$ and $z$. CVAE is trained to maximize the conditional likelihood $p(y|x)$, which involves an intractable marginalization over the latent variable $z$. Instead, following BIBREF10 (BIBREF10), a practical way is to introduce another deep network (parameterized by $\phi $) $q_{\phi }(z|x,y)$ to approximate the true posterior distribution $p(z|x,y)$ and maximize the evidence lower bound (ELBO) of the log-likelihood function: $L^{ELBO}=\mathbb {E}_{q_{\phi }(z|x,y)}\left[\log p_{\theta }(y|x,z)\right]-\mathrm {KL}\left(q_{\phi }(z|x,y)\Vert p_{\theta }(z|x)\right)\le \log p(y|x)$. Therefore, CVAE is composed of three neural networks in general. We refer to $p_{\theta }(z|x)$ as a prior network, $q_{\phi }(z|x,y)$ as a recognition network, and $p_{\theta }(y|x,z)$ as a neural decoder. <<</Background>>> <<<Context-aware Variational Autoencoder>>> Traditional CVAE can model the event-target relation. In other words, given an observed event, CVAE can generate its corresponding targets. In this paper, however, we model the If-Then reasoning as a [(background), event]-target process. This means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate reasonable targets. To this end, we propose a context-aware variational autoencoder (CWVAE), with two additional latent variables: a context-acquiring latent variable $z_c$ to directly acquire context information, and a context-aware latent variable $z_{c^{\prime }}$ to learn background knowledge from $z_c$, as shown in Figure FIGREF6 (a). However, the event context information is absent in the Event2Mind and Atomic datasets. To learn from the external event context information, we design the following two-stage training procedure for CWVAE. Pretrain: Learning Event Background Knowledge from Auxiliary Dataset In the pretrain stage, CWVAE is trained on three narrative story corpora with rich event context information. As shown in Figure FIGREF6 (a), the context-acquiring latent variable $z_c$ is directly conditioned on the context $c$. Hence, $z_c$ could be employed for acquiring background knowledge from event contexts. Then, we minimize the distance between $z_c$ and the context-aware latent variable $z_{c^{\prime }}$, by which the event background knowledge is transferred from $z_c$ to $z_{c^{\prime }}$. Finetune: Adapt Event Background Knowledge to Each Inference Dimension In the finetune stage, as shown in Figure FIGREF6 (b), CWVAE is trained on the Event2Mind and Atomic datasets without the event context information. The pretrained CWVAE is finetuned to learn the specific inferential knowledge of each inference dimension. After the training procedure, as shown in Figure FIGREF6 (c), samples of $z$ are generated based on $x$ and samples of $z_{c^{\prime }}$, where $z_{c^{\prime }}$ contains rich event background knowledge helpful for If-Then reasoning.
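As a concrete illustration of the CVAE objective just described, the following sketch shows a simplified, PyTorch-style negative-ELBO loss built from a recognition network $q_{\phi }(z|x,y)$, a prior network $p_{\theta }(z|x)$ and a neural decoder $p_{\theta }(y|x,z)$. The network interfaces and tensor shapes are assumptions made for illustration; this is not the authors' released CWVAE code, which additionally involves the latent variables $z_c$ and $z_{c^{\prime }}$ introduced below.

import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over latent dimensions
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps the sampling step differentiable
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def cvae_negative_elbo(recognition_net, prior_net, decoder, x, y):
    mu_q, logvar_q = recognition_net(x, y)   # parameters of q_phi(z | x, y)
    mu_p, logvar_p = prior_net(x)            # parameters of p_theta(z | x)
    z = reparameterize(mu_q, logvar_q)
    logits = decoder(x, z)                   # p_theta(y | x, z): (batch, seq_len, vocab) token logits
    reconstruction = F.cross_entropy(
        logits.transpose(1, 2), y, reduction="none"
    ).sum(dim=-1)                            # negative log-likelihood of the target sequence
    kl = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
    return (reconstruction + kl).mean()      # minimizing this maximizes the ELBO

During training one would call cvae_negative_elbo inside the usual optimizer loop; at inference time, z is instead sampled from the prior network alone, matching the generation process in Figure FIGREF5 (b).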
<<<Architecture of CWVAE>>> As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$ and $q_{\phi }(z|z_{c^{\prime }}, x)$, a prior network for modeling $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\prime }}$ to generate targets. Neural Encoder We employ a bidirectional GRU as the neural encoder, which encodes context $c$, event $x$ and target $y$ into distributed representations $h^c=\lbrace h_1^c,\dots ,h_{l_c}^c\rbrace $, $h^x=\lbrace h_1^x,\dots ,h_{l_x}^x\rbrace $ and $h^y=\lbrace h_1^y,\dots ,h_{l_y}^y\rbrace $, where $l_c$, $l_x$ and $l_y$ are the lengths of $c$, $x$ and $y$, respectively. Recognition Network The recognition network models $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$, $q_{\phi }(z|z_{c^{\prime }}, x)$ based on $h^x$, $h^y$ and $h^c$. Following traditional VAE, the above-mentioned three distributions are assumed to be multivariate Gaussian distributions with a diagonal covariance structure, i.e., of the form $\mathcal {N}(\mu , \sigma ^{2}I)$, where $\mu $ denotes the mean of the distribution, $\sigma $ denotes the standard deviation of the distribution, and $I$ denotes the identity matrix. Given $h^x$, $h^y$ and $h^c$, we propose a novel attention-based inferer (ABI) module to estimate the mean and standard deviation of $q_{\phi }(z_{c}|x,c)$, $q_{\phi }(z_{c^{\prime }}|x,y)$ and $q_{\phi }(z|x,y)$: Briefly, through the attention mechanism, ABI can capture the semantic interaction between input sequences, and estimate the parameters of distributions based on it. We will introduce the specific structure of ABI below. Prior Network The prior network models $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ based on $h^x$. The distributions $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ are still assumed to be multivariate Gaussian, of the form $\mathcal {N}(\mu ^{\prime }, \sigma ^{\prime 2}I)$, whereas the parameters are different: here $\mu ^{\prime }$ denotes the mean of the distribution, $\sigma ^{\prime }$ denotes the standard deviation of the distribution and $I$ denotes the identity matrix. Then the attention-based inferer module is still employed to estimate parameters of distributions: Neural Decoder Given the base event $x$, the semantic latent variable $z$, and the context-aware latent variable $z_{c^{\prime }}$, the neural decoder defines the generation probability of $y$ as follows: $p(y|x,z,z_{c^{\prime }})=\prod _{j=1}^{n}p(y_j|y_{<j}, z, z_{c^{\prime }}, x)$, where $p(y_j|y_{<j}, z, z_{c^{\prime }}, x)=g(y_{j-1}, s_{j-1}, e_j)$, $g(\cdot )$ is an attention-based feed forward model, $e_j=\sum _i \alpha _{ji}h_i^{x}$ is the context vector and $s_{j-1}$ is the hidden state of the decoder. We obtain $g(\cdot )$ and $e_j$ in the same way as BIBREF12 (BIBREF12). However, our decoder differs from BIBREF12 (BIBREF12) in that it integrates the context-aware latent variable $z_{c^{\prime }}$ and the semantic latent variable $z$ in the computation of $s_j=\mathrm {GRU}([E_{yj};s_{j-1},z,z_{c^{\prime }}])$, where $E_{yj}$ is the word embedding of the target word. Note that through concatenating $z$ and $z_{c^{\prime }}$ with $E_{yj}$ and $s_{j-1}$, $s_j$ could be affected by the context-aware latent variable $z_{c^{\prime }}$ and the semantic latent variable $z$. This allows the model to directly access the event background knowledge from $z_{c^{\prime }}$. In addition, the randomness of $z$ and $z_{c^{\prime }}$ would increase the diversity of model generation.
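As a concrete illustration of the decoder state update $s_j=\mathrm {GRU}([E_{yj};s_{j-1},z,z_{c^{\prime }}])$ described above, the sketch below shows one possible reading of a single decoding step, using the dimensions reported later in the training details (a 300d GRU state and 40d latent variables). The attention context vector $e_j$ and the output model $g(\cdot )$ are reduced to a plain linear projection here, so this is an illustrative simplification rather than the authors' implementation.

import torch
import torch.nn as nn

emb_dim, hid_dim, z_dim, vocab_size = 300, 300, 40, 10000  # assumed sizes for illustration
gru_cell = nn.GRUCell(input_size=emb_dim + hid_dim + 2 * z_dim, hidden_size=hid_dim)
out_proj = nn.Linear(hid_dim, vocab_size)  # stand-in for the attention-based output model g(.)

def decoder_step(E_yj, s_prev, z, z_c_prime):
    # Concatenate the target-word embedding, previous state and both latent variables,
    # then update the decoder state and predict logits for the next token.
    gru_input = torch.cat([E_yj, s_prev, z, z_c_prime], dim=-1)
    s_j = gru_cell(gru_input, s_prev)
    return out_proj(s_j), s_j

# Example with a batch of two partial target sequences
E_yj = torch.randn(2, emb_dim)
s_prev = torch.zeros(2, hid_dim)
z = torch.randn(2, z_dim)
z_c_prime = torch.randn(2, z_dim)
logits, s_j = decoder_step(E_yj, s_prev, z, z_c_prime)

Because z and z_c_prime enter every step of the recurrence, sampling different values for them changes the whole generated target, which is the mechanism the paper relies on for diverse If-Then inferences.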
Attention-based Inferer The attention mechanism has shown a strong ability to capture semantic interactions BIBREF13. Inspired by the co-attention mechanism BIBREF14, we propose an attention-based inferer (ABI) to estimate the mean and standard deviation of a distribution belonging to $p_{\theta }(\cdot )$ or $q_{\phi }(\cdot )$ by capturing semantic interactions of input sequences. Specifically, given two input sequences (e.g., representations of contexts and events) $a=\lbrace a_1,\dots ,a_{l_a}\rbrace $ and $b=\lbrace b_1,\dots ,b_{l_b}\rbrace $ with lengths $l_a$ and $l_b$, we first obtain the attention scores from each side through: where $W_a \in \mathbb {R}^{d\times d_a}$ and $W_b \in \mathbb {R}^{d\times d_b}$ are parameter weights. With these attention scores, the context vectors of both sequences are given by: Then we perform a mean pooling operation on the context vectors of both sequences: To obtain the mean and standard deviation, the pooled context vectors $\bar{c^a}$ and $\bar{c^b}$, which carry the semantic interaction between the two sequences, are concatenated and projected into a latent semantic space through a nonlinear transformation: Finally the mean and standard deviation are generated through a nonlinear transformation over $h_z$: <<</Architecture of CWVAE>>> <<<Optimizing>>> With the incorporation of $z_{c^{\prime }}$, the original log-likelihood could be decomposed as: Then following traditional CVAE, the ELBO of CWVAE is defined as follows: which is the objective function at the finetune stage. In the pretrain stage, as we aim to learn background knowledge through minimizing the distance between $z_c$ and $z_{c^{\prime }}$, in addition to $L^{ELBO}$, a context-aware regularization term is introduced: where the context-aware regularization term is the KL distance between $z_c$ and $z_{c^{\prime }}$. Through minimizing the context-aware regularization term, we aim to pass event context knowledge from $z_c$ to the context-aware latent variable $z_{c^{\prime }}$. <<</Optimizing>>> <<<Training Details>>> To test the performance of CWVAE, we split the Event2Mind and Atomic datasets into training, development and test sets (80%, 10%, 10%) in the same way as BIBREF5 (BIBREF5) and BIBREF4 (BIBREF4), respectively. We initialize the embedding layer from 300d GloVe word embeddings. The neural encoder is chosen to be a biGRU with 300 hidden units. For the ABI module, the sizes of $W_a$ and $W_b$ are set to $100 \times d_a$ and $100 \times d_b$, respectively. The dimensions of $z_c$, $z_{c^{\prime }}$ and $z$ are all set to 40. The neural decoder is set to be a GRU with a 300d hidden state. The regularization coefficient $\lambda $ of the context-aware regularization term is set to 0.1. Models are trained using an Adam optimizer BIBREF15 with a learning rate of 0.001.
For example, as shown in Table TABREF25, the first three sentences describe a context in which Jason was unsatisfied with his job and applied for a new one. Hence, after the event “he got the job” happens, a plausible reaction to the event could be “jason was much happier at his new job”. In total, the auxiliary dataset contains 192,316 $(context, event, target)$ triples. <<</Auxiliary Dataset>>> <<<Baselines>>> We compared our proposed model with the following four baseline methods: RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic. Variational Seq2Seq combines a latent variable with the encoder-decoder structure by converting the last hidden state of the RNN encoder into a Gaussian-distributed latent variable BIBREF8. VRNMT Proposed by BIBREF19 (BIBREF19), VRNMT combines CVAE with an attention-based encoder-decoder framework by introducing a latent variable to model the semantic distribution of targets. CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage. Note that, for each baseline method, we train a distinct model for each inference dimension. <<</Baselines>>> <<<Evaluation Metrics>>> <<<Automatic Evaluation>>> We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of the model regenerating the exact targets, which is particularly suitable for evaluating model performance on the one-to-many problem BIBREF20. Further, we employ the BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-grams to evaluate the diversity of generations BIBREF6. The distinct score is normalized to $[0, 1]$ by dividing by the total number of generated tokens. <<</Automatic Evaluation>>> <<<Human Evaluation>>> Since automatic evaluation of generations is still a challenging task BIBREF22, we also conduct human evaluations of the model performance. Five human experts are employed to evaluate the coherence, diversity and fluency of generated targets. Experts are asked to vote on whether a generation is fluent or coherent for each generated target, and give a 1-5 score for the diversity of generations. For both the Event2Mind and Atomic datasets, 100 events are randomly selected from the test set. For each method, the top 10 generated targets of each base event are used for evaluation. Finally we report three overall averaged scores of coherence, diversity and fluency on both datasets, respectively. <<</Human Evaluation>>> <<</Evaluation Metrics>>> <<<Overall Results>>> We list the perplexity and BLEU score of CWVAE and baseline methods on Event2Mind and Atomic in Table TABREF31 and Table TABREF33, respectively, and show the distinct-1 and distinct-2 scores on Event2Mind and Atomic in Table TABREF32 and Table TABREF34, respectively. We find that: (1) As shown in Table TABREF32 and Table TABREF34, the comparison between RNN-based Seq2Seq and variational-based methods, including Variational Seq2Seq, VRNMT, CWVAE-unpretrained and CWVAE, shows that variational-based methods could increase the diversity of generations. This confirms one of our motivations that variational-based methods could capture the latent semantic distribution within targets and increase the diversity of If-Then reasoning. (2) Comparing CWVAE-unpretrained with other baseline methods shows that, in general, CWVAE improves the accuracy and diversity on both datasets. These results indicate the efficiency of CWVAE in capturing the latent semantic distribution of targets and generating more reasonable inferential results.
(3) Comparison between CWVAE and CWVAE-unpretrained shows that the pretrain stage could enhance the performance of CWVAE in both accuracy and diversity. This is mainly because event knowledge could offer guidance for If-Then reasoning. In the pretrain stage, CWVAE could capture the event background knowledge through the context-aware latent variable, and such knowledge could be adapted to our task through the finetune stage. To further evaluate the effectiveness of our proposed approach, we also conduct human evaluations, the results of which are shown in Table TABREF39 and Table TABREF40. On both datasets, CWVAE-based methods achieve consistently better coherence, diversity and fluency performances. Compared with CWVAE-Unpretrained, the pretrain procedure improves the performance on coherence and fluency. The main reasons are twofold: first, CWVAE has an advantage in capturing the semantic distribution of targets; second, the event background knowledge learned in the pretrain stage is helpful for the If-Then reasoning. <<</Overall Results>>> <<<Case Study>>> Table TABREF41 provides an example of model generations given the base event “PersonX works tirelessly” and the inference dimension “xIntent”. The generations under CWVAE mainly contain four kinds of semantics: (1) be productive, (2) finish his work soon, (3) accomplish goal, (4) earn more money. In contrast, the semantics of generations from the baseline RNN-based Seq2Seq model are relatively limited. Furthermore, the first three kinds of semantics overlap with the three ground truth targets, and the fourth kind is in accordance with daily-life commonsense. Compared to the RNN-based Seq2Seq model, our approach can increase the diversity and rationality of generations while maintaining accuracy. <<</Case Study>>> <<</Experiments>>> <<<Related Work>>> <<<Event-Centered Commonsense Reasoning>>> Understanding events and constructing event-centered commonsense knowledge are crucial to many NLP applications, such as intention recognition BIBREF23 and dialog generation BIBREF24. Recently, a growing number of studies have focused on event-centered commonsense reasoning, which mainly concentrates on two areas: script event prediction and story ending generation/choosing. Script event prediction concerns the temporal relationships between script events BIBREF25, and requires models to choose a correct subsequent triple-organized event among the candidates BIBREF2. Prior work mainly focused on modeling event pairs BIBREF25, event chains BIBREF2 and event graphs BIBREF3 to predict the subsequent event. Story ending generation focuses on generating plausible story endings BIBREF16, which requires models to understand the story context, and keep generated endings logically consistent with it BIBREF26, BIBREF27. The above tasks mainly investigate the logical orders of events, whereas the If-Then reasoning task focuses on inferring the mental state of event participants. <<</Event-Centered Commonsense Reasoning>>> <<<Variational AutoEncoder-Decoder Based Natural Language Generation>>> VAE BIBREF10 has been widely applied in various text generation tasks, such as dialogue and machine translation. In dialogue generation, BIBREF9 (BIBREF9) adapts VAE to the encoder-decoder framework to model the latent semantic distribution of answers, which can increase the diversity of generations.
For the task of machine translation, BIBREF19 (BIBREF19) and BIBREF28 (BIBREF28) employ a latent variable to capture the semantic interaction between the source and target sentence, and regard the latent variable as a supplement to the attention mechanism. In contrast, BIBREF29 (BIBREF29) use a latent variable to model topic distributions in text generation. In this paper, we introduce an additional context-aware latent variable to effectively learn background knowledge and conduct If-Then reasoning under its guidance. <<</Variational AutoEncoder-Decoder Based Natural Language Generation>>> <<</Related Work>>> <<<Conclusion>>> In this paper, we propose a novel context-aware VAE (CWVAE) framework with two training stages for If-Then commonsense reasoning. By introducing an additional context-aware latent variable, CWVAE is able to learn external background knowledge and conduct If-Then reasoning under its guidance. In the pretrain stage, CWVAE learns event background knowledge; then, in the finetune stage, CWVAE adapts such knowledge to each inference dimension. Experimental results demonstrate that CWVAE outperforms baseline methods in both the accuracy and diversity of generations. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related Work, Abstract" ], "type": "disordered_section" }
1909.08824
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder <<<Abstract>>> Understanding event and event-centered commonsense reasoning are crucial for natural language processing (NLP). Given an observed event, it is trivial for human to infer its intents and effects, while this type of If-Then reasoning still remains challenging for NLP systems. To facilitate this, a If-Then commonsense reasoning dataset Atomic is proposed, together with an RNN-based Seq2Seq model to conduct such reasoning. However, two fundamental problems still need to be addressed: first, the intents of an event may be multiple, while the generations of RNN-based Seq2Seq models are always semantically close; second, external knowledge of the event background may be necessary for understanding events and conducting the If-Then reasoning. To address these issues, we propose a novel context-aware variational autoencoder effectively learning event background information to guide the If-Then reasoning. Experimental results show that our approach improves the accuracy and diversity of inferences compared with state-of-the-art baseline methods. <<</Abstract>>> <<<Introduction>>> Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because of understanding events is an important component of NLP. Given a daily-life event, human can easily understand it and reason about its causes, effects, and so on. However, it still remains a challenging task for NLP systems. This is partly due to most of them are trained for task-specific datasets or objectives, which results in models that are adapt at finding task-specific underlying correlation patterns but have limited capability in simple and explainable commonsense reasoning BIBREF4. To facilitate this, BIBREF5 (BIBREF5) build the Event2Mind dataset and BIBREF4 (BIBREF4) present the Atomic dataset, mainly focus on nine If-Then reasoning types to describe causes, effects, intents and participant characteristic about events. Together with these datasets, a simple RNN-based encoder-decoder framework is proposed to conduct the If-Then reasoning. However, there still remains two challenging problems. First, as illustrated in Figure FIGREF1, given an event “PersonX finds a new job”, the plausible feeling of PersonX about that event could be multiple (such as “needy/stressed out” and “relieved/joyful”). Previous work showed that for the one-to-many problem, conventional RNN-based encoder-decoder models tend to generate generic responses, rather than meaningful and specific answers BIBREF6, BIBREF7. Second, as a commonsense reasoning problem, rich background knowledge is necessary for generating reasonable inferences. For example, as shown in Figure FIGREF1, the feeling of PersonX upon the event “PersonX finds a new job” could be multiple. However, after given a context “PersonX was fired”, the plausible inferences would be narrowed down to “needy” or “stressed out”. To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure. 
Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generate diversified inferences BIBREF8, BIBREF9. In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.). Experiments on the Event2Mind and Atomic dataset show that our proposed approach outperforms baseline methods in both the accuracy and diversity of inferences. The code is released at https://github.com/sjcfr/CWVAE. <<</Introduction>>> <<<Background>>> Before specifically describing two dataset —- Event2Mind and Atomic used in this paper as well as the If-Then reasoning task, for clarity, we define the following terminologies: Base event: the prerequisite event in If-Then reasoning, organized as a verb phrase with a predicate and its arguments, such as the event “PersonX finds a new job” shown in Figure FIGREF1. Inference dimension: a particular If-Then reasoning type, e.g., intents, effects of the base event. Details are shown in Table TABREF2 and Table TABREF3. Target: the inferential results. For example, as shown in Figure FIGREF1, given a base event “PersonX finds a new job” and one inference dimension “xReact”, the targets could be “relieved” or “needy”. Notice that each inference dimension can have multiple targets. Event2Mind Dataset contains 25K base events and 300K targets, annotated through crowdsourcing. Event2Mind is organized in a hierarchical form: each base event has three types of inference dimensions, and given a base event, under one of inference dimensions, several targets may simultaneously exist. Table TABREF2 shows the (base event-inference dimension-target) hierarchical structure through an example from Event2Mind. Atomic Dataset Inspired by Event2Mind, the Atomic dataset shares the same hierarchical structure as Event2Mind, while scales up the size of dataset and expands the scope to nine types of inference dimensions. Table TABREF3 shows the (base event-inference dimension-target) hierarchical structure through an example from Atomic. Though Atomic covers the inference dimensions of Event2Mind, the base event collection of Event2Mind is nonidentical to that of Atomic. Problem Definition The If-Then reasoning task could be formally defined as a conditional one-to-many generation problem: given a base event $x$ and one inference dimension $d$, the model is required to generate targets $y=f(x, d)$ as close to the ground truths as possible. Both $x$ and $y$ consist of sequence of words: $x=\lbrace x_1,\dots , x_{m}\rbrace $, and $y=\lbrace y_1,\dots , y_{n}\rbrace $, where $m$ and $n$ denotes the length of $x$ and $y$, respectively. Conditional Variational Autoencoder The variational autoencoder (VAE) defines a generative framework suited for one-to-many generation problem BIBREF10. While conditional variational autoencoder (CVAE) BIBREF11 is an extension of VAE on the conditional generation problem. 
As shown in Figure FIGREF5 (a), CVAE characterizes the conditional one-to-many generation problem using three random variables: event $x$, target $y$ and a latent variable $z$, which is used for modeling the latent distribution of semantic over targets given an event. Hence, under a certain inference dimension, with regard to the latent semantic variable $z$, the conditional generation problem could be expressed as $p(y|x)=\int p(y|x,z)p(z|x)dz$. CVAE models $p(y|x,z)$ and $p(z|x)$ using deep neural networks (parameterized by $\theta $) $p_{\theta }(y|x,z)$ and $p_{\theta }(z|x)$. Then as illustrated in Figure FIGREF5 (b), $y$ could be generated from $x$ and $z$. CVAE is trained to maximize the conditional likelihood $p(y|x)$, which involves an intractable marginalization over the latent variable $z$. Instead, following BIBREF10 (BIBREF10), a practical way is to introduce another deep network (parameterized by $\phi $) $q_{\phi }(z|x,y)$ to approximate the true posterior distribution $p(z|x,y)$ and maximize the evidence lower bound (ELBO) of the log-likelihood function: Therefore, CVAE is composed of three neural networks in general. We refer to $p_{\theta }(z|x)$ as a prior network, $q_{\phi }(z|x,y)$ as a recognition network, and $p_{\theta }(y|x,z)$ as a neural decoder. <<</Background>>> <<<Context-aware Variational Autoencoder>>> Traditional CVAE can model the event-target relation. In other words, given an observed event, CVAE can generate its corresponding targets. While in this paper we model the If-Then reasoning as a [(background), event]-target process. It means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate the reasonable targets. To this end, we propose a context-aware variational autoencoder (CWVAE), with two additional latent variables: a context-acquiring latent variable $z_c$ to directly acquire context information, and a context-aware latent variable $z_{c^{\prime }}$ to learn background knowledge from $z_c$, as shown in Figure FIGREF6 (a). However, the event context information is absent in the Event2Mind and Atomic dataset. To learn from the external event context information, we design the following two-stage training procedure for CWVAE. Pretrain: Learning Event Background Knowledge from Auxiliary Dataset In the pretrain stage, CWVAE is trained on three narrative story corpora with rich event context information. As shown in Figure FIGREF6 (a), context-acquiring latent variable $z_c$ is directly conditioned on the context $c$. Hence, $z_c$ could be employed for acquiring background knowledge from event contexts. Then, we minimize the distance between $z_c$ and the context-aware latent variable $z_{c^{\prime }}$, by which the event background knowledge is transferred from $z_c$ to $z_{c^{\prime }}$. Finetune: Adapt Event Background Knowledge to Each Inference Dimension In the finetune stage, as shown in Figure FIGREF6 (b), CWVAE is trained on the Event2Mind and Atomic dataset without the event context information. Pretrained CWVAE is finetuned to learn the specific inferential knowledge of each inference dimension. After the training procedure, as shown in Figure FIGREF6 (c), samples of $z$ is generated based on $x$ and samples of $z_{c^{\prime }}$, where $z_{c^{\prime }}$ contains rich event background knowledge helpful for If-Then reasoning. 
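Before turning to the architecture, a minimal sketch of the standard CVAE objective that CWVAE builds on may help make the notation concrete. This is an illustrative PyTorch-style sketch, not the authors' implementation; the `prior_net`, `recognition_net`, and `decoder` callables and their output conventions are assumptions for exposition only.

```python
import torch

def cvae_elbo(x_repr, y_tokens, prior_net, recognition_net, decoder):
    """Illustrative CVAE ELBO: E_q[log p(y|x,z)] - KL(q(z|x,y) || p(z|x)).

    Assumes prior_net(x) and recognition_net(x, y) each return the mean and
    log-variance of a diagonal Gaussian, and decoder(y, x, z) returns
    per-token log-probabilities of the target sequence.
    """
    mu_p, logvar_p = prior_net(x_repr)                  # p_theta(z | x)
    mu_q, logvar_q = recognition_net(x_repr, y_tokens)  # q_phi(z | x, y)

    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    z = mu_q + torch.exp(0.5 * logvar_q) * torch.randn_like(mu_q)

    # Reconstruction term: log-likelihood of the target under the decoder
    recon = decoder(y_tokens, x_repr, z).sum()

    # Analytic KL between two diagonal Gaussians, summed over dimensions
    kl = 0.5 * torch.sum(
        logvar_p - logvar_q
        + (torch.exp(logvar_q) + (mu_q - mu_p) ** 2) / torch.exp(logvar_p)
        - 1.0
    )
    return recon - kl  # maximize this lower bound
```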
<<<Architecture of CWVAE>>> As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$ and $q_{\phi }(z|z_{c^{\prime }}, x)$, a prior network for modeling $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\prime }}$ to generate targets. Neural Encoder We employ a bidirectional GRU as neural encoder, which encodes context $c$, event $x$ and target $y$ into distributed representations $h^c=\lbrace h_1^c,\dots ,h_{l_c}^c\rbrace $, $h^x=\lbrace h_1^x,\dots ,h_{l_x}^x\rbrace $ and $h^y=\lbrace h_1^y,\dots ,h_{l_y}^y\rbrace $, where $l_c$, $l_x$ and $l_y$ is the length of $c$, $x$ and $y$, respectively. Recognition Network The recognition network models $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$, $q_{\phi }(z|z_{c^{\prime }}, x)$ based on $h^x$, $h^y$ and $h^c$. Following traditional VAE, the above-mentioned three distributions are assumed to be multivariate Gaussian distribution with a diagonal covariance structure: where $\mu $ denotes the mean of the distribution, $\sigma $ denotes the standard deviation of the distribution, and $I$ denotes the identity matrix. Given $h^x$, $h^y$ and $h^c$, we propose a novel attention-based inferer (ABI) module to estimate the mean and standard deviation of $q_{\phi }(z_{c}|x,c)$, $q_{\phi }(z_{c^{\prime }}|x,y)$ and $q_{\phi }(z|x,y)$: Briefly, through the attention mechanism, ABI can capture the semantic interaction between input sequences, and estimate the parameters of distributions based on it. We will introduce the specific structure of ABI in below. Prior Network Prior Network models $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ based on $h^x$. The distribution of $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ are still assumed to be multivariate Gaussian, whereas the parameters are different: where $\mu ^{^{\prime }}$ denotes the mean of the distribution, $\sigma ^{^{\prime }}$ denotes the standard deviation of the distribution and $I$ denotes the identity matrix. Then the attention-based inferer module is still employed to estimate parameters of distributions: Neural Decoder Given the base event $x$, the semantic latent variable $z$, and the context-aware latent variable $z_{c^{\prime }}$, the neural decoder defines the generation probability of $y$ as following: where $p(y_j|y<j, z, z_{c^{\prime }}, x)=g(y_{j-1}, s_{j-1}, e_j)$, $g(\cdot )$ is an attention-based feed forward model, $e_j=\sum _i \alpha _{ji}h_i^{x}$ is the context vector and $s_{j-1}$ is the hidden state of the decoder. We obtain $g(\cdot )$ and $e_j$ the same way as BIBREF12 (BIBREF12). Whereas our decoder differs from BIBREF12 (BIBREF12) in that our model integrates the context-aware latent variable $z_{c^{\prime }}$ and semantic latent variable $z$ in the computation of $s_j=\mathrm {GRU}([E_{yj};s_{j-1},z,z_{j-1}])$, where $E_{yj}$ is the word embeddings of target words. Note that through concatenating $z$ and $z_{c^{\prime }}$ with $E_{yj}$ and $s_{j-1}$, $s_j$ could be affected by context-aware latent variable $z_{c^{\prime }}$ and semantic latent variable $z$. This allows model to directly access to the event background knowledge from $z_{c^{\prime }}$. In addition, the randomness of $z$ and $z_{c^{\prime }}$ would increase the diversity of model generation. 
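To illustrate how the neural decoder injects both latent variables at every step, here is a hedged sketch of a single decoding step. It follows the concatenation described above and assumes, as the surrounding prose suggests, that the recurrent state is updated from $[E_{y_j}; s_{j-1}; z; z_{c^{\prime }}]$; the attention context vector $e_j$ is omitted for brevity, and the layer sizes are taken from the training details below rather than from released code.

```python
import torch
import torch.nn as nn

class LatentConditionedDecoderStep(nn.Module):
    """One decoding step that concatenates the word embedding, previous
    state, semantic latent z, and context-aware latent z_c' (illustrative;
    the attention context used in the full model is left out)."""

    def __init__(self, emb_dim=300, hidden_dim=300, z_dim=40):
        super().__init__()
        self.cell = nn.GRUCell(emb_dim + hidden_dim + 2 * z_dim, hidden_dim)

    def forward(self, e_yj, s_prev, z, z_cprime):
        # [E_yj ; s_{j-1} ; z ; z_c'] -> new decoder state s_j
        step_input = torch.cat([e_yj, s_prev, z, z_cprime], dim=-1)
        return self.cell(step_input, s_prev)
```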
Attention-based Inferer Attention mechanism has shown strong ability in capturing semantic interactions BIBREF13. Inspired by the co-attention mechanism BIBREF14, we propose an attention-based inferer (ABI) to estimate the mean and standard deviation of a distribution belongs to $p_{\theta }(\cdot )$ or $q_{\phi }(\cdot )$ by capturing semantic interactions of input sequences. Specifically, given two input sequences (e.g., representations of contexts and events) $a=\lbrace a_1,\dots ,a_{l_a}\rbrace $ and $b=\lbrace b_1,\dots ,b_{l_b}\rbrace $ with length $l_a$ and $l_b$, we first obtain the attention scores from each side through: where $W_a \in \mathbb {R}^{d\times d_a}$ and $W_b \in \mathbb {R}^{d\times d_b}$ are parameter weights. With these attention scores, the context vectors of both sequences are given by: Then we perform a mean pooling operation on context vectors of both sequences: To obtain the mean and standard deviation, the pooled context vectors $\bar{c^a}$ and $\bar{c^b}$ which carry semantic interaction between two sequences, are concatenated and projected into a latent semantic space through a nonlinear transformation: Finally the mean and standard deviation are generated through a nonlinear transformation over $h_z$: <<</Architecture of CWVAE>>> <<<Optimizing>>> With the incorporation of $z_{c^{\prime }}$, the original loglikelihood could be decomposed as: Then following traditional CVAE, the ELBO of CWVAE is defined as follows: which is the objective function at the finetune stage. While in the pretrain stage, as we aim to learn background knowledge through minimizing the distance between $z_c$ and $z_{c^{\prime }}$, in addition to $L^{ELBO}$, a context-aware regulation term is introduced: where the context aware regularization term is the KL distance between $z$ and $z_{c^{\prime }}$. Through minimizing the context aware regularization term, we aim to pass event context knowledge from $z_c$ to the context aware latent variable $z_{c^{\prime }}$. <<</Optimizing>>> <<<Training Details>>> To test the performance of CWVAE, we split the Event2Mind and Atomic dataset into training, development and test sets (80%, 10%, 10%) in the same way as BIBREF5 (BIBREF5) and BIBREF4 (BIBREF4), respectively. We initialize the embedding layer from 300d GloVe word embeddings. The neural encoder is chosen to be biGRU with 300 hidden units. For the ABI module, size of $W_a$ and $W_b$ is set to be $100 \times d_a$ and $100 \times d_b$ respectively. The dimension of $z_c$, $z_{c^{\prime }}$ and $z$ is all set as 40. The neural decoder is set to be GRU with 300d hidden state. Regulation coefficient $\lambda $ of context-aware regulation term is set to be 0.1. Models are trained using an Adam optimizer BIBREF15 with a learning rate of 0.001. <<</Training Details>>> <<</Context-aware Variational Autoencoder>>> <<<Experiments>>> <<<Auxiliary Dataset>>> The auxiliary dataset is built upon three human-written story corpora: ROCStories BIBREF16, VIST BIBREF17 and WritingPrompts BIBREF18. ROCStories and VIST are composed of short stories with five sentences. We filter out stories of more than 1,000 words in WritingPrompts, and cut the remaining stories into five-sentence-paragraphs. For each five-sentence-paragraph, we define the first three sentences as contexts of the base event, the fourth sentence as the base event, and the fifth sentence as the inference target. 
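As a small illustration of how the $(context, event, target)$ triples just described can be assembled from a five-sentence paragraph, the following sketch shows the splitting rule before the worked example that follows; the helper name and input format are hypothetical.

```python
def paragraph_to_triple(sentences):
    """Split a five-sentence story paragraph into a (context, event, target)
    triple: sentences 1-3 form the context, sentence 4 the base event, and
    sentence 5 the inference target (illustrative helper)."""
    assert len(sentences) == 5, "expects a five-sentence paragraph"
    context = " ".join(sentences[:3])
    base_event = sentences[3]
    target = sentences[4]
    return context, base_event, target

# Hypothetical usage:
# context, event, target = paragraph_to_triple(story_sentences)
```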
For example, as shown in Table TABREF25, the first three sentences describe a context in which Jason was unsatisfied with his job and applied for a new one. Hence, after the event “he got the job” happens, a plausible reaction to the event could be “jason was much happier at his new job”. In total, the auxiliary dataset contains 192,316 $(context, event, target)$ triples. <<</Auxiliary Dataset>>> <<<Baselines>>> We compare our proposed model with the following four baseline methods: RNN-based Seq2Seq, proposed by BIBREF4 (BIBREF4) for If-Then reasoning on Atomic. Variational Seq2Seq, which combines a latent variable with the encoder-decoder structure by converting the last hidden state of the RNN encoder into a Gaussian-distributed latent variable BIBREF8. VRNMT, proposed by BIBREF19 (BIBREF19), which combines CVAE with an attention-based encoder-decoder framework by introducing a latent variable to model the semantic distribution of targets. CWVAE-Unpretrained, which refers to the CWVAE model without the pretrain stage. Note that, for each baseline method, we train a distinct model for each inference dimension. <<</Baselines>>> <<<Evaluation Metrics>>> <<<Automatic Evaluation>>> We first compare the perplexity of CWVAE with that of the baseline methods. Perplexity measures the probability that a model regenerates the exact targets, which is particularly suitable for evaluating model performance on one-to-many problems BIBREF20. Further, we employ the BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-grams to evaluate the diversity of generations BIBREF6. The distinct score is normalized to $[0, 1]$ by dividing by the total number of generated tokens. <<</Automatic Evaluation>>> <<<Human Evaluation>>> Since automatic evaluation of generations is still a challenging task BIBREF22, we also conduct human evaluations of model performance. Five human experts are employed to evaluate the coherence, diversity and fluency of generated targets. Experts are asked to vote on whether each generated target is fluent and coherent, and to give a 1-5 score for the diversity of generations. For both the Event2Mind and Atomic datasets, 100 events are randomly selected from the test set. For each method, the top 10 generated targets of each base event are used for evaluation. Finally, we report three overall averaged scores of coherence, diversity and fluency on both datasets. <<</Human Evaluation>>> <<</Evaluation Metrics>>> <<<Overall Results>>> We list the perplexity and BLEU scores of CWVAE and the baseline methods on Event2Mind and Atomic in Table TABREF31 and Table TABREF33, respectively, and show the distinct-1 and distinct-2 scores on Event2Mind and Atomic in Table TABREF32 and Table TABREF34, respectively. We find that: (1) As shown in Table TABREF32 and Table TABREF34, the comparison between RNN-based Seq2Seq and the variational methods, including Variational Seq2Seq, VRNMT, CWVAE-Unpretrained and CWVAE, shows that variational methods increase the diversity of generations. This confirms one of our motivations: variational methods can capture the latent semantic distribution within targets and increase the diversity of If-Then reasoning. (2) Comparing CWVAE-Unpretrained with the other baseline methods shows that, in general, CWVAE improves the accuracy and diversity on both datasets. These results indicate the effectiveness of CWVAE in capturing the latent semantic distribution of targets and generating more reasonable inferential results.
(3) The comparison between CWVAE and CWVAE-Unpretrained shows that the pretrain stage enhances the performance of CWVAE in both accuracy and diversity. This is mainly because event background knowledge offers guidance for If-Then reasoning. In the pretrain stage, CWVAE captures event background knowledge through the context-aware latent variable, and such knowledge can be adapted to our task through the finetune stage. To further evaluate the effectiveness of our proposed approach, we also conduct human evaluations, the results of which are shown in Table TABREF39 and Table TABREF40. On both datasets, CWVAE-based methods achieve consistently better coherence, diversity and fluency. Compared with CWVAE-Unpretrained, the pretrain procedure improves performance on coherence and fluency. The main reasons are twofold: first, CWVAE has an advantage in capturing the semantic distribution of targets; second, the event background knowledge learned in the pretrain stage is helpful for If-Then reasoning. <<</Overall Results>>> <<<Case Study>>> Table TABREF41 provides an example of model generations given the base event “PersonX works tirelessly” and the inference dimension “xIntent”. The generations of CWVAE mainly contain four kinds of semantics: (1) be productive, (2) finish his work soon, (3) accomplish goal, (4) earn more money, while the semantics of the generations from the baseline RNN-based Seq2Seq model are relatively limited. Furthermore, the first three kinds of semantics overlap the three ground-truth targets, and the fourth is in accordance with daily-life commonsense. Compared to the RNN-based Seq2Seq model, our approach increases the diversity and rationality of generations while maintaining accuracy. <<</Case Study>>> <<</Experiments>>> <<<Related Work>>> <<<Event-Centered Commonsense Reasoning>>> Understanding events and constructing event-centered commonsense knowledge are crucial to many NLP applications, such as intention recognition BIBREF23 and dialog generation BIBREF24. Recently, a growing number of studies have focused on event-centered commonsense reasoning, which mainly concentrates on two areas: script event prediction and story ending generation/choosing. Script event prediction concerns the temporal relationships between script events BIBREF25, and requires models to choose the correct subsequent triple-organized event among candidates BIBREF2. Prior work mainly focused on modeling event pairs BIBREF25, event chains BIBREF2 and event graphs BIBREF3 to predict the subsequent event. Story ending generation focuses on generating plausible story endings BIBREF16, which requires models to understand the story context and keep generated endings logically consistent with it BIBREF26, BIBREF27. The above tasks mainly investigate the logical order of events, whereas the If-Then reasoning task focuses on inferring the mental states of event participants. <<</Event-Centered Commonsense Reasoning>>> <<<Variational AutoEncoder-Decoder Based Natural Language Generation>>> VAE BIBREF10 has been widely applied to various text generation tasks, such as dialogue and machine translation. In dialogue generation, BIBREF9 (BIBREF9) adapt VAE to the encoder-decoder framework to model the latent semantic distribution of answers, which can increase the diversity of generations.
For the task of machine translation, BIBREF19 (BIBREF19) and BIBREF28 (BIBREF28) employ a latent variable to capture the semantic interaction between the source and target sentences, and regard the latent variable as a supplement to the attention mechanism, while BIBREF29 (BIBREF29) use a latent variable to model topic distributions in text generation. In this paper, we introduce an additional context-aware latent variable to effectively learn background knowledge and conduct If-Then reasoning under its guidance. <<</Variational AutoEncoder-Decoder Based Natural Language Generation>>> <<</Related Work>>> <<<Conclusion>>> In this paper, we propose a novel context-aware VAE (CWVAE) framework with two training stages for If-Then commonsense reasoning. By introducing an additional context-aware latent variable, CWVAE is able to learn external background knowledge and conduct If-Then reasoning under its guidance. In the pretrain stage, CWVAE learns event background knowledge; in the finetune stage, it adapts this knowledge to each inference dimension. Experimental results demonstrate that CWVAE outperforms baseline methods in both the accuracy and diversity of generations. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related Work, Abstract" ], "type": "disordered_section" }
1909.02480
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow <<<Abstract>>> Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens. In contrast, non-autoregressive seq2seq models generate all tokens in one pass, which leads to increased efficiency through parallel processing on hardware such as GPUs. However, directly modeling the joint distribution of all tokens simultaneously is challenging, and even with increasingly complex model structures accuracy lags significantly behind autoregressive models. In this paper, we propose a simple, efficient, and effective model for non-autoregressive sequence generation using latent variable models. Specifically, we turn to generative flow, an elegant technique to model complex distributions using neural networks, and design several layers of flow tailored for modeling the conditional density of sequential latent variables. We evaluate this model on three neural machine translation (NMT) benchmark datasets, achieving comparable performance with state-of-the-art non-autoregressive NMT models and almost constant decoding time w.r.t the sequence length. <<</Abstract>>> <<<Introduction>>> Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\mathbf {y} = \lbrace y_1, \ldots , y_T\rbrace $ given an input sequence $\mathbf {x} = \lbrace x_1, \ldots , x_{T^{\prime }}\rbrace $ using conditional probabilities $P_\theta (\mathbf {y}|\mathbf {x})$ predicted by neural networks (parameterized by $\theta $). Most seq2seq models are autoregressive, meaning that they factorize the joint probability of the output sequence given the input sequence $P_\theta (\mathbf {y}|\mathbf {x})$ into the product of probabilities over the next token in the sequence given the input sequence and previously generated tokens: Each factor, $P_\theta (y_{t} | y_{<t}, \mathbf {x})$, can be implemented by function approximators such as RNNs BIBREF0 and Transformers BIBREF3. This factorization takes the complicated problem of joint estimation over an exponentially large output space of outputs $\mathbf {y}$, and turns it into a sequence of tractable multi-class classification problems predicting $y_t$ given the previous words, allowing for simple maximum log-likelihood training. However, this assumption of left-to-right factorization may be sub-optimal from a modeling perspective BIBREF4, BIBREF5, and generation of outputs must be done through a linear left-to-right pass through the output tokens using beam search, which is not easily parallelizable on hardware such as GPUs. Recently, there has been work on non-autoregressive sequence generation for neural machine translation (NMT; BIBREF6, BIBREF7, BIBREF8) and language modeling BIBREF9. Non-autoregressive models attempt to model the joint distribution $P_\theta (\mathbf {y}|\mathbf {x})$ directly, decoupling the dependencies of decoding history during generation. 
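To make the contrast between the two factorizations explicit before the latent-variable formulation below, here is a small sketch of how the autoregressive and the fully factorized log-likelihoods are accumulated; the `model.token_logprob` interface is a placeholder assumption, not an API from any particular library.

```python
def autoregressive_logprob(model, x, y):
    """log P(y|x) = sum_t log P(y_t | y_<t, x): each factor conditions on the
    previously generated tokens, which forces left-to-right decoding."""
    return sum(model.token_logprob(y[t], y[:t], x) for t in range(len(y)))

def independent_logprob(model, x, y):
    """Naive non-autoregressive factorization: log P(y|x) = sum_t log P(y_t | x).
    All positions are predicted in parallel, with no dependence on y_<t."""
    return sum(model.token_logprob(y[t], None, x) for t in range(len(y)))
```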
A naïve solution is to assume that each token of the target sequence is independent given the input: Unfortunately, the performance of this simple model falls far behind autoregressive models, as seq2seq tasks usually do have strong conditional dependencies between output variables BIBREF6. This problem can be mitigated by introducing a latent variable $\mathbf {z}$ to model these conditional dependencies: where $p_{\theta }(\mathbf {z}|\mathbf {x})$ is the prior distribution over latent $\mathbf {z}$ and $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ is the “generative” distribution (a.k.a decoder). Non-autoregressive generation can be achieved by the following independence assumption in the decoding process: BIBREF6 proposed a $\mathbf {z}$ representing fertility scores specifying the number of output words each input word generates, significantly improving the performance over Eq. (DISPLAY_FORM4). But the performance still falls behind state-of-the-art autoregressive models due to the limited expressiveness of fertility to model the interdependence between words in $\textbf {y}$. In this paper, we propose a simple, effective, and efficient model, FlowSeq, which models expressive prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ using a powerful mathematical framework called generative flow BIBREF10. This framework can elegantly model complex distributions, and has obtained remarkable success in modeling continuous data such as images and speech through efficient density estimation and sampling BIBREF11, BIBREF12, BIBREF13. Based on this, we posit that generative flow also has potential to introduce more meaningful latent variables $\mathbf {z}$ in the non-autoregressive generation in Eq. (DISPLAY_FORM5). FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear. <<</Introduction>>> <<<Background>>> As noted above, incorporating expressive latent variables $\mathbf {z}$ is essential to decouple the dependencies between tokens in the target sequence in non-autoregressive models. However, in order to model all of the complexities of sequence generation to the point that we can read off all of the words in the output in an independent fashion (as in Eq. (DISPLAY_FORM6)), the prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ will necessarily be quite complex. In this section, we describe generative flows BIBREF10, an effective method for arbitrary modeling of complicated distributions, before describing how we apply them to sequence-to-sequence generation in §SECREF3. <<<Flow-based Generative Models>>> Put simply, flow-based generative models work by transforming a simple distribution (e.g. a simple Gaussian) into a complex one (e.g. the complex prior distribution over $\mathbf {z}$ that we want to model) through a chain of invertible transformations. Formally, a set of latent variables $\mathbf {\upsilon } \in \Upsilon $ are introduced with a simple prior distribution $p_{\Upsilon }(\upsilon )$. 
We then define a bijection function $f: \mathcal {Z} \rightarrow \Upsilon $ (with $g = f^{-1}$), whereby we can define a generative process over variables $\mathbf {z}$: An important insight behind flow-based models is that given this bijection function, the change of variable formula defines the model distribution on $\mathbf {z}\in \mathcal {Z}$ by: Here $\frac{\partial f_{\theta }(\mathbf {z})}{\partial \mathbf {z}}$ is the Jacobian matrix of $f_{\theta }$ at $\mathbf {z}$. Eq. (DISPLAY_FORM9) provides a way to calculate the (complex) density of $\mathbf {z}$ by calculating the (simple) density of $\upsilon $ and the Jacobian of the transformation from $\mathbf {z}$ to $\upsilon $. For efficiency purposes, flow-based models generally use certain types of transformations $f_{\theta }$ where both the inverse functions $g_{\theta }$ and the Jacobian determinants are tractable to compute. A stacked sequence of such invertible transformations is also called a (normalizing) flow BIBREF10: where $f = f_1 \circ f_2 \circ \cdots \circ f_K$ is a flow of $K$ transformations (omitting $\theta $s for brevity). <<</Flow-based Generative Models>>> <<<Variational Inference and Training>>> In the context of maximal likelihood estimation (MLE), we wish to minimize the negative log-likelihood of the parameters: where $D=\lbrace (\mathbf {x}^i, \mathbf {y}^i)\rbrace _{i=1}^{N}$ is the set of training data. However, the likelihood $P_{\theta }(\mathbf {y}| \mathbf {x})$ after marginalizing out latent variables $\mathbf {z}$ (LHS in Eq. (DISPLAY_FORM5)) is intractable to compute or differentiate directly. Variational inference BIBREF14 provides a solution by introducing a parametric inference model $q_{\phi }(\mathbf {z}|\mathbf {y}, \mathbf {x})$ (a.k.a posterior) which is then used to approximate this integral by sampling individual examples of $\mathbf {z}$. These models then optimize the evidence lower bound (ELBO), which considers both the “reconstruction error” $\log P_\theta (\mathbf {y}|\mathbf {z},\mathbf {x})$ and KL-divergence between the posterior and the prior: Both inference model $\phi $ and decoder $\theta $ parameters are optimized according to this objective. <<</Variational Inference and Training>>> <<</Background>>> <<<FlowSeq>>> We first overview FlowSeq's architecture (shown in Figure FIGREF13) and training process here before detailing each component in following sections. Similarly to classic seq2seq models, at both training and test time FlowSeq first reads the whole input sequence $\mathbf {x}$ and calculates a vector for each word in the sequence, the source encoding. At training time, FlowSeq's parameters are learned using a variational training paradigm overviewed in §SECREF10. First, we draw samples of latent codes $\mathbf {z}$ from the current posterior $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$. Next, we feed $\mathbf {z}$ together with source encodings into the decoder network and the prior flow to compute the probabilities of $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ and $p_{\theta }(\mathbf {z}|\mathbf {x})$ for optimizing the ELBO (Eq. (DISPLAY_FORM12)). At test time, generation is performed by first sampling a latent code $\mathbf {z}$ from the prior flow by executing the generative process defined in Eq. (DISPLAY_FORM8). In this step, the source encodings produced from the encoder are used as conditional inputs. 
Then the decoder receives both the sampled latent code $\mathbf {z}$ and the source encoder outputs to generate the target sequence $\mathbf {y}$ from $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$. <<<Source Encoder>>> The source encoder encodes the source sequences into hidden representations, which are used in computing attention when generating latent variables in the posterior network and prior network as well as the cross-attention with decoder. Any standard neural sequence model can be used as its encoder, including RNNs BIBREF0 or Transformers BIBREF3. <<</Source Encoder>>> <<<Posterior>>> <<<Generation of Latent Variables.>>> The latent variables $\mathbf {z}$ are represented as a sequence of continuous random vectors $\mathbf {z}=\lbrace \mathbf {z}_1, \ldots , \mathbf {z}_T\rbrace $ with the same length as the target sequence $\mathbf {y}$. Each $\mathbf {z}_t$ is a $d_{\mathrm {z}}$-dimensional vector, where $d_{\mathrm {z}}$ is the dimension of the latent space. The posterior distribution $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$ models each $\mathbf {z}_t$ as a diagonal Gaussian with learned mean and variance: where $\mu _{t}(\cdot )$ and $\sigma _{t}(\cdot )$ are neural networks such as RNNs or Transformers. <<</Generation of Latent Variables.>>> <<<Zero initialization.>>> While we perform standard random initialization for most layers of the network, we initialize the last linear transforms that generate the $\mu $ and $\log \sigma ^2$ values with zeros. This ensures that the posterior distribution as a simple normal distribution, which we found helps train very deep generative flows more stably. <<</Zero initialization.>>> <<<Token Dropout.>>> The motivation of introducing the latent variable $\mathbf {z}$ into the model is to model the uncertainty in the generative process. Thus, it is preferable that $\mathbf {z}$ capture contextual interdependence between tokens in $\mathbf {y}$. However, there is an obvious local optimum where the posterior network generates a latent vector $\mathbf {z}_t$ that only encodes the information about the corresponding target token $y_t$, and the decoder simply generates the “correct” token at each step $t$ with $\mathbf {z}_t$ as input. In this case, FlowSeq reduces to the baseline model in Eq. (DISPLAY_FORM4). To escape this undesired local optimum, we apply token-level dropout to randomly drop an entire token when calculating the posterior, to ensure the model also has to learn how to use contextual information. This technique is similar to the “masked language model” in previous studies BIBREF15, BIBREF16, BIBREF17. <<</Token Dropout.>>> <<</Posterior>>> <<<Decoder>>> As the decoder, we take the latent sequence $\mathbf {z}$ as input, run it through several layers of a neural sequence model such as a Transformer, then directly predict the output tokens in $\mathbf {y}$ individually and independently. Notably, unlike standard seq2seq decoders, we do not perform causal masking to prevent attending to future tokens, making the model fully non-autoregressive. <<</Decoder>>> <<<Flow Architecture for Prior>>> The flow architecture is based on Glow BIBREF11. It consists of a series of steps of flow, combined in a multi-scale architecture (see Figure FIGREF13.) Each step of flow consists three types of elementary flows – actnorm, invertible multi-head linear, and coupling. Note that all three functions are invertible and conducive to calculation of log determinants (details in Appendix SECREF6). 
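Before the three elementary flows are described individually below, it may help to see how one step of flow composes them while accumulating log-determinants. This is a schematic sketch; the `actnorm`, `inv_linear`, and `coupling` modules stand in for the layers defined in the following subsections and are not the released implementation.

```python
import torch.nn as nn

class FlowStep(nn.Module):
    """One step of flow: actnorm -> invertible multi-head linear -> affine
    coupling, each returning its output and the log-determinant of its
    Jacobian (schematic composition)."""

    def __init__(self, actnorm, inv_linear, coupling):
        super().__init__()
        self.layers = nn.ModuleList([actnorm, inv_linear, coupling])

    def forward(self, z, x_enc):
        logdet = 0.0
        for layer in self.layers:
            z, ld = layer(z, x_enc)  # each elementary flow is invertible
            logdet = logdet + ld
        return z, logdet

    def inverse(self, z, x_enc):
        # Run the elementary flows in reverse order with their inverses
        for layer in reversed(self.layers):
            z = layer.inverse(z, x_enc)
        return z
```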
<<<Actnorm.>>> The activation normalization layer (actnorm; BIBREF11) is an alternative for batch normalization BIBREF18, that has mainly been used in the context of image data to alleviate problems in model training. Actnorm performs an affine transformation of the activations using a scale and bias parameter per feature for sequences: Both $\mathbf {z}$ and $\mathbf {z}^{\prime }$ are tensors of shape $[T\times d_{\mathrm {z}}]$ with time dimension $t$ and feature dimension $d_{\mathrm {z}}$. The parameters are initialized such that over each feature $\mathbf {z}_{t}^{\prime }$ has zero mean and unit variance given an initial mini-batch of data. <<</Actnorm.>>> <<<Invertible Multi-head Linear Layers.>>> To incorporate general permutations of variables along the feature dimension to ensure that each dimension can affect every other ones after a sufficient number of steps of flow, BIBREF11 proposed a trainable invertible $1\times 1$ convolution layer for 2D images. It is straightforward to apply similar transformations to sequential data: where $\mathbf {W}$ is the weight matrix of shape $[d_{\mathrm {z}} \times d_{\mathrm {z}}]$. The log-determinant of this transformation is: The cost of computing $\mathrm {det}(\mathbf {W})$ is $O(d_{\mathrm {z}}^3)$. Unfortunately, $d_{\mathrm {z}}$ in Seq2Seq generation is commonly large, e.g. 512, significantly slowing down the model for computing $\mathrm {det}(\mathbf {W})$. To apply this to sequence generation, we propose a multi-head invertible linear layer, which first splits each $d_{\mathrm {z}}$-dimensional feature vector into $h$ heads with dimension $d_h = d_{\mathrm {z}}/h$. Then the linear transformation in (DISPLAY_FORM26) is applied to each head, with $d_h\times d_h$ weight matrix $\mathbf {W}$, significantly reducing the dimension. For splitting of heads, one step of flow contains one linear layer with either row-major or column-major splitting format, and these steps with different linear layers are composed in an alternating pattern. <<</Invertible Multi-head Linear Layers.>>> <<<Affine Coupling Layers.>>> To model interdependence across time steps, we use affine coupling layers BIBREF19: where $\mathrm {s}(\mathbf {z}_a, \mathbf {x})$ and $\mathrm {b}(\mathbf {z}_a, \mathbf {x})$ are outputs of two neural networks with $\mathbf {z}_a$ and $\mathbf {x}$ as input. These are shown in Figure FIGREF21 (c). In experiments, we implement $\mathrm {s}(\cdot )$ and $\mathrm {b}(\cdot )$ with one Transformer decoder layer BIBREF3: multi-head self-attention over $\mathbf {z}_a$, followed by multi-head inter-attention over $\mathbf {x}$, followed by a position-wise feed-forward network. The input $\mathbf {z}_a$ is fed into this layer in one pass, without causal masking. As in BIBREF19, the $\mathrm {split}()$ function splits $\mathbf {z}$ the input tensor into two halves, while the $\mathrm {concat}$ operation performs the corresponding reverse concatenation operation. In our architecture, three types of split functions are used, based on the split dimension and pattern. Figure FIGREF21 (b) illustrates the three splitting types. The first type of split groups $\mathbf {z}$ along the time dimension on alternate indices. In this case, FlowSeq mainly models the interactions between time-steps. The second and third types of splits perform on the feature dimension, with continuous and alternate patterns, respectively. For each type of split, we alternate $\mathbf {z}_a$ and $\mathbf {z}_b$ to increase the flexibility of the split function. 
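A compact sketch of the affine coupling transform just described (split, transform one half conditioned on the other half and on the source encoding, then concatenate) is given below. The `net` computing $\mathrm{s}(\cdot)$ and $\mathrm{b}(\cdot)$ is a placeholder for the Transformer decoder layer used in the paper, and the exponential parameterization of the scale follows the standard RealNVP convention rather than any detail stated in the text.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """z_a is left unchanged; z_b is rescaled and shifted by s(z_a, x) and
    b(z_a, x). The Jacobian is triangular, so log|det J| = sum(s).
    Illustrative sketch only."""

    def __init__(self, net):
        super().__init__()
        self.net = net  # e.g. a Transformer decoder layer over (z_a, x)

    def forward(self, z, x_enc):
        # z: [batch, T, d]; split along the feature dimension (one split type)
        z_a, z_b = z.chunk(2, dim=-1)
        s, b = self.net(z_a, x_enc)               # outputs shaped like z_b
        z_b = z_b * torch.exp(s) + b
        logdet = s.flatten(start_dim=1).sum(dim=-1)
        return torch.cat([z_a, z_b], dim=-1), logdet

    def inverse(self, z, x_enc):
        z_a, z_b = z.chunk(2, dim=-1)
        s, b = self.net(z_a, x_enc)
        z_b = (z_b - b) * torch.exp(-s)
        return torch.cat([z_a, z_b], dim=-1)
```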
Different types of affine coupling layers alternate in the flow, similar to the linear layers. <<</Affine Coupling Layers.>>> <<<Multi-scale Architecture.>>> We follow BIBREF19 in implementing a multi-scale architecture using the squeezing operation on the feature dimension, which has been demonstrated helpful for training deep flows. Formally, each scale is a combination of several steps of the flow (see Figure FIGREF21 (a)). After each scale, the model drops half of the dimensions with the third type of split in Figure FIGREF21 (b) to reduce computational and memory cost, outputting the tensor with shape $[T \times \frac{d}{2}]$. Then the squeezing operation transforms the $T \times \frac{d}{2}$ tensor into an $\frac{T}{2} \times d$ one as the input of the next scale. We pad each sentence with EOS tokens to ensure $T$ is divisible by 2. The right component of Figure FIGREF13 illustrates the multi-scale architecture. <<</Multi-scale Architecture.>>> <<</Flow Architecture for Prior>>> <<<Predicting Target Sequence Length>>> In autoregressive seq2seq models, it is natural to determine the length of the sequence dynamically by simply predicting a special EOS token. However, for FlowSeq to predict the entire sequence in parallel, it needs to know its length in advance to generate the latent sequence $\mathbf {z}$. Instead of predicting the absolute length of the target sequence, we predict the length difference between source and target sequences using a classifier with a range of $[-20, 20]$. Numbers in this range are predicted by max-pooling the source encodings into a single vector, running this through a linear layer, and taking a softmax. This classifier is learned jointly with the rest of the model. <<</Predicting Target Sequence Length>>> <<<Decoding Process>>> At inference time, the model needs to identify the sequence with the highest conditional probability by marginalizing over all possible latent variables (see Eq. (DISPLAY_FORM5)), which is intractable in practice. We propose three approximating decoding algorithms to reduce the search space. <<<Argmax Decoding.>>> Following BIBREF6, one simple and effective method is to select the best sequence by choosing the highest-probability latent sequence $\mathbf {z}$: where identifying $\mathbf {y}^*$ only requires independently maximizing the local probability for each output position (see Eq. DISPLAY_FORM6). <<</Argmax Decoding.>>> <<<Noisy Parallel Decoding (NPD).>>> A more accurate approximation of decoding, proposed in BIBREF6, is to draw samples from the latent space and compute the best output for each latent sequence. Then, a pre-trained autoregressive model is adopted to rank these sequences. In FlowSeq, different candidates can be generated by sampling different target lengths or different samples from the prior, and both of the strategies can be batched via masks during decoding. In our experiments, we first select the top $l$ length candidates from the length predictor in §SECREF29. Then, for each length candidate we use $r$ random samples from the prior network to generate output sequences, yielding a total of $l\times r$ candidates. <<</Noisy Parallel Decoding (NPD).>>> <<<Importance Weighted Decoding (IWD)>>> The third approximating method is based on the lower bound of importance weighted estimation BIBREF20. Similarly to NPD, IWD first draws samples from the latent space and computes the best output for each latent sequence. 
Then, IWD ranks these candidate sequences with $K$ importance samples: IWD does not rely on a separate pre-trained model, though it significantly slows down the decoding speed. The detailed comparison of these three decoding methods is provided in §SECREF45. <<</Importance Weighted Decoding (IWD)>>> <<</Decoding Process>>> <<<Discussion>>> Different from the architecture proposed in BIBREF9, the architecture of FlowSeq is not using any autoregressive flow BIBREF21, BIBREF22, yielding a truly non-autoregressive model with efficient generation. Note that the FlowSeq remains non-autoregressive even if we use an RNN in the architecture because RNN is only used to encode a complete sequence of codes and all the input tokens can be fed into the RNN in parallel. This makes it possible to use highly-optimized implementations of RNNs such as those provided by cuDNN. Thus while RNNs do experience some drop in speed, it is less extreme than that experienced when using autoregressive models. <<</Discussion>>> <<</FlowSeq>>> <<<Experiments>>> <<<Experimental Setups>>> <<<Translation Datasets>>> We evaluate FlowSeq on three machine translation benchmark datasets: WMT2014 DE-EN (around 4.5M sentence pairs), WMT2016 RO-EN (around 610K sentence pairs) and a smaller dataset IWSLT2014 DE-EN (around 150K sentence pairs). We use scripts from fairseq BIBREF23 to preprocess WMT2014 and IWSLT2014, where the preprocessing steps follow BIBREF3 for WMT2014. We use the data provided in BIBREF7 for WMT2016. For both WMT datasets, the source and target languages share the same set of BPE embeddings while for IWSLT2014 we use separate embeddings. During training, we filter out sentences longer than 80 for WMT dataset and 60 for IWSLT, respectively. <<</Translation Datasets>>> <<<Modules and Hyperparameters>>> We implement the encoder, decoder and posterior networks with standard (unmasked) Transformer layers BIBREF3. For WMT datasets, the encoder consists of 6 layers, and the decoder and posterior are composed of 4 layers, and 8 attention heads. and for IWSLT, the encoder has 5 layers, and decoder and posterior have 3 layers, and 4 attention heads. The prior flow consists of 3 scales with the number of steps $[48, 48, 16]$ from bottom to top. To dissect the impact of model dimension on translation quality and speed, we perform experiments on two versions of FlowSeq with $d_{model}/d_{hidden} = 256/512$ (base) and $d_{model}/d_{hidden} = 512/1024$ (large). More model details are provided in Appendix SECREF7. <<</Modules and Hyperparameters>>> <<<Optimization>>> Parameter optimization is performed with the Adam optimizer BIBREF24 with $\beta =(0.9, 0.999)$ and $\epsilon =1e-6$. Each mini-batch consist of 2048 sentences. The learning rate is initialized to $5e-4$, and exponentially decays with rate $0.999995$. The gradient clipping cutoff is $1.0$. For all the FlowSeq models, we apply $0.1$ label smoothing and averaged the 5 best checkpoints to create the final model. At the beginning of training, the posterior network is randomly initialized, producing noisy supervision to the prior. To mitigate this issue, we first set the weight of the $\mathrm {KL}$ term in ELBO to zero for 30,000 updates to train the encoder, decoder and posterior networks. Then the $\mathrm {KL}$ weight linearly increases to one for another 10,000 updates, which we found essential to accelerate training and achieve stable performance. 
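The KL warm-up schedule described above (zero weight for the first 30,000 updates, then a linear ramp to one over the next 10,000) is easy to express as a small helper. The sketch below merely restates that schedule and is not taken from the released code.

```python
def kl_weight(step, zero_steps=30000, ramp_steps=10000):
    """KL-term weight for the ELBO: 0 during warm-up, then a linear
    increase to 1.0, after which it stays at 1.0."""
    if step < zero_steps:
        return 0.0
    return min(1.0, (step - zero_steps) / float(ramp_steps))

# e.g. loss = -(reconstruction - kl_weight(step) * kl_term)
```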
<<</Optimization>>> <<<Knowledge Distillation>>> Previous work on non-autoregressive generation BIBREF6, BIBREF8 has used translations produced by a pre-trained autoregressive NMT model as the training data, noting that this can significantly improve the performance. We analyze the impact of distillation in § SECREF45. <<</Knowledge Distillation>>> <<</Experimental Setups>>> <<<Main Results>>> We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8. Table TABREF39 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines with purely non-autoregressive decoding methods that generate output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block are results using knowledge distillation. Without using knowledge distillation, FlowSeq base model achieves significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR. It demonstrates the effectiveness of FlowSeq on modeling the complex interdependence in target languages. Towards the effect of knowledge distillation, we can mainly obtain two observations: i) Similar to the findings in previous work, knowledge distillation still benefits the translation quality of FlowSeq. ii) Compared to previous models, the benefit of knowledge distillation on FlowSeq is less significant, yielding less than 3 BLEU improvement on WMT2014 DE-EN corpus, and even no improvement on WMT2016 RO-EN corpus. The reason might be that FlowSeq does not rely much on knowledge distillation to alleviate the multi-modality problem. Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind auto-regressive Transformer on model data distributions. Comparing with CMLM BIBREF8 with 10 iterations of refinement, which is a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq has been left to future work. <<</Main Results>>> <<<Analysis on Decoding Speed>>> In this section, we compare the decoding speed (measured in average time in seconds required to decode one sentence) of FlowSeq at test time with that of the autoregressive Transformer model. We use the test set of WMT14 EN-DE for evaluation and all experiments are conducted on a single NVIDIA TITAN X GPU. <<<How does batch size affect the decoding speed?>>> First, we investigate how different decoding batch size can affect the decoding speed. We vary the decoding batch size within $\lbrace 1, 4, 8, 32, 64, 128\rbrace $. Figure. 
FIGREF44 shows that for both FlowSeq and Transformer decoding is faster when using a larger batch size. However, FlowSeq has much larger gains in the decoding speed w.r.t. the increase in batch size, gaining a speed up of 594% of base model and 403% of large model when using a batch size of 128. We hypothesize that this is because the operations in FlowSeq are more friendly to batching while the Transformer model with beam search at test time is less efficient in benefiting from batching. <<</How does batch size affect the decoding speed?>>> <<<How does sentence length affect the decoding speed?>>> Next, we examine if sentence length is a major factor affecting the decoding speed. We bucket the test data by the target sentence length. From Fig. FIGREF44, we can see that as the sentence length increases, FlowSeq achieves almost constant decoding time while Transformer has a linearly increasing decoding time. The relative decoding speed up of FlowSeq versus Transformer linearly increases as the sequence length increases. The potential of decoding long sequences with constant time is an attractive property of FlowSeq. <<</How does sentence length affect the decoding speed?>>> <<</Analysis on Decoding Speed>>> <<<Analysis of Rescoring Candidates>>> In Fig. FIGREF49, we analyze how different sampling hyperparameters affect the performance of rescoring. First, we observe that the number of samples $r$ for each length is the most important factor. The performance is always improved with a larger sample size. Second, a larger number of length candidates does not necessarily increase the rescoring performance. Third, we find that a larger sampling temperature (0.3 - 0.5) can increase the diversity of translations and leads to better rescoring BLEU. However, the latent samples become noisy when a large temperature (1.0) is used. <<</Analysis of Rescoring Candidates>>> <<<Analysis of Translation Diversity>>> Following BIBREF28, we analyze the output diversity of FlowSeq. BIBREF28 proposed pairwise-BLEU and BLEU computed in a leave-one-out manner to calibrate the diversity and quality of translation hypotheses. A lower pairwise-BLEU score implies a more diverse hypothesis set. And a higher BLEU score implies a better translation quality. We experiment on a subset of test set of WMT14-ENDE with ten references each sentence BIBREF29. In Fig. FIGREF52, we compare FlowSeq with other multi-hypothesis generation methods (ten hypotheses each sentence) to analyze how well the generation outputs of FlowSeq are in terms of diversity and quality. The right corner area of the figure indicates the ideal generations: high diversity and high quality. While FlowSeq still lags behind the autoregressive generations, by increasing the sampling temperature it provides a way of generating more diverse outputs while keeping the translation quality almost unchanged. More analysis of translation outputs and detailed results are provided in the Appendix SECREF9 and SECREF10. <<</Analysis of Translation Diversity>>> <<</Experiments>>> <<<Conclusion>>> We propose FlowSeq, an efficient and effective model for non-autoregressive sequence generation by using generative flows. One potential direction for future work is to leverage iterative refinement techniques such as masked language models to further improve translation quality. Another exciting direction is to, theoretically and empirically, investigate the latent space in FlowSeq, hence providing deep insights of the model, even enhancing controllable text generation. 
<<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Abstract" ], "type": "disordered_section" }
1910.02754
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> On Leveraging the Visual Modality for Neural Machine Translation <<<Abstract>>> Leveraging the visual modality effectively for Neural Machine Translation (NMT) remains an open problem in computational linguistics. Recently, Caglayan et al. posit that the observed gains are limited mainly due to the very simple, short, repetitive sentences of the Multi30k dataset (the only multimodal MT dataset available at the time), which renders the source text sufficient for context. In this work, we further investigate this hypothesis on a new large scale multimodal Machine Translation (MMT) dataset, How2, which has 1.57 times longer mean sentence length than Multi30k and no repetition. We propose and evaluate three novel fusion techniques, each of which is designed to ensure the utilization of visual context at different stages of the Sequence-to-Sequence transduction pipeline, even under full linguistic context. However, we still obtain only marginal gains under full linguistic context and posit that visual embeddings extracted from deep vision models (ResNet for Multi30k, ResNext for How2) do not lend themselves to increasing the discriminativeness between the vocabulary elements at token level prediction in NMT. We demonstrate this qualitatively by analyzing attention distribution and quantitatively through Principal Component Analysis, arriving at the conclusion that it is the quality of the visual embeddings rather than the length of sentences, which need to be improved in existing MMT datasets. <<</Abstract>>> <<<Introduction>>> A number of works have explored integrating the visual modality for Neural Machine Translation (NMT) models, though, there has been relatively modest gains or no gains at all by incorporating the visual modality in the translation pipeline BIBREF0. In particular, BIBREF1 leverage multi-task learning, BIBREF2 use visual adaptive training, while BIBREF3, BIBREF4, BIBREF5 use a number of fusion techniques to incorporate features obtained from the visual modality. Regarding the seemingly low utility of visual modality in machine translation, BIBREF6 hypothesize that the highly relevant visual properties are often not represented by linguistic models because they are too obvious to be explicitly mentioned in text (e.g., birds have wings, violins are brown). Similarly, BIBREF7 argue that perceptual information is already sufficiently encoded in textual cues. However, recently BIBREF0 have demonstrated that neural models are capable of leveraging the visual modality for translations, and posit that it is the nature of the Multi30k dataset (the only multimodal machine translation dataset at the time) which is inhibiting gains from the visual modality to emerge, due to the presence of short, simple and repetitive sentences, which renders the source text as sufficient context for translation. In this work, we further investigate this hypothesis on a large-scale multimodal machine translation (MMT) dataset, named How2 BIBREF2, which has 1.57 times longer sentences, in terms of the mean sentence length, when compared to Multi30k . 
To this end, we restrict ourselves to the Sequence-to-Sequence (Seq2Seq) framework and propose three simple but novel fusion techniques to ensure the utilization of visual context during different stages (Input Context Encoding, Attention and Supervision) of the Sequence-to-Sequence transduction pipeline. We then evaluate and analyze the results for further insights, with the goal of testing the utility of visual modality for NMT under full source-side linguistic context. <<</Introduction>>> <<<Proposed Fusion Techniques>>> In this section, we describe three additions to the Seq2Seq model to ensure that the visual context is utilized at different stages, namely when computing context during each step of the decoder, during attention as well as when computing the supervision signal in the Sequence-to-Sequence pipeline. This is done to encourage the Seq2Seq NMT model to make use of the visual features under full linguistic context. In each case, we assume that the visual features are fine-tuned using a visual encoder, which is trained jointly alongside the Seq2Seq model. <<<Step-Wise Decoder Fusion>>> Our first proposed technique is the step-wise decoder fusion of visual features during every prediction step i.e. we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5. <<</Step-Wise Decoder Fusion>>> <<<Multimodal Attention Modulation>>> Similar to general attention BIBREF8, wherein a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\overline{h_{s}}$; we consider a variant wherein the visual encoding $v_{t}$ is used to calculate an attention distribution $a_{tv}(s)$ over the source encodings as well. Then, the true attention distribution $a_{t}(s)$ is computed as an interpolation between the visual and text based attention scores. The score function is a content based scoring mechanism as usual. This formulation differs from BIBREF3 in that we use both the natural language as well as the visual modality to compute attention over the source sentence, rather than having attention over images. Since attention is computed over the same source embeddings (arising from a single encoder) using two different modalities, our approach also differs from BIBREF4, which focuses on combining the attention scores of multiple source encoders. <<</Multimodal Attention Modulation>>> <<<Visual-Semantic (VS) Regularizer>>> In terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction. However, to our knowledge, visual-semantic supervision hasn't been much explored for multimodal translation in terms of loss functions. Our proposed technique is the inclusion of visual-semantic supervision to the machine translation model. Recently, BIBREF9 proposed an optimal transport based loss function which computes the distance between the word embeddings of the predicted sentence and the target sentence and uses it as a regularizer $L_{\text{ot}}^{\text{tgt}}$. The purpose of this term is to provide the model with sequence level supervision. 
We leverage this idea by including a Cosine distance term, $L_{\text{cosine}}^{\text{visual}}$, between the visual encoding (which is at the sentence level) and the target/predicted sentence embeddings (computed as the average of the target/predicted word embeddings). The purpose of this distance term is to provide sequence level supervision by aligning the visual and text embeddings. In practice, as in BIBREF9, we introduce a hyperparameter in the loss function: where $\gamma $ is a hyper-parameter balancing the effect of loss components (a separate hyperparameter than in Section 2.2). <<</Visual-Semantic (VS) Regularizer>>> <<</Proposed Fusion Techniques>>> <<<Results and Analysis>>> Throughout our experiments, we use the 300 hours subset of How2 dataset BIBREF10, which contains 300 hours of videos, sentence-level time alignments to the ground-truth English subtitles, and Portuguese translations of English subtitles. The How2 dataset has 2048 dimensional pre-trained ResNeXt embeddings BIBREF11 available for each of the video clips aligned to the sentences. Further, our baseline model is the canonical Seq2Seq model BIBREF12 consisting of bidirectional LSTM as encoder and decoder, general attention BIBREF8 and length normalization BIBREF13. In all cases, we use the embedding size of 300 and the hidden size of 512. Whenever the visual modality is used, we encode each of the visual features to 300 dimensional vectors through an encoder (consisting of a Linear layer followed by Batch Normalization and ReLU non-linearity) which is also trained end-to-end with the Seq2Seq model. Further, to integrate sequence level supervision as in BIBREF9, we utilize the Geomloss library , which provides a batched implementation of the Sinkhorn algorithm for the Optimal Transport computation. For all the translation experiments, we preprocess the data by lowercasing and removing the punctuations BIBREF2, and construct vocabulary at word level. Adam optimizer with a learning rate of 0.001 and a learning rate decay of 0.5 is used to throughout to train our models. <<<Experimental Results>>> The performances of the models are summarized in Table TABREF9, along with the gains in BLEU points. From Table TABREF9, we can make a few observations: The visual modality leads to modest gains in BLEU scores. The proposed VS regularizer leads to slightly higher gain when compared to Decoder-Fusion and Attention modulation techniques for the En-Pt language pair. Further, the gains from incorporating the visual modality are less for Multimodal Attention and VS Regularization in the case of the reversed language pair of Pt-En (Table TABREF10), even though the visual modality is common to both the languages. This can possibly be attributed to the How2 dataset creation process wherein first the videos were aligned with English sentences and then the Portuguese translations were created, implying a reduction in correspondence with the visual modality due to errors introduced in the translation process. <<</Experimental Results>>> <<<Discussion>>> To analyze the reasons for modest gains, despite incorporating multiple techniques to effectively leverage the visual modality for machine translation, we inspect the dataset as well as the proposed mechanisms. <<<PCA of Visual Features>>> We first investigate and compare the visual feature quality of the How2 dataset with respect to that of the Multi30k dataset . 
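The comparison in this subsection boils down to an explained-variance computation, sketched here before the analysis itself is described next. The snippet assumes a matrix of sentence-level visual features (2048-dimensional for both datasets) and reports the cumulative variance captured by the top principal components; it is an illustration of the analysis, not the authors' script.

```python
import numpy as np

def cumulative_explained_variance(features, top_k=(10, 20, 50, 100)):
    """features: [num_sentences, feature_dim] visual embeddings.
    Returns the fraction of total variance explained by the top-k
    principal components, for each k (illustrative PCA analysis)."""
    centered = features - features.mean(axis=0, keepdims=True)
    # Singular values of the centered data give the principal variances
    singular_values = np.linalg.svd(centered, compute_uv=False)
    variances = singular_values ** 2
    ratios = np.cumsum(variances) / variances.sum()
    return {k: float(ratios[k - 1]) for k in top_k}
```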
To analyze the discriminativeness of the visual features for both of these datasets, we leverage an analysis mechanism used in BIBREF14 in the context of analyzing word embedding discriminativeness. We analyze the variance of the visual features corresponding to each sentence in the training set. Since the visual features semantically represent the sentence as well, we can analyze how well the features are able to discriminate between the sentences, and consequently between the individual words, as a measure of their utility for NMT. Figure FIGREF14 (Top) shows the variance explained by the top 100 principal components, obtained by applying PCA on the How2 and Multi30k training set visual features. The original feature dimension is 2048 in both cases. It is clear from Figure FIGREF14 that most of the energy of the visual feature space resides in a low-dimensional subspace BIBREF14. In other words, there exist a few directions in the embedding space which disproportionately explain the variance. These "common" directions affect all of the embeddings in the same way, rendering them less discriminative. Figure FIGREF14 also shows the cumulative variance explained by the top 10, 20, 50 and 100 principal components, respectively. It is clear that the visual features of the How2 dataset are much more dominated by the "common" dimensions than those of the Multi30k dataset. Further, this analysis is still at the sentence level, i.e. the visual features are much less discriminative among individual sentences, further aggravating the problem at the token level. This suggests that the existing visual features aren't sufficiently discriminative to expect benefits from the visual modality in NMT, since they won't provide discriminativeness among the vocabulary elements at the token level during prediction. Further, this also indicates that under a subword vocabulary such as BPE BIBREF15 or Sentence-Piece BIBREF16, this problem will only be aggravated. <<</PCA of Visual Features>>> <<<Comparison of Attention Components>>> In this section, we analyze the visual and text-based attention mechanisms. We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the use of modulation. Thus, in practice, we find that a small weight ($\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figures FIGREF18 & FIGREF19 show the comparison of visual and text-based attention for two sentences, one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention hasn't learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativeness during prediction. We find this to be consistent across sentences of different lengths. <<</Comparison of Attention Components>>> <<</Discussion>>> <<</Results and Analysis>>> <<<Conclusions and Future Work>>> To conclude, we investigated the utility of the visual modality for NMT, under full linguistic context, on a new large-scale MMT dataset named How2.
Our results on the How2 dataset confirm the general consensus that the visual modality does not lead to any significant gains for NMT, however, unlike BIBREF0 we attribute the relatively modest gains to the limited discriminativeness offered by the existing visual features, rather than the length of the sentences in the dataset. We validate this hypothesis quantitatively through a PCA based analysis of the visual features as well as qualitatively by analyzing attention components. We hope that our work would lead to more useful techniques and better visual features for MMT. An immediate future direction to explore would be to construct more discriminative features for utilizing the visual modality in NMT. <<</Conclusions and Future Work>>> <<</Title>>>
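To make the visual-semantic regularizer proposed in the paper above concrete, here is a minimal sketch of the extra loss term. The base translation loss, the optimal-transport term of BIBREF9, and all function and variable names are placeholders; the actual training code is not given in the text, so this is only an illustration under those assumptions.

```python
import numpy as np

def cosine_distance(u, v, eps=1e-8):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

def vs_regularized_loss(base_loss, visual_enc, pred_word_embs, gamma=0.1):
    """Add the sentence-level visual-semantic term to the translation loss.

    base_loss      : cross-entropy (optionally already including the OT regularizer)
    visual_enc     : sentence-level visual encoding, shape (d,)
    pred_word_embs : embeddings of the predicted (or target) words, shape (T, d)
    gamma          : weight balancing the regularizer against the base loss
    """
    sent_emb = pred_word_embs.mean(axis=0)            # average word embedding as sentence embedding
    l_cosine = cosine_distance(visual_enc, sent_emb)  # L_cosine^visual
    return base_loss + gamma * l_cosine

# toy usage
rng = np.random.default_rng(0)
print(vs_regularized_loss(2.3, rng.normal(size=300), rng.normal(size=(12, 300))))
```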
{ "references": [ "Introduction, Conclusions and Future Work" ], "type": "disordered_section" }
1910.02754
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> On Leveraging the Visual Modality for Neural Machine Translation <<<Abstract>>> Leveraging the visual modality effectively for Neural Machine Translation (NMT) remains an open problem in computational linguistics. Recently, Caglayan et al. posit that the observed gains are limited mainly due to the very simple, short, repetitive sentences of the Multi30k dataset (the only multimodal MT dataset available at the time), which renders the source text sufficient for context. In this work, we further investigate this hypothesis on a new large scale multimodal Machine Translation (MMT) dataset, How2, which has 1.57 times longer mean sentence length than Multi30k and no repetition. We propose and evaluate three novel fusion techniques, each of which is designed to ensure the utilization of visual context at different stages of the Sequence-to-Sequence transduction pipeline, even under full linguistic context. However, we still obtain only marginal gains under full linguistic context and posit that visual embeddings extracted from deep vision models (ResNet for Multi30k, ResNext for How2) do not lend themselves to increasing the discriminativeness between the vocabulary elements at token level prediction in NMT. We demonstrate this qualitatively by analyzing attention distribution and quantitatively through Principal Component Analysis, arriving at the conclusion that it is the quality of the visual embeddings rather than the length of sentences, which need to be improved in existing MMT datasets. <<</Abstract>>> <<<Introduction>>> A number of works have explored integrating the visual modality for Neural Machine Translation (NMT) models, though, there has been relatively modest gains or no gains at all by incorporating the visual modality in the translation pipeline BIBREF0. In particular, BIBREF1 leverage multi-task learning, BIBREF2 use visual adaptive training, while BIBREF3, BIBREF4, BIBREF5 use a number of fusion techniques to incorporate features obtained from the visual modality. Regarding the seemingly low utility of visual modality in machine translation, BIBREF6 hypothesize that the highly relevant visual properties are often not represented by linguistic models because they are too obvious to be explicitly mentioned in text (e.g., birds have wings, violins are brown). Similarly, BIBREF7 argue that perceptual information is already sufficiently encoded in textual cues. However, recently BIBREF0 have demonstrated that neural models are capable of leveraging the visual modality for translations, and posit that it is the nature of the Multi30k dataset (the only multimodal machine translation dataset at the time) which is inhibiting gains from the visual modality to emerge, due to the presence of short, simple and repetitive sentences, which renders the source text as sufficient context for translation. In this work, we further investigate this hypothesis on a large-scale multimodal machine translation (MMT) dataset, named How2 BIBREF2, which has 1.57 times longer sentences, in terms of the mean sentence length, when compared to Multi30k . 
To this end, we restrict ourselves to the Sequence-to-Sequence (Seq2Seq) framework and propose three simple but novel fusion techniques to ensure the utilization of visual context during different stages (Input Context Encoding, Attention and Supervision) of the Sequence-to-Sequence transduction pipeline. We then evaluate and analyze the results for further insights, with the goal of testing the utility of visual modality for NMT under full source-side linguistic context. <<</Introduction>>> <<<Proposed Fusion Techniques>>> In this section, we describe three additions to the Seq2Seq model to ensure that the visual context is utilized at different stages, namely when computing context during each step of the decoder, during attention as well as when computing the supervision signal in the Sequence-to-Sequence pipeline. This is done to encourage the Seq2Seq NMT model to make use of the visual features under full linguistic context. In each case, we assume that the visual features are fine-tuned using a visual encoder, which is trained jointly alongside the Seq2Seq model. <<<Step-Wise Decoder Fusion>>> Our first proposed technique is the step-wise decoder fusion of visual features during every prediction step i.e. we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5. <<</Step-Wise Decoder Fusion>>> <<<Multimodal Attention Modulation>>> Similar to general attention BIBREF8, wherein a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\overline{h_{s}}$; we consider a variant wherein the visual encoding $v_{t}$ is used to calculate an attention distribution $a_{tv}(s)$ over the source encodings as well. Then, the true attention distribution $a_{t}(s)$ is computed as an interpolation between the visual and text based attention scores. The score function is a content based scoring mechanism as usual. This formulation differs from BIBREF3 in that we use both the natural language as well as the visual modality to compute attention over the source sentence, rather than having attention over images. Since attention is computed over the same source embeddings (arising from a single encoder) using two different modalities, our approach also differs from BIBREF4, which focuses on combining the attention scores of multiple source encoders. <<</Multimodal Attention Modulation>>> <<<Visual-Semantic (VS) Regularizer>>> In terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction. However, to our knowledge, visual-semantic supervision hasn't been much explored for multimodal translation in terms of loss functions. Our proposed technique is the inclusion of visual-semantic supervision to the machine translation model. Recently, BIBREF9 proposed an optimal transport based loss function which computes the distance between the word embeddings of the predicted sentence and the target sentence and uses it as a regularizer $L_{\text{ot}}^{\text{tgt}}$. The purpose of this term is to provide the model with sequence level supervision. 
We leverage this idea by including a Cosine distance term, $L_{\text{cosine}}^{\text{visual}}$, between the visual encoding (which is at the sentence level) and the target/predicted sentence embeddings (computed as the average of the target/predicted word embeddings). The purpose of this distance term is to provide sequence level supervision by aligning the visual and text embeddings. In practice, as in BIBREF9, we introduce a hyperparameter in the loss function: where $\gamma $ is a hyperparameter balancing the effect of the loss components (a hyperparameter separate from the one in Section 2.2). <<</Visual-Semantic (VS) Regularizer>>> <<</Proposed Fusion Techniques>>> <<<Results and Analysis>>> Throughout our experiments, we use the 300-hour subset of the How2 dataset BIBREF10, which contains 300 hours of videos, sentence-level time alignments to the ground-truth English subtitles, and Portuguese translations of the English subtitles. The How2 dataset has 2048-dimensional pre-trained ResNeXt embeddings BIBREF11 available for each of the video clips aligned to the sentences. Further, our baseline model is the canonical Seq2Seq model BIBREF12 consisting of bidirectional LSTM as encoder and decoder, general attention BIBREF8 and length normalization BIBREF13. In all cases, we use an embedding size of 300 and a hidden size of 512. Whenever the visual modality is used, we encode each of the visual features into a 300-dimensional vector through an encoder (consisting of a Linear layer followed by Batch Normalization and ReLU non-linearity) which is also trained end-to-end with the Seq2Seq model. Further, to integrate sequence level supervision as in BIBREF9, we utilize the Geomloss library, which provides a batched implementation of the Sinkhorn algorithm for the Optimal Transport computation. For all the translation experiments, we preprocess the data by lowercasing and removing punctuation BIBREF2, and construct the vocabulary at the word level. The Adam optimizer with a learning rate of 0.001 and a learning rate decay of 0.5 is used throughout to train our models. <<<Experimental Results>>> The performances of the models are summarized in Table TABREF9, along with the gains in BLEU points. From Table TABREF9, we can make a few observations: The visual modality leads to modest gains in BLEU scores. The proposed VS regularizer leads to a slightly higher gain than the Decoder-Fusion and Attention Modulation techniques for the En-Pt language pair. Further, the gains from incorporating the visual modality are smaller for Multimodal Attention and VS Regularization in the case of the reversed language pair of Pt-En (Table TABREF10), even though the visual modality is common to both languages. This can possibly be attributed to the How2 dataset creation process, wherein the videos were first aligned with English sentences and the Portuguese translations were created afterwards, implying a reduction in correspondence with the visual modality due to errors introduced in the translation process. <<</Experimental Results>>> <<<Discussion>>> To analyze the reasons for the modest gains, despite incorporating multiple techniques to effectively leverage the visual modality for machine translation, we inspect the dataset as well as the proposed mechanisms. <<<PCA of Visual Features>>> We first investigate and compare the visual feature quality of the How2 dataset with respect to that of the Multi30k dataset.
To analyze the discriminativeness of the visual features for both of these datasets, we leverage an analysis mechanism used in BIBREF14 in the context of analyzing word embedding discriminativeness. We analyze the variance of the visual features corresponding to each sentence in the training set. Since the visual features semantically represent the sentence as well, we can analyze how well the features are able to discriminate between the sentences, and consequently between the individual words, as a measure of their utility for NMT. Figure FIGREF14 (Top) shows the variance explained by the top 100 principal components, obtained by applying PCA on the How2 and Multi30k training set visual features. The original feature dimension is 2048 in both cases. It is clear from Figure FIGREF14 that most of the energy of the visual feature space resides in a low-dimensional subspace BIBREF14. In other words, there exist a few directions in the embedding space which disproportionately explain the variance. These "common" directions affect all of the embeddings in the same way, rendering them less discriminative. Figure FIGREF14 also shows the cumulative variance explained by the top 10, 20, 50 and 100 principal components, respectively. It is clear that the visual features of the How2 dataset are much more dominated by the "common" dimensions than those of the Multi30k dataset. Further, this analysis is still at the sentence level, i.e. the visual features are much less discriminative among individual sentences, further aggravating the problem at the token level. This suggests that the existing visual features aren't sufficiently discriminative to expect benefits from the visual modality in NMT, since they won't provide discriminativeness among the vocabulary elements at the token level during prediction. Further, this also indicates that under a subword vocabulary such as BPE BIBREF15 or Sentence-Piece BIBREF16, this problem will only be aggravated. <<</PCA of Visual Features>>> <<<Comparison of Attention Components>>> In this section, we analyze the visual and text-based attention mechanisms. We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the use of modulation. Thus, in practice, we find that a small weight ($\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figures FIGREF18 & FIGREF19 show the comparison of visual and text-based attention for two sentences, one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention hasn't learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativeness during prediction. We find this to be consistent across sentences of different lengths. <<</Comparison of Attention Components>>> <<</Discussion>>> <<</Results and Analysis>>> <<<Conclusions and Future Work>>> To conclude, we investigated the utility of the visual modality for NMT, under full linguistic context, on a new large-scale MMT dataset named How2.
Our results on the How2 dataset confirm the general consensus that the visual modality does not lead to any significant gains for NMT, however, unlike BIBREF0 we attribute the relatively modest gains to the limited discriminativeness offered by the existing visual features, rather than the length of the sentences in the dataset. We validate this hypothesis quantitatively through a PCA based analysis of the visual features as well as qualitatively by analyzing attention components. We hope that our work would lead to more useful techniques and better visual features for MMT. An immediate future direction to explore would be to construct more discriminative features for utilizing the visual modality in NMT. <<</Conclusions and Future Work>>> <<</Title>>>
{ "references": [ "Introduction, Conclusions and Future Work" ], "type": "disordered_section" }
2004.02393
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games <<<Abstract>>> We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs. We propose a cooperative game approach to deal with this problem, in which how the evidence passages are selected and how the selected passages are connected are handled by two models that cooperate to select the most confident chains from a large set of candidates (from distant supervision). For evaluation, we created benchmarks based on two multi-hop QA datasets, HotpotQA and MedHop; and hand-labeled reasoning chains for the latter. The experimental results demonstrate the effectiveness of our proposed approach. <<</Abstract>>> <<<Introduction>>> NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators. Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7. Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models. Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available. We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. 
Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least one entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore, we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy. We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. Experimental results demonstrate the significant advantage of our proposed approach. <<</Introduction>>> <<<Task Definition>>> Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \rightarrow e_{1,2} \rightarrow p_2 \rightarrow e_{2,3} \rightarrow \cdots \rightarrow e_{n-1,n} \rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities. Our Task Given a QA pair $(q,a)$ and all its candidate passages $\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\mathcal {P}$ as inputs. Related Work Although there has been recent interest in predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, these works all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising over distantly supervised signals. From this perspective, the most relevant studies in the NLP field include BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery. <<</Task Definition>>> <<<Method>>> The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their order (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information.
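As a concrete illustration of the candidate chain set $\mathcal {C}$ defined above, the following sketch enumerates 2-hop candidates from passages that share an entity, with the tail passage required to contain the answer. The data structures, helper names, and the toy example are assumptions for illustration and are not the authors' preprocessing code.

```python
def candidate_2hop_chains(passages, answer):
    """Enumerate 2-hop candidate chains p_h -> e -> p_t.

    passages : list of dicts {"id": str, "text": str, "entities": set[str]}
    answer   : answer string; the tail passage must contain it
    """
    tails = [p for p in passages if answer.lower() in p["text"].lower()]
    chains = []
    for p_t in tails:
        for p_h in passages:
            if p_h["id"] == p_t["id"]:
                continue
            for e in p_h["entities"] & p_t["entities"]:  # linking entity shared by both passages
                chains.append((p_h["id"], e, p_t["id"]))
    return chains

# toy usage
passages = [
    {"id": "P1", "text": "The band was formed in Springfield in 1990.", "entities": {"Springfield"}},
    {"id": "P2", "text": "Springfield is the county seat of Greene County.", "entities": {"Springfield", "Greene County"}},
    {"id": "P3", "text": "An unrelated passage.", "entities": set()},
]
print(candidate_2hop_chains(passages, "Greene County"))  # [('P1', 'Springfield', 'P2')]
```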
Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker. <<<Passage Ranking Model>>> The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\mathcal {P} = \lbrace p_1, p_2 ... p_K\rbrace $ from a pool of candidates, and outputs a chain of selected passages. <<<Passage Scoring>>> For each step of the chain, the Ranker estimates a distribution over the selection of each passage. To this end, we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ for each passage $p_i \in P$ of length $M_i$. Then we use the MatchLSTM model BIBREF20 to get the matching score between $\mathbf {Q}$ and each $\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). We denote $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. <<</Passage Scoring>>> <<<Conditional Selection>>> To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and the previous state representation $\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\tau }$ according to the predicted selection probability. The first step starts with the original question $\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of the query encoding and the selected passage encoding $\tilde{\mathbf {m}}^t_{p_{\tau }}$ back to the query space, and the new query $\mathbf {Q}^{t+1}$ is used to select the next passage. <<</Conditional Selection>>> <<<Reward via Distant Supervision>>> We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distantly supervised chain in $\mathcal {C}$. The model receives an immediate reward at each step of selection. In this paper we only consider chains consisting of $\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\mathcal {C}$ in the form of $p_h\rightarrow e \rightarrow p_t$. Each candidate chain is such that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\mathcal {P}_{T}/\mathcal {P}_{H}$ denote the set of all tail/head passages from $\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections: For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in the candidate chain set $\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$.
We further require that for each path in $\mathcal {C}$, the head passage contains an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\mathcal {C}$ that starts with $p_h$ and ends with $p_t$: <<</Reward via Distant Supervision>>> <<</Passage Ranking Model>>> <<<Cooperative Reasoner>>> To alleviate the noise in the distant supervision signal $\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and is thus more likely to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking the 2-hop case as an example, we train the Ranker and Reasoner alternately as a cooperative game: Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. The Reasoner is trained with the cross-entropy loss: Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for the Ranker at the $2^{\textrm {nd}}$ step is defined as: The extension to 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, and likewise for $p_t$. <<</Cooperative Reasoner>>> <<</Method>>> <<<Experiments>>> <<<Settings>>> <<<Datasets>>> We evaluate our path selection model on HotpotQA bridge-type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers. For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on the development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances. For MedHop, there is no evidence annotated. Therefore, we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in the form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question. During training we select chains based on the full passage set $\mathcal {P}$; at inference time we extract the chains from the candidate set $\mathcal {C}$ (see Section SECREF2). <<</Datasets>>> <<<Baselines and Evaluation Metric>>> We compare our model with (1) a random baseline, which randomly selects a candidate chain from the distant supervision chain set $\mathcal {C}$; and (2) a distantly supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric.
As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still outputs the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embeddings for HotpotQA, and BIBREF25 for MedHop. <<</Baselines and Evaluation Metric>>> <<</Settings>>> <<<Results>>> <<<HotpotQA>>> We first evaluate on the 2-hop HotpotQA task. Our best-performing model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of tail candidates is smaller ($\sim $2 per question). Table TABREF21 shows the results. First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from the distant supervision labels. By introducing the additional inductive bias of passage order, the conditional selection model further improves by a large margin. Finally, our cooperative game gives the best performance, showing that a trained Reasoner is able to ignore entity links that are irrelevant to the reasoning chain. Table TABREF22 demonstrates the effect of the selection direction, together with the methods' recall on head passages and tail passages. The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage order; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting the tail first performs better. The cooperative game mainly improves the head selection. <<</HotpotQA>>> <<<MedHop>>> Results in Table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\mathcal {C}$ introduces too much noise, so the Distant Supervised Ranker improves by only 3%; second, the dependent model leads to no improvement because $\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement. <<</MedHop>>> <<</Results>>> <<</Experiments>>> <<<Conclusions>>> In this paper, we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts a cooperative game approach in which a ranker and a reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach. <<</Conclusions>>> <<</Title>>>
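To make the distant-supervision rewards and the cooperative bonus described in the paper above concrete, here is a minimal illustrative sketch for the 2-hop case. The exact reward equations are not reproduced in this flattened context, so the 0/1 values, the consistency check, and all names below are assumptions rather than the authors' definitions.

```python
def ranker_rewards_2hop(selected_head, selected_tail, head_set, tail_set):
    """Distant-supervision rewards r_h, r_t for the Ranker's two selections.

    A selection is rewarded if it appears as the corresponding part
    (head or tail) of some distantly supervised chain in C.
    """
    r_h = 1.0 if selected_head in head_set else 0.0
    r_t = 1.0 if selected_tail in tail_set else 0.0
    return r_h, r_t

def cooperative_bonus(reasoner_top1_entity, gold_linking_entities):
    """Extra reward passed from the Reasoner to the Ranker at the 2nd step
    when the predicted linking entity is consistent with the chosen chain."""
    return 1.0 if reasoner_top1_entity in gold_linking_entities else 0.0

# toy usage
r_h, r_t = ranker_rewards_2hop("P7", "P2", head_set={"P7", "P9"}, tail_set={"P2"})
bonus = cooperative_bonus("Springfield", {"Springfield"})
print(r_h, r_t, r_h + r_t + bonus)  # 1.0 1.0 3.0
```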
{ "references": [ "Conclusions, Method" ], "type": "disordered_section" }
2004.02393
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games <<<Abstract>>> We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs. We propose a cooperative game approach to deal with this problem, in which how the evidence passages are selected and how the selected passages are connected are handled by two models that cooperate to select the most confident chains from a large set of candidates (from distant supervision). For evaluation, we created benchmarks based on two multi-hop QA datasets, HotpotQA and MedHop; and hand-labeled reasoning chains for the latter. The experimental results demonstrate the effectiveness of our proposed approach. <<</Abstract>>> <<<Introduction>>> NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators. Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7. Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models. Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available. We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. 
Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least one entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore, we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy. We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. Experimental results demonstrate the significant advantage of our proposed approach. <<</Introduction>>> <<<Task Definition>>> Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \rightarrow e_{1,2} \rightarrow p_2 \rightarrow e_{2,3} \rightarrow \cdots \rightarrow e_{n-1,n} \rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities. Our Task Given a QA pair $(q,a)$ and all its candidate passages $\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\mathcal {P}$ as inputs. Related Work Although there has been recent interest in predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, these works all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising over distantly supervised signals. From this perspective, the most relevant studies in the NLP field include BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery. <<</Task Definition>>> <<<Method>>> The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their order (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information.
Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker. <<<Passage Ranking Model>>> The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\mathcal {P} = \lbrace p_1, p_2 ... p_K\rbrace $ from a pool of candidates, and outputs a chain of selected passages. <<<Passage Scoring>>> For each step of the chain, the Ranker estimates a distribution over the selection of each passage. To this end, we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ for each passage $p_i \in P$ of length $M_i$. Then we use the MatchLSTM model BIBREF20 to get the matching score between $\mathbf {Q}$ and each $\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). We denote $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. <<</Passage Scoring>>> <<<Conditional Selection>>> To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and the previous state representation $\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\tau }$ according to the predicted selection probability. The first step starts with the original question $\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of the query encoding and the selected passage encoding $\tilde{\mathbf {m}}^t_{p_{\tau }}$ back to the query space, and the new query $\mathbf {Q}^{t+1}$ is used to select the next passage. <<</Conditional Selection>>> <<<Reward via Distant Supervision>>> We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distantly supervised chain in $\mathcal {C}$. The model receives an immediate reward at each step of selection. In this paper we only consider chains consisting of $\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\mathcal {C}$ in the form of $p_h\rightarrow e \rightarrow p_t$. Each candidate chain is such that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\mathcal {P}_{T}/\mathcal {P}_{H}$ denote the set of all tail/head passages from $\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections: For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in the candidate chain set $\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$.
We further require that for each path in $\mathcal {C}$, the head passage contains an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\mathcal {C}$ that starts with $p_h$ and ends with $p_t$: <<</Reward via Distant Supervision>>> <<</Passage Ranking Model>>> <<<Cooperative Reasoner>>> To alleviate the noise in the distant supervision signal $\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and is thus more likely to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking the 2-hop case as an example, we train the Ranker and Reasoner alternately as a cooperative game: Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. The Reasoner is trained with the cross-entropy loss: Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for the Ranker at the $2^{\textrm {nd}}$ step is defined as: The extension to 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, and likewise for $p_t$. <<</Cooperative Reasoner>>> <<</Method>>> <<<Experiments>>> <<<Settings>>> <<<Datasets>>> We evaluate our path selection model on HotpotQA bridge-type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers. For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on the development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances. For MedHop, there is no evidence annotated. Therefore, we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in the form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question. During training we select chains based on the full passage set $\mathcal {P}$; at inference time we extract the chains from the candidate set $\mathcal {C}$ (see Section SECREF2). <<</Datasets>>> <<<Baselines and Evaluation Metric>>> We compare our model with (1) a random baseline, which randomly selects a candidate chain from the distant supervision chain set $\mathcal {C}$; and (2) a distantly supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric.
As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still outputs the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embeddings for HotpotQA, and BIBREF25 for MedHop. <<</Baselines and Evaluation Metric>>> <<</Settings>>> <<<Results>>> <<<HotpotQA>>> We first evaluate on the 2-hop HotpotQA task. Our best-performing model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of tail candidates is smaller ($\sim $2 per question). Table TABREF21 shows the results. First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from the distant supervision labels. By introducing the additional inductive bias of passage order, the conditional selection model further improves by a large margin. Finally, our cooperative game gives the best performance, showing that a trained Reasoner is able to ignore entity links that are irrelevant to the reasoning chain. Table TABREF22 demonstrates the effect of the selection direction, together with the methods' recall on head passages and tail passages. The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage order; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting the tail first performs better. The cooperative game mainly improves the head selection. <<</HotpotQA>>> <<<MedHop>>> Results in Table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\mathcal {C}$ introduces too much noise, so the Distant Supervised Ranker improves by only 3%; second, the dependent model leads to no improvement because $\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement. <<</MedHop>>> <<</Results>>> <<</Experiments>>> <<<Conclusions>>> In this paper, we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts a cooperative game approach in which a ranker and a reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Introduction, Task Definition" ], "type": "disordered_section" }
2004.02393
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games <<<Abstract>>> We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs. We propose a cooperative game approach to deal with this problem, in which how the evidence passages are selected and how the selected passages are connected are handled by two models that cooperate to select the most confident chains from a large set of candidates (from distant supervision). For evaluation, we created benchmarks based on two multi-hop QA datasets, HotpotQA and MedHop; and hand-labeled reasoning chains for the latter. The experimental results demonstrate the effectiveness of our proposed approach. <<</Abstract>>> <<<Introduction>>> NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators. Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7. Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models. Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available. We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. 
Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least one entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore, we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy. We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. Experimental results demonstrate the significant advantage of our proposed approach. <<</Introduction>>> <<<Task Definition>>> Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \rightarrow e_{1,2} \rightarrow p_2 \rightarrow e_{2,3} \rightarrow \cdots \rightarrow e_{n-1,n} \rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities. Our Task Given a QA pair $(q,a)$ and all its candidate passages $\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\mathcal {P}$ as inputs. Related Work Although there has been recent interest in predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, these works all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising over distantly supervised signals. From this perspective, the most relevant studies in the NLP field include BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery. <<</Task Definition>>> <<<Method>>> The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their order (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information.
Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker. <<<Passage Ranking Model>>> The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\mathcal {P} = \lbrace p_1, p_2 ... p_K\rbrace $ from a pool of candidates, and outputs a chain of selected passages. <<<Passage Scoring>>> For each step of the chain, the Ranker estimates a distribution of the selection of each passage. To this end we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ for each passage $p_i \in P$ of length $M_i$. Then we use the MatchLSTM model BIBREF20 to get the matching score between $\mathbf {Q}$ and each $\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). We denote $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. <<</Passage Scoring>>> <<<Conditional Selection>>> To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and previous states representation $\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\tau }$ according to the predicted selection probability. The first step starts with the original question $\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of query encoding and selected passage encoding $\tilde{\mathbf {m}}^t_{p_{\tau }}$ back to the query space, and the new query $\mathbf {Q}^{t+1}$ is used to select the next passage. <<</Conditional Selection>>> <<<Reward via Distant Supervision>>> We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distant supervised chain in $\mathcal {C}$. The model receives immediate reward at each step of selection. In this paper we only consider chains consist of $\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\mathcal {C}$ in the form of $p_h\rightarrow e \rightarrow p_t$. Each candidate chain satisfies that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\mathcal {P}_{T}/\mathcal {P}_{H}$ denote the set of all tail/head passages from $\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections: For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in candidate chain set $\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$. 
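As a toy illustration of the step-wise conditional selection described above, the NumPy sketch below samples a chain of passages and projects the concatenated query and passage encodings back to the query space. The MatchLSTM scorer is replaced here by a simple dot-product score, and the random encodings, the tanh nonlinearity, and all names are stand-ins rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 8, 5                                  # hidden size, number of candidate passages
passage_enc = rng.normal(size=(K, d))        # stand-ins for the BiLSTM passage encodings
W_proj = rng.normal(size=(2 * d, d)) * 0.1   # feed-forward projection back to query space

def selection_probs(query, passages):
    # Stand-in for MatchLSTM(H_i, Q): dot-product scores followed by a softmax.
    logits = passages @ query
    p = np.exp(logits - logits.max())
    return p / p.sum()

query = rng.normal(size=d)                   # Q^0: encoding of the original question
chain, chosen = [], set()
for t in range(3):                           # chains of at most 3 passages
    probs = selection_probs(query, passage_enc)
    probs[list(chosen)] = 0.0                # do not revisit already selected passages
    probs = probs / probs.sum()
    pick = int(rng.choice(K, p=probs))       # sample p_tau^t from P^t(p_i | Q^{t-1})
    chain.append(pick)
    chosen.add(pick)
    # New query Q^{t+1}: project [query; selected passage encoding] back to query space
    # (the tanh nonlinearity is an assumption).
    query = np.tanh(np.concatenate([query, passage_enc[pick]]) @ W_proj)
print("sampled chain of passage indices:", chain)
```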
We further restrict that each path in $\mathcal {C}$ must have the head passage containing an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\mathcal {C}$ that starts with $p_h$ and ends with $p_t$: <<</Reward via Distant Supervision>>> <<</Passage Ranking Model>>> <<<Cooperative Reasoner>>> To alleviate the noise in the distant supervision signal $\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game: Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. The Reasoner is trained with the cross-entropy loss: Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for Ranker at the $2^{\textrm {nd}}$ step is defined as: The extension to 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, so does $p_t$. <<</Cooperative Reasoner>>> <<</Method>>> <<<Experiments>>> <<<Settings>>> <<<Datasets>>> We evaluate our path selection model on HotpotQA bridge type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers. For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances. For MedHop, there is no evidence annotated. Therefore we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question. During training we select chains based on the full passage set $\mathcal {P}$; at inference time we extract the chains from the candidate set $\mathcal {C}$ (see Section SECREF2). <<</Datasets>>> <<<Baselines and Evaluation Metric>>> We compare our model with (1) random baseline, which randomly selects a candidate chain from the distant supervision chain set $\mathcal {C}$; and (2) distant supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric. 
As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still outputs the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embeddings for HotpotQA, and BIBREF25 for MedHop. <<</Baselines and Evaluation Metric>>> <<</Settings>>> <<<Results>>> <<<HotpotQA>>> We first evaluate on the 2-hop HotpotQA task. Our best-performing model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of tail candidates is smaller ($\sim $2 per question). Table TABREF21 shows the results. First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from distant supervision labels. By introducing the additional inductive bias of passage order, the conditional selection model improves further by a large margin. Finally, our cooperative game gives the best performance, showing that a trained Reasoner is able to ignore entity links that are irrelevant to the reasoning chain. Table TABREF22 demonstrates the effect of the selection direction, together with the methods' recall on head passages and tail passages. The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage orders; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting the tail first performs better. The cooperative game mainly improves the head selection. <<</HotpotQA>>> <<<MedHop>>> Results in Table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\mathcal {C}$ introduces too much noise, so the Distant Supervised Ranker improves by only 3%; second, the dependent model leads to no improvement because $\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement. <<</MedHop>>> <<</Results>>> <<</Experiments>>> <<<Conclusions>>> In this paper we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts a cooperative game approach where a ranker and a reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Method, Introduction" ], "type": "disordered_section" }
2004.01694
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> A Set of Recommendations for Assessing Human-Machine Parity in Language Translation <<<Abstract>>> The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations. We reassess Hassan et al.'s 2018 investigation into Chinese to English news translation, showing that the finding of human-machine parity was owed to weaknesses in the evaluation design - which is currently considered best practice in the field. We show that the professional human translations contained significantly fewer errors, and that perceived quality in human evaluation depends on the choice of raters, the availability of linguistic context, and the creation of reference translations. Our results call for revisiting current best practices to assess strong machine translation systems in general and human-machine parity in particular, for which we offer a set of recommendations based on our empirical findings. <<</Abstract>>> <<<Introduction>>> Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation. This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5. Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation. Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis. 
<<</Introduction>>> <<<Background>>> We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators. <<<Human Evaluation of Machine Translation>>> The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans. Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively. As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B. This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12. <<</Human Evaluation of Machine Translation>>> <<<Assessing Human–Machine Parity>>> BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation. In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations. <<<Choice of Raters>>> The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. 
BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations. <<<Evaluation Protocol>>> We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts. We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context. Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country. The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i. e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\alpha =0.05$). <<</Evaluation Protocol>>> <<<Results>>> Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\kappa =0.13$ for non-experts versus $\kappa =0.254$ for experts). It is worth noticing that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e. g., by using trial number as an additional predictor BIBREF24. 
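The sketch below shows how pairwise relative-ranking judgments can be aggregated into per-system scores with the open-source trueskill Python package; it is only an illustration of the general TrueSkill idea. The WMT-adapted variant cited above (BIBREF22) additionally runs 1,000 iterations over the recorded rankings and clusters systems by significance, which this toy example does not reproduce; the draw probability and the toy judgments are made up.

```python
# Aggregating relative rankings into TrueSkill-style scores (pip install trueskill).
import trueskill

env = trueskill.TrueSkill(draw_probability=0.3)   # allow ties; value is illustrative
ratings = {"H_A": env.create_rating(),
           "MT_1": env.create_rating(),
           "MT_2": env.create_rating()}

# Each judgment: (preferred system, dispreferred system, tie?)
judgments = [("H_A", "MT_1", False), ("H_A", "MT_2", False),
             ("MT_1", "MT_2", False), ("MT_1", "H_A", True)]

for winner, loser, drawn in judgments:
    ratings[winner], ratings[loser] = env.rate_1vs1(
        ratings[winner], ratings[loser], drawn=drawn)

for system, r in sorted(ratings.items(), key=lambda kv: kv[1].mu, reverse=True):
    print(f"{system}: mu={r.mu:.2f} sigma={r.sigma:.2f}")
```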
<<</Results>>> <<</Choice of Raters>>> <<<Linguistic Context>>> MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents. <<<Discussion>>> Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation. Document-level evaluation exposes errors to raters which are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's BIBREF3 NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34. <<</Discussion>>> <<</Linguistic Context>>> <<<Reference Translations>>> The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality. We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. 
Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6). <<<Quality>>> Because the translations are created by humans, a number of factors could lead to compromises in quality: If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news. If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology. Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator. In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33. In this section, we examine the effect of the quality of underlying translations on the conclusions that can be drawn with regards to human–machine parity. We first do an analysis on (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, having 4 professional translators per condition, evaluate the translations for adequacy and fluency on both the sentence and document level. The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency. To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. 
The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32. From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬). Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking. However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories. <<</Quality>>> <<<Directionality>>> Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. 
This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English. According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33. We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on. Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input). We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, thus its simpler nature getting reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar amount of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$). Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text. 
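A minimal sketch of a type-token ratio computation with bootstrap resampling is given below; the exact tokenisation and resampling unit used in the study are not stated in this excerpt, so sentence-level resampling with replacement is an assumption, and the toy sentences are illustrative only.

```python
import random

def bootstrap_ttr(sentences, n_boot=1000, seed=1):
    """Type-token ratio with a sentence-level bootstrap (resampling unit is an assumption)."""
    random.seed(seed)
    ttrs = []
    for _ in range(n_boot):
        sample = random.choices(sentences, k=len(sentences))   # resample with replacement
        tokens = [tok for sent in sample for tok in sent.split()]
        ttrs.append(len(set(tokens)) / len(tokens))
    ttrs.sort()
    mean = sum(ttrs) / n_boot
    lo, hi = ttrs[int(0.025 * n_boot)], ttrs[int(0.975 * n_boot)]
    return mean, (lo, hi)

original = ["the committee announced a new policy on trade tariffs",
            "officials debated the proposal for several hours"]
translationese = ["the committee announced a new policy on trade",
                  "the committee debated the policy for several hours"]
for name, sents in [("original", original), ("translationese", translationese)]:
    mean, (lo, hi) = bootstrap_ttr(sents)
    print(f"{name}: TTR={mean:.3f} 95% CI=[{lo:.3f}, {hi:.3f}]")
```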
<<</Directionality>>> <<</Reference Translations>>> <<</Assessing Human–Machine Parity>>> <<<Translations>>> We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article: [labelwidth=1cm, leftmargin=1.25cm] The professional human translations in the dataset of BIBREF3.[1] Professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35. The machine translations produced by BIBREF3's BIBREF3 best system (Combo-6),[1] for which the authors found parity with H$_A$. The machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's BIBREF3 dataset.[1] Statistical significance is denoted by * ($p\le .05$), ** ($p\le .01$), and *** ($p\le .001$) throughout this article, unless otherwise stated. <<</Translations>>> <<</Background>>> <<<Choice of Raters>>> Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type" BIBREF8. Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it. <<</Choice of Raters>>> <<<Linguistic Context>>> Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence. While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16. <<</Linguistic Context>>> <<<Reference Translations>>> Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality. 
<<</Reference Translations>>> <<<Recommendations>>> Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. <<<(R1) Choose professional translators as raters.>>> In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs. <<</(R1) Choose professional translators as raters.>>> <<<(R2) Evaluate documents, not sentences.>>> When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4). <<</(R2) Evaluate documents, not sentences.>>> <<<(R3) Evaluate fluency in addition to adequacy.>>> Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality. <<</(R3) Evaluate fluency in addition to adequacy.>>> <<<(R4) Do not heavily edit reference translations for fluency.>>> In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30). <<</(R4) Do not heavily edit reference translations for fluency.>>> <<<(R5) Use original source texts.>>> Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT. Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. 
In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation. We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity. <<</(R5) Use original source texts.>>> <<</Recommendations>>> <<<Conclusion>>> We compared professional human Chinese to English translations to the output of a strong MT system. In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors. Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves. Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. 
We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Reference Translations" ], "type": "disordered_section" }
2003.00576
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> StructSum: Incorporating Latent and Explicit Sentence Dependencies for Single Document Summarization <<<Abstract>>> Traditional preneural approaches to single document summarization relied on modeling the intermediate structure of a document before generating the summary. In contrast, the current state of the art neural summarization models do not preserve any intermediate structure, resorting to encoding the document as a sequence of tokens. The goal of this work is two-fold: to improve the quality of generated summaries and to learn interpretable document representations for summarization. To this end, we propose incorporating latent and explicit sentence dependencies into single-document summarization models. We use structure-aware encoders to induce latent sentence relations, and inject explicit coreferring mention graph across sentences to incorporate explicit structure. On the CNN/DM dataset, our model outperforms standard baselines and provides intermediate latent structures for analysis. We present an extensive analysis of our summaries and show that modeling document structure reduces copying long sequences and incorporates richer content from the source document while maintaining comparable summary lengths and an increased degree of abstraction. <<</Abstract>>> <<<Introduction>>> Traditional approaches to abstractive summarization have relied on interpretable structured representations such as graph based sentence centrality BIBREF0, AMR parses BIBREF1, discourse based compression and anaphora constraints BIBREF2. On the other hand, state of the art neural approaches to single document summarization encode the document as a sequence of tokens and compose them into a document representation BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Albeit being effective, these systems learn to rely significantly on layout bias associated with the source document BIBREF8 and do not lend themselves easily to interpretation via intermediate structures. Recent work provides evidence that structured representation of text leads to better document representations BIBREF9, BIBREF10. However, structured representations are under-explored in the neural summarization literature. Motivated by this, we propose a structure-aware end-to-end model (§SECREF2) for summarization. Our proposed model, StructSum, augments the existing pointer-generator network BIBREF3 with two novel components: (1) a latent-structure attention module that adapts structured representations BIBREF11, BIBREF12 for the summarization task, and (2) an explicit-structure attention module, that incorporates a coreference graph. The components together model sentence level dependencies in a document generating rich structured representations. The motivation of this work is to provide a framework to induce rich interpretable latent structures and inject external document structures that can be introduced into any document encoder model. Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. 
Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff’s matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures(§SECREF3). The explicit attention module is linguistically-motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5). We evaluate our model on the CNN/DM dataset BIBREF15 and show in §SECREF4 that it outperforms strong baselines by up to 1.1 ROUGE-L. We find that the latent and explicit structures are complementary, both contributing to the final performance improvement. Our modules are also independent of the underlying encoder-decoder architectures, rendering them flexible to be incorporated into any advanced models. Our analysis quantitatively compares our generated summaries with the baselines and reference documents (§SECREF5). It reveals that structure-aware summarization reduces the bias of copying large sequences from the source inherently making the summaries more abstractive by generating $\sim $15% more novel n-grams compared to a competitive baseline. We also show qualitative examples of the learned interpretable sentence dependency structures, motivating further research for structure-aware modeling. <<</Introduction>>> <<<StructSum Model>>> Consider a source document $\mathbf {x}$ consisting of $n$ sentences $\lbrace \mathbf {s}\rbrace $ where each sentence $\mathbf {s}_i$ is composed of a sequence of words. Document summarization aims to map the source document to a target summary of $m$ words $\lbrace y\rbrace $. A typical neural abstractive summarization system is an attentional sequence-to-sequence model that encodes the input sequence $\mathbf {x}$ as a continuous sequence of tokens $\lbrace w\rbrace $ using a BiLSTM. The encoder produces a set of hidden representations $\lbrace \mathbf {h}\rbrace $. An LSTM decoder maps the previously generated token $y_{t-1}$ to a hidden state and computes a soft attention probability distribution $p(\mathbf {a}_t \mid \mathbf {x}, \mathbf {y}_{1:t-1})$ over encoder hidden states. A distribution $p$ over the vocabulary is computed at every timestep $t$ and the network is trained using negative log likelihood loss : $\text{loss}_t = - \mathrm {log}\:p(y_t) $. The pointer-generator network BIBREF3 augments the standard encoder-decoder architecture by linearly interpolating a pointer based copy mechanism. StructSum uses the pointer-generator network as the base model. Our encoder is a structured hierarchical encoder BIBREF16, which computes hidden representations of the sequence both at the token and sentence level. The model then uses the explicit-structure and implicit-structure attention modules to augment the sentence representations with rich sentence dependency information, leveraging both learned latent structure and additional external structure from other NLP modules. The attended vectors are then passed to the decoder, which produces the output sequence for abstractive summarization. In the rest of this section, we describe our model architecture, shown in Figure FIGREF2, in detail. <<<Encoder>>> Our hierarchical encoder consists of a BiLSTM encoder over words, followed by sentence level BiLSTM encoder. 
The word encoder takes a sequence of words in a sentence $\mathbf {s}_i = \lbrace w\rbrace $ as input and produces contextual hidden representation for each word $\mathbf {h}_{w_{ik}}$, where $w_{ik}$ is the $i^{th}$ word of the $k^{th}$ sentence, $k=1:q$ and $q$ is the number of words in the sentence $\mathbf {s}_i$. The word hidden representations are max-pooled at the sentence level and the result is passed to a BiLSTM sentence-encoder which produces new hidden sentence representations for each sentence $\mathbf {h}_{\mathbf {s}_i}$. The sentence hidden representations are then passed as inputs to latent and explicit structure attention modules. <<</Encoder>>> <<<Latent Structure (LS) Attention>>> We model the latent structure of a source document as a non-projective dependency tree and force a pair-wise attention module to automatically induce this tree. We denote the marginal probability of a dependency edge as $a_{ij} = p(z_{ij}=1)$ where $z_{ij}$ is the latent variable representing the edge from sentence $i$ to sentence $j$. We parameterize with a neural network the unnormalized pair-wise scores between sentences and use the Kirchoff's matrix tree theorem BIBREF14 to compute the marginal probability of a dependency edge between any two sentences. We decompose the representation of sentence $\mathbf {s}_i$ into a semantic vector $\mathbf {g}_{\mathbf {s}_i}$ and structure vector $\mathbf {d}_{\mathbf {s}_i}$ as $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {g}_{\mathbf {s}_i}; \mathbf {d}_{\mathbf {s}_i}]$. Using the structure vectors $\mathbf {d}_{\mathbf {s}_i}, \mathbf {d}_{\mathbf {s}_j}$, we compute a score $f_{ij}$ between sentence pairs $(i,j)$ (where sentence $i$ is the parent node of sentence $j$) and a score for sentence $\mathbf {s}_i$ being the root node $r_i$: where $F_p, F_c$ and $F_r$ are linear-projection functions to build representations for the parent, child and root node respectively and $W_a$ is the weight for bilinear transformation. Here, $f_{ij}$ is the edge weight between nodes $(i,j)$ in a weighted adjacency graph $\mathbf {F}$ and is computed for all pairs of sentences. Using $f_{ij}$ and $r_i$, we compute normalized attention scores $a_{ij}$ and $a_{i}^r $ using a variant of Kirchhoff’s matrix-tree theorem BIBREF12, BIBREF14 where $a_{ij}$ is the marginal probability of a dependency edge between sentences $(i,j)$ and $a_{i}^r $ is the probability of sentence $i$ being the root. Using these probabilistic attention weights and the semantic vectors $\lbrace \mathbf {g}_{\mathbf {s}}\rbrace $, we compute the attended sentence representations as: where $\mathbf {p}_{\mathbf {s}_i}$ is the context vector gathered from possible parents of sentence $i$, $\mathbf {c}_{\mathbf {s}_i}$ is the context vector gathered from possible children, and $\mathbf {g}_{root}$ is a special embedding for the root node. Here, the updated sentence representation $\textit {l}_{\mathbf {s}_i}$ incorporates the implicit structural information. <<</Latent Structure (LS) Attention>>> <<<Explicit Structure (ES) Attention>>> BIBREF2 showed that modeling coreference knowledge through anaphora constraints led to improved clarity or grammaticality in summaries. Taking inspiration from this, we choose coreference links across sentences as our explicit structure. First, we use an off-the-shelf coreference parser to identify coreferring mentions. 
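For readers who want the latent-structure attention spelled out, the NumPy sketch below computes the edge and root marginals $a_{ij}$ and $a_i^r$ with a standard single-root matrix-tree computation (in the style of Koo et al., 2007, which the structured-attention work referenced above builds on). The bilinear scores $f_{ij}$ and root scores $r_i$ are assumed given, and the 0-based indexing and function name are ours, not the authors' code.

```python
import numpy as np

def matrix_tree_marginals(f, r):
    """Edge and root marginals of a non-projective dependency distribution.

    f: (n, n) array of unnormalized scores f_ij (sentence i is the parent of j);
    r: (n,) array of unnormalized root scores. Uses the single-root matrix-tree
    construction with the first Laplacian row replaced by root scores.
    """
    n = f.shape[0]
    A = np.exp(f) * (1.0 - np.eye(n))        # A_ij = exp(f_ij), no self-loops
    root = np.exp(r)
    L = np.diag(A.sum(axis=0)) - A           # Laplacian: L_jj = sum_i A_ij, L_ij = -A_ij
    L_bar = L.copy()
    L_bar[0, :] = root                       # replace the first row with root scores
    L_inv = np.linalg.inv(L_bar)
    not_first = (np.arange(n) != 0).astype(float)
    # a_ij = A_ij * ((1 - d(j,0)) * Linv_jj - (1 - d(i,0)) * Linv_ji)
    a = A * (not_first[None, :] * np.diag(L_inv)[None, :] - not_first[:, None] * L_inv.T)
    a_root = root * L_inv[:, 0]              # probability that sentence j is the root
    return a, a_root

rng = np.random.default_rng(0)
n = 4
a, a_root = matrix_tree_marginals(rng.normal(size=(n, n)), rng.normal(size=n))
# Each sentence receives exactly one parent (possibly the root) in expectation:
print(np.round(a.sum(axis=0) + a_root, 6))   # -> [1. 1. 1. 1.]
```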
We then build a coreference based sentence graph by adding a link between sentences $(\mathbf {s}_i, \mathbf {s}_j)$, if they have any coreferring mentions between them. This representation is then converted into a weighted graph by incorporating a weight on the edge between two sentences that is proportional to the number of unique coreferring mentions between them. We normalize these edge weights for every sentence, effectively building a weighted adjacency matrix $\mathbf {K}$ where $k_{ij}$ is given by: where $m_i$ denotes the set of unique mentions in sentence $\mathbf {s}_i$, ($m_i$ $\bigcap $ $m_j$) denotes the set of co-referring mentions between the two sentences and $z$ is a latent variable representing a link in the coreference sentence graph. $\epsilon = 5e-4$ is a smoothing hyperparameter. <<<Incorporating explicit structure>>> Given contextual sentence representations $\lbrace \mathbf {h}_{\mathbf {s}}\rbrace $ and our explicit coreference based weighted adjacency matrix $\mathbf {K}$, we learn an explicit-structure aware representation as follows: where $F_u$ and $F_e$ are linear projections and $\mathbf {e}_{\mathbf {s}_i}$ is an updated sentence representation which incorporates explicit structural information. Finally, to combine the two structural representations, we concatenate the latent and explicit sentence vectors as: $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {l}_{\mathbf {s}_i};\mathbf {e}_{\mathbf {s}_i}]$ to form encoder sentence representations of the source document. To provide every token representation with context of the entire document, we keep the same formulation as pointer-generator networks, where each token $w_{ij}$ is mapped to its hidden representation $\mathbf {h}_{w_{ij}}$ using a BiLSTM. The token representation is concatenated with their corresponding structure-aware sentence representation: $\mathbf {h}_{w_{ij}} = [\mathbf {h}_{w_{ij}};\mathbf {h}_{\mathbf {s}_i}]$ where $\mathbf {s}_i$ is the sentence to which the word $w_{ij}$ belongs. The resulting structure-aware token representations can be used to directly replace previous token representations as input to the decoder. <<</Incorporating explicit structure>>> <<</Explicit Structure (ES) Attention>>> <<</StructSum Model>>> <<<Experiments>>> <<<Dataset:>>> We evaluate our approach on the CNN/Daily Mail corpus BIBREF15, BIBREF17 and use the same preprocessing steps as shown in BIBREF3. The CNN/DM summaries have an average of 66 tokens ($\sigma = 26$) and 4.9 sentences. Differing from BIBREF3, we truncate source documents to 700 tokens instead of 400 in training and validation sets to model longer documents with more sentences. <<</Dataset:>>> <<<Baselines:>>> We choose the following baselines based on their relatedness to the task and wide applicability: BIBREF3 : We re-implement the base pointer-generator model and the additional coverage mechanism. This forms the base model of our implementation and hence our addition of modeling document structure can be directly compared to it. BIBREF6 : This is a graph-based attention model that is closest in spirit to the method we present in this work. They use a graph attention module to learn attention between sentences, but cannot be easily used to induce interpretable document structures, since their attention scores are not constrained to learn structure. In addition to learning latent and interpretable structured attention between sentences, StructSum also introduces an explicit structure component to inject external document structure. 
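Returning to the explicit-structure module described earlier in this excerpt: the defining equation for $k_{ij}$ is not reproduced in the text above, so the sketch below is only one plausible reading of the prose (a shared-mention count matrix, row-normalized with additive smoothing $\epsilon = 5e-4$). The function name, the placement of the smoothing term, and the toy mention sets are assumptions.

```python
import numpy as np

def coref_adjacency(mentions, eps=5e-4):
    """mentions: list of sets, one per sentence, containing coreference-cluster ids.

    Builds a sentence graph weighted by the number of shared coreferring mentions,
    then row-normalizes with additive smoothing eps; the omitted equation in the
    paper may place the smoothing differently.
    """
    n = len(mentions)
    counts = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                counts[i, j] = len(mentions[i] & mentions[j])
    K = counts + eps
    return K / K.sum(axis=1, keepdims=True)

# Toy document: sentences 0 and 1 share one coreference cluster, 1 and 2 share two.
mentions = [{"c1"}, {"c1", "c2", "c3"}, {"c2", "c3"}]
print(np.round(coref_adjacency(mentions), 3))
```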
BIBREF7 : We compare with the DiffMask experiment with this work. This work introduces a separate content selector which tags words and phrases to be copied. The DiffMask variant is an end-to-end variant like ours and hence is included in our baselines. Our baselines exclude Reinforcement Learning (RL) based systems as they aren't directly comparable, but our approach can be easily introduced in any encoder-decoder based RL system. Since we do not incorporate any pretraining, we do not compare with recent contextual representation based models BIBREF18. <<</Baselines:>>> <<<Hyperparameters:>>> Our encoder uses 256 hidden states for both directions in the one-layer LSTM, and 512 for the single-layer decoder. We use the adagrad optimizer BIBREF19 with a learning rate of 0.15 and an initial accumulator value of 0.1. We do not use dropout and use gradient-clipping with a maximum norm of 2. We selected the best model using early stopping based on the ROUGE score on the validation dataset as our criteria. We also used the coverage penalty during inference as shown in BIBREF7. For decoding, we use beam-search with a beam width of 3. We did not observe significant improvements with higher beam widths. <<</Hyperparameters:>>> <<</Experiments>>> <<<Results>>> Table TABREF8 shows the results of our work on the CNN/DM dataset. We use the standard ROUGE-1,2 and L BIBREF20 F1 metric to evaluate all our summarization output. We first observe that introducing the capability to learn latent structures already improves our performance on ROUGE-L. It suggests that modeling dependencies between sentences helps the model compose better long sequences w.r.t reference compared to baselines. We do not see a significant improvement in ROUGE-1 and ROUGE-2, hinting that we retrieve similar content words as the baseline but compose them into better contiguous sequences. We observe similar results when using explicit structures only with the ES attention module. This shows that adding inductive bias in the form of coreference based sentence graphs helps compose long sequences. Our results here are close to the model that uses just LS attention. This demonstrates that LS attention induces good latent dependencies that make up for pure external coreference knowledge. Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries. Modeling structure and adding inductive biases also helps a model to converge faster where the combined LS+ES Attention model took 126K iterations for training in comparison to 230K iterations required to train the plain pointer-generator network and an additional 3K iterations for the coverage loss BIBREF3. <<</Results>>> <<<Analysis>>> We present below analysis on the quality of summarization as compared to our base model, the pointer-generator network with coverage BIBREF3 and the reference. <<<Analysis of Copying>>> Despite being an abstractive model, the pointer-generator model tends to copy very long sequences of words including whole sentences from the source document (also observed by BIBREF7). Table TABREF15 shows a comparison of the Average Length (Copy Len) of contiguous copied sequences greater than length 3. 
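The excerpt does not spell out how the Copy Len statistic is measured; one plausible reconstruction, assuming whitespace-tokenised text and greedy maximal matching of contiguous spans, is sketched below. The helper names are illustrative, not taken from the authors' code.

```python
def copied_spans(source_tokens, summary_tokens, min_len=4):
    """Greedily find maximal contiguous summary spans that also appear as
    contiguous token sequences in the source; keep spans of length >= min_len
    (the paper counts spans greater than length 3). A plausible reconstruction,
    not necessarily the authors' exact procedure."""
    def occurs(span):
        n, m = len(source_tokens), len(span)
        return any(source_tokens[k:k + m] == span for k in range(n - m + 1))

    spans, i = [], 0
    while i < len(summary_tokens):
        j = i
        while j < len(summary_tokens) and occurs(summary_tokens[i:j + 1]):
            j += 1
        if j - i >= min_len:
            spans.append(summary_tokens[i:j])
        i = j if j > i else i + 1
    return spans

def avg_copy_len(pairs):
    """pairs: iterable of (source_tokens, summary_tokens)."""
    lengths = [len(s) for src, summ in pairs for s in copied_spans(src, summ)]
    return sum(lengths) / max(len(lengths), 1)
```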
We observe that the pointer-generator baseline on average copies 16.61 continuous tokens from the source which shows the extractive nature of the model. This indicates that pointer networks, aimed at combining advantages from abstractive and extractive methods by allowing to copy content from the input document, tend to skew towards copying, particularly in this dataset. A consequence of this is that the model fails to interrupt copying at desirable sequence length. In contrast, modeling document structure through StructSum reduces the length of copied sequences to 9.13 words on average reducing the bias of copying sentences in entirety. This average is closer to the reference (5.07 words) in comparison, without sacrificing task performance. StructSum learns to stop when needed, only copying enough content to generate a coherent summary. <<</Analysis of Copying>>> <<<Content Selection and Abstraction>>> A direct outcome of copying shorter sequences is being able to cover more content from the source document within given length constraints. We observe that this leads to better summarization performance. In our analysis, we compute coverage by computing the number of source sentences from which sequences greater than length 3 are copied in the summary. Table TABREF15 shows a comparison of the coverage of source sentences in the summary content. We see that while the baseline pointer-generator model only copies from 12.1% of the source sentences, we copy content from 24.0% of the source sentences. Additionally, the average length of the summaries produced by StructSum remains mostly unchanged at 66 words on average compared to 61 of the baseline model. This indicates that StructSum produces summaries that draw from a wider selection of sentences from the original article compared to the baseline models. BIBREF21 show that copying more diverse content in isolation does not necessarily lead to better summaries for extractive summarization. Our analysis suggests that this observation might not extend to abstractive summarization methods. The proportion of novel n-grams generated has been used in the literature to measure the degree of abstraction of summarization models BIBREF3. Figure FIGREF17 compares the percentage of novel n-grams in StructSum as compared to the baseline model. Our model produces novel trigrams 21.0% of the time and copies whole sentences only 21.7% of the time. In comparison, the pointer-generator network has only 6.1% novel trigrams and copies entire sentences 51.7% of the time. This shows that StructSum on average generates 14.7% more novel n-grams in comparison to the pointer-generator baseline. <<</Content Selection and Abstraction>>> <<<Layout Bias>>> Neural abstractive summarization methods applied to news articles are typically biased towards selecting and generating summaries based on the first few sentences of the articles. This stems from the structure of news articles, which present the salient information of the article in the first few sentences and expand in the subsequent ones. As a result, the LEAD 3 baseline, which selects the top three sentences of an article, is widely used in the literature as a strong baseline to evaluate summarization models applied to the news domain BIBREF22. BIBREF8 observed that the current summarization models learn to exploit the layout biases of current datasets and offer limited diversity in their outputs. 
To analyze whether StructSum also holds the same layout biases, we compute a distribution of source sentence indices that are used for copying content (copied sequences of length 3 or more are considered). Figure FIGREF19 shows the comparison of coverage of sentences. The coverage of sentences in the reference summaries shows a high proportion of the top 5 sentences of any article being copied to the summary. Additionally, the reference summaries have a smoother tail end distribution with relevant sentences in all positions being copied. It shows that a smooth distribution over all sentences is a desirable feature. We notice that the sequence-to-sequence and pointer-generator framework (with and without coverage enabled) have a stronger bias towards the beginning of the article with a high concentration of copied sentences within the top 5 sentences of the article. In contrast, StructSum improves coverage slightly having a lower concentration of top 5 sentences and copies more tail end sentences than the baselines. However, although the modeling of structure does help, our model has a reasonable gap compared to the reference distribution. We see this as an area of improvement and a direction for future work. <<</Layout Bias>>> <<<Document Structures>>> Similar to BIBREF12, we also look at the quality of the intermediate structures learned by the model. We use the Chu-Liu-Edmonds algorithm BIBREF23, BIBREF24 to extract the maximum spanning tree from the attention score matrix as our sentence structure. Table TABREF20 shows the frequency of various tree depths. We find that the average tree depth is 2.9 and the average proportion of leaf nodes is 88%, consistent with results from tree induction in document classification BIBREF25. Further, we compare latent trees extracted from StructSum with undirected graphs based on coreference and NER. These are constructed similarly to our explicit coreference based sentence graphs in §SECREF5 by linking sentences with overlapping coreference mentions or named entities. We measure the similarity between the learned latent trees and the explicit graphs through precision and recall over edges. The results are shown in Table TABREF22. We observe that our latent graphs have low recall with the linguistic graphs showing that our latent graphs do not capture the coreference or named entity overlaps explicitly, suggesting that the latent and explicit structures capture complementary information. Figure FIGREF24 shows qualitative examples of our induced structures along with generated summaries from the StructSum model. The first example shows a tree with sentence 3 chosen as root, which was the key sentence mentioned in the reference. We notice that in both examples, the sentences in the lower level of the dependency tree contribute less to the generated summary. Along the same lines, in the examples source sentences used to generate summaries tend to be closer to the root node. In the first summary, all sentences from which content was drawn are either the root node or within depth 1 of the root node. Similarly, in the second example, 4 out of 5 source sentences were at depth=1 in the tree. In the two examples, generated summaries diverged from the reference by omitting certain sentences used in the reference. These sentences appear in the lower section of the tree giving us some insights on which sentences were preferred for the summary generation. 
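The tree-extraction step mentioned at the start of this subsection, Chu-Liu-Edmonds over the pairwise attention scores, can be sketched with networkx's Edmonds implementation; the choice of library is an assumption, since the excerpt does not say which implementation was used.

```python
import networkx as nx

def extract_tree(attn):
    """Maximum spanning arborescence (Chu-Liu-Edmonds) over the pairwise
    sentence attention scores; attn[i][j] is the weight of edge i -> j.
    Sketch using networkx, not necessarily the authors' implementation."""
    g = nx.DiGraph()
    n = len(attn)
    for i in range(n):
        for j in range(n):
            if i != j:
                g.add_edge(i, j, weight=float(attn[i][j]))
    tree = nx.maximum_spanning_arborescence(g)
    return tree

def tree_depth(tree):
    """Depth of the extracted tree, as reported in the depth statistics."""
    root = next(node for node in tree.nodes if tree.in_degree(node) == 0)
    return max(nx.shortest_path_length(tree, root).values())
```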
Further, in example 1, we notice that the latent structures cluster sentences based on the main topic of the document. Sentences 1,2,3 differ from sentences 5,6,7 on the topic being discussed and our model has clustered the two sets separately. <<</Document Structures>>> <<</Analysis>>> <<<Related Work>>> Prior to neural models for summarization, document structure played a critical role in generating relevant, diverse and coherent summaries. BIBREF26 formulated document summarization using linguistic features to construct a semantic graph of the document and building a subgraph for the summary. BIBREF27 leverage language-independent syntactic graphs of the source document to do unsupervised document summarization. BIBREF1 parse the source text into a set of AMR graphs, transform the graphs to summary graphs and then generate text from the summary graph. While such systems generate grammatical summaries and preserve linguistic quality BIBREF2, they are often computationally demanding and do not generalize well BIBREF21. Data-driven neural models for summarization fall into extractive BIBREF13, BIBREF28 or abstractive BIBREF29, BIBREF3, BIBREF7, BIBREF30. BIBREF3 proposed a pointer-generator framework that learns to either generate novel in-vocabulary words or copy words from the source. This model has been the foundation for a lot of follow up work on abstractive summarization BIBREF7, BIBREF31, BIBREF32. Our model extends the pointer-generator model by incorporating latent structure and explicit structure knowledge, making our extension applicable to any of the followup work. BIBREF6 present a graph-based attention system to improve the saliency of summaries. While this model learns attention between sentences, it does not induce interpretable intermediate structures. A lot of recent work looks into incorporating structure into neural models. BIBREF32 infuse source side syntactic structure into the copy mechanism of the pointer-generator model. They identify explicit word-level syntactic features based on dependency parses and parts of speech tags and augment the decoder copy mechanism to attend to them. In contrast, we model sentence level dependency structures in the form of latent or induced structures and explicit coreference based structures. We do not identify any heuristic or salient features other than linking dependent sentences. BIBREF33 propose structural compression and coverage regularizers to provide an objective to neural models to generate concise and informative content. Here, they incorporate structural bias about the target summaries but we choose to model the structure of the source sentence to produce rich document representations. BIBREF34 induce latent document structure for aspect based summarization. BIBREF35 use present long document summarization model applicable for scientific papers, which attends to discourse sections in a document, while BIBREF36 propose an unsupervised model for review summarization which learns a latent discourse structure and uses it to summarize a review. BIBREF37 use discourse structures to improve coherence in blog summarization. These are all complementary directions to our work. To our knowledge, we are the first to simultaneously incorporate latent and explicit document structure in a single framework for document summarization. <<</Related Work>>> <<<Conclusion and Future Work>>> To summarize, our contributions are three-fold. 
We propose a framework for incorporating latent and explicit document structure in neural abstractive summarization. We introduce a novel explicit-attention module which can incorporate external linguistic structures, and we show one such application where we use coreference to enhance summarization. We show quantitative improvements on the ROUGE metric over strong summarization baselines and demonstrate improvements in abstraction and coverage through extensive qualitative analysis. StructSum demonstrates performance gains and higher-quality output summaries; a potential direction for future work is to study the role of latent structures in the interpretability of models. Another possible direction is to investigate whether structured representations allow better generalization for transfer learning and summarization in other domains with limited data. <<</Conclusion and Future Work>>> <<</Title>>>

{ "references": [ "Abstract, Related Work" ], "type": "disordered_section" }
2003.00576
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> StructSum: Incorporating Latent and Explicit Sentence Dependencies for Single Document Summarization <<<Abstract>>> Traditional preneural approaches to single document summarization relied on modeling the intermediate structure of a document before generating the summary. In contrast, the current state of the art neural summarization models do not preserve any intermediate structure, resorting to encoding the document as a sequence of tokens. The goal of this work is two-fold: to improve the quality of generated summaries and to learn interpretable document representations for summarization. To this end, we propose incorporating latent and explicit sentence dependencies into single-document summarization models. We use structure-aware encoders to induce latent sentence relations, and inject explicit coreferring mention graph across sentences to incorporate explicit structure. On the CNN/DM dataset, our model outperforms standard baselines and provides intermediate latent structures for analysis. We present an extensive analysis of our summaries and show that modeling document structure reduces copying long sequences and incorporates richer content from the source document while maintaining comparable summary lengths and an increased degree of abstraction. <<</Abstract>>> <<<Introduction>>> Traditional approaches to abstractive summarization have relied on interpretable structured representations such as graph based sentence centrality BIBREF0, AMR parses BIBREF1, discourse based compression and anaphora constraints BIBREF2. On the other hand, state of the art neural approaches to single document summarization encode the document as a sequence of tokens and compose them into a document representation BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Albeit being effective, these systems learn to rely significantly on layout bias associated with the source document BIBREF8 and do not lend themselves easily to interpretation via intermediate structures. Recent work provides evidence that structured representation of text leads to better document representations BIBREF9, BIBREF10. However, structured representations are under-explored in the neural summarization literature. Motivated by this, we propose a structure-aware end-to-end model (§SECREF2) for summarization. Our proposed model, StructSum, augments the existing pointer-generator network BIBREF3 with two novel components: (1) a latent-structure attention module that adapts structured representations BIBREF11, BIBREF12 for the summarization task, and (2) an explicit-structure attention module, that incorporates a coreference graph. The components together model sentence level dependencies in a document generating rich structured representations. The motivation of this work is to provide a framework to induce rich interpretable latent structures and inject external document structures that can be introduced into any document encoder model. Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. 
Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff’s matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures(§SECREF3). The explicit attention module is linguistically-motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5). We evaluate our model on the CNN/DM dataset BIBREF15 and show in §SECREF4 that it outperforms strong baselines by up to 1.1 ROUGE-L. We find that the latent and explicit structures are complementary, both contributing to the final performance improvement. Our modules are also independent of the underlying encoder-decoder architectures, rendering them flexible to be incorporated into any advanced models. Our analysis quantitatively compares our generated summaries with the baselines and reference documents (§SECREF5). It reveals that structure-aware summarization reduces the bias of copying large sequences from the source inherently making the summaries more abstractive by generating $\sim $15% more novel n-grams compared to a competitive baseline. We also show qualitative examples of the learned interpretable sentence dependency structures, motivating further research for structure-aware modeling. <<</Introduction>>> <<<StructSum Model>>> Consider a source document $\mathbf {x}$ consisting of $n$ sentences $\lbrace \mathbf {s}\rbrace $ where each sentence $\mathbf {s}_i$ is composed of a sequence of words. Document summarization aims to map the source document to a target summary of $m$ words $\lbrace y\rbrace $. A typical neural abstractive summarization system is an attentional sequence-to-sequence model that encodes the input sequence $\mathbf {x}$ as a continuous sequence of tokens $\lbrace w\rbrace $ using a BiLSTM. The encoder produces a set of hidden representations $\lbrace \mathbf {h}\rbrace $. An LSTM decoder maps the previously generated token $y_{t-1}$ to a hidden state and computes a soft attention probability distribution $p(\mathbf {a}_t \mid \mathbf {x}, \mathbf {y}_{1:t-1})$ over encoder hidden states. A distribution $p$ over the vocabulary is computed at every timestep $t$ and the network is trained using negative log likelihood loss : $\text{loss}_t = - \mathrm {log}\:p(y_t) $. The pointer-generator network BIBREF3 augments the standard encoder-decoder architecture by linearly interpolating a pointer based copy mechanism. StructSum uses the pointer-generator network as the base model. Our encoder is a structured hierarchical encoder BIBREF16, which computes hidden representations of the sequence both at the token and sentence level. The model then uses the explicit-structure and implicit-structure attention modules to augment the sentence representations with rich sentence dependency information, leveraging both learned latent structure and additional external structure from other NLP modules. The attended vectors are then passed to the decoder, which produces the output sequence for abstractive summarization. In the rest of this section, we describe our model architecture, shown in Figure FIGREF2, in detail. <<<Encoder>>> Our hierarchical encoder consists of a BiLSTM encoder over words, followed by sentence level BiLSTM encoder. 
The word encoder takes a sequence of words in a sentence $\mathbf {s}_i = \lbrace w\rbrace $ as input and produces contextual hidden representation for each word $\mathbf {h}_{w_{ik}}$, where $w_{ik}$ is the $i^{th}$ word of the $k^{th}$ sentence, $k=1:q$ and $q$ is the number of words in the sentence $\mathbf {s}_i$. The word hidden representations are max-pooled at the sentence level and the result is passed to a BiLSTM sentence-encoder which produces new hidden sentence representations for each sentence $\mathbf {h}_{\mathbf {s}_i}$. The sentence hidden representations are then passed as inputs to latent and explicit structure attention modules. <<</Encoder>>> <<<Latent Structure (LS) Attention>>> We model the latent structure of a source document as a non-projective dependency tree and force a pair-wise attention module to automatically induce this tree. We denote the marginal probability of a dependency edge as $a_{ij} = p(z_{ij}=1)$ where $z_{ij}$ is the latent variable representing the edge from sentence $i$ to sentence $j$. We parameterize with a neural network the unnormalized pair-wise scores between sentences and use the Kirchoff's matrix tree theorem BIBREF14 to compute the marginal probability of a dependency edge between any two sentences. We decompose the representation of sentence $\mathbf {s}_i$ into a semantic vector $\mathbf {g}_{\mathbf {s}_i}$ and structure vector $\mathbf {d}_{\mathbf {s}_i}$ as $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {g}_{\mathbf {s}_i}; \mathbf {d}_{\mathbf {s}_i}]$. Using the structure vectors $\mathbf {d}_{\mathbf {s}_i}, \mathbf {d}_{\mathbf {s}_j}$, we compute a score $f_{ij}$ between sentence pairs $(i,j)$ (where sentence $i$ is the parent node of sentence $j$) and a score for sentence $\mathbf {s}_i$ being the root node $r_i$: where $F_p, F_c$ and $F_r$ are linear-projection functions to build representations for the parent, child and root node respectively and $W_a$ is the weight for bilinear transformation. Here, $f_{ij}$ is the edge weight between nodes $(i,j)$ in a weighted adjacency graph $\mathbf {F}$ and is computed for all pairs of sentences. Using $f_{ij}$ and $r_i$, we compute normalized attention scores $a_{ij}$ and $a_{i}^r $ using a variant of Kirchhoff’s matrix-tree theorem BIBREF12, BIBREF14 where $a_{ij}$ is the marginal probability of a dependency edge between sentences $(i,j)$ and $a_{i}^r $ is the probability of sentence $i$ being the root. Using these probabilistic attention weights and the semantic vectors $\lbrace \mathbf {g}_{\mathbf {s}}\rbrace $, we compute the attended sentence representations as: where $\mathbf {p}_{\mathbf {s}_i}$ is the context vector gathered from possible parents of sentence $i$, $\mathbf {c}_{\mathbf {s}_i}$ is the context vector gathered from possible children, and $\mathbf {g}_{root}$ is a special embedding for the root node. Here, the updated sentence representation $\textit {l}_{\mathbf {s}_i}$ incorporates the implicit structural information. <<</Latent Structure (LS) Attention>>> <<<Explicit Structure (ES) Attention>>> BIBREF2 showed that modeling coreference knowledge through anaphora constraints led to improved clarity or grammaticality in summaries. Taking inspiration from this, we choose coreference links across sentences as our explicit structure. First, we use an off-the-shelf coreference parser to identify coreferring mentions. 
We then build a coreference based sentence graph by adding a link between sentences $(\mathbf {s}_i, \mathbf {s}_j)$, if they have any coreferring mentions between them. This representation is then converted into a weighted graph by incorporating a weight on the edge between two sentences that is proportional to the number of unique coreferring mentions between them. We normalize these edge weights for every sentence, effectively building a weighted adjacency matrix $\mathbf {K}$ where $k_{ij}$ is given by: where $m_i$ denotes the set of unique mentions in sentence $\mathbf {s}_i$, ($m_i$ $\bigcap $ $m_j$) denotes the set of co-referring mentions between the two sentences and $z$ is a latent variable representing a link in the coreference sentence graph. $\epsilon = 5e-4$ is a smoothing hyperparameter. <<<Incorporating explicit structure>>> Given contextual sentence representations $\lbrace \mathbf {h}_{\mathbf {s}}\rbrace $ and our explicit coreference based weighted adjacency matrix $\mathbf {K}$, we learn an explicit-structure aware representation as follows: where $F_u$ and $F_e$ are linear projections and $\mathbf {e}_{\mathbf {s}_i}$ is an updated sentence representation which incorporates explicit structural information. Finally, to combine the two structural representations, we concatenate the latent and explicit sentence vectors as: $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {l}_{\mathbf {s}_i};\mathbf {e}_{\mathbf {s}_i}]$ to form encoder sentence representations of the source document. To provide every token representation with context of the entire document, we keep the same formulation as pointer-generator networks, where each token $w_{ij}$ is mapped to its hidden representation $\mathbf {h}_{w_{ij}}$ using a BiLSTM. The token representation is concatenated with their corresponding structure-aware sentence representation: $\mathbf {h}_{w_{ij}} = [\mathbf {h}_{w_{ij}};\mathbf {h}_{\mathbf {s}_i}]$ where $\mathbf {s}_i$ is the sentence to which the word $w_{ij}$ belongs. The resulting structure-aware token representations can be used to directly replace previous token representations as input to the decoder. <<</Incorporating explicit structure>>> <<</Explicit Structure (ES) Attention>>> <<</StructSum Model>>> <<<Experiments>>> <<<Dataset:>>> We evaluate our approach on the CNN/Daily Mail corpus BIBREF15, BIBREF17 and use the same preprocessing steps as shown in BIBREF3. The CNN/DM summaries have an average of 66 tokens ($\sigma = 26$) and 4.9 sentences. Differing from BIBREF3, we truncate source documents to 700 tokens instead of 400 in training and validation sets to model longer documents with more sentences. <<</Dataset:>>> <<<Baselines:>>> We choose the following baselines based on their relatedness to the task and wide applicability: BIBREF3 : We re-implement the base pointer-generator model and the additional coverage mechanism. This forms the base model of our implementation and hence our addition of modeling document structure can be directly compared to it. BIBREF6 : This is a graph-based attention model that is closest in spirit to the method we present in this work. They use a graph attention module to learn attention between sentences, but cannot be easily used to induce interpretable document structures, since their attention scores are not constrained to learn structure. In addition to learning latent and interpretable structured attention between sentences, StructSum also introduces an explicit structure component to inject external document structure. 
BIBREF7 : We compare with the DiffMask experiment with this work. This work introduces a separate content selector which tags words and phrases to be copied. The DiffMask variant is an end-to-end variant like ours and hence is included in our baselines. Our baselines exclude Reinforcement Learning (RL) based systems as they aren't directly comparable, but our approach can be easily introduced in any encoder-decoder based RL system. Since we do not incorporate any pretraining, we do not compare with recent contextual representation based models BIBREF18. <<</Baselines:>>> <<<Hyperparameters:>>> Our encoder uses 256 hidden states for both directions in the one-layer LSTM, and 512 for the single-layer decoder. We use the adagrad optimizer BIBREF19 with a learning rate of 0.15 and an initial accumulator value of 0.1. We do not use dropout and use gradient-clipping with a maximum norm of 2. We selected the best model using early stopping based on the ROUGE score on the validation dataset as our criteria. We also used the coverage penalty during inference as shown in BIBREF7. For decoding, we use beam-search with a beam width of 3. We did not observe significant improvements with higher beam widths. <<</Hyperparameters:>>> <<</Experiments>>> <<<Results>>> Table TABREF8 shows the results of our work on the CNN/DM dataset. We use the standard ROUGE-1,2 and L BIBREF20 F1 metric to evaluate all our summarization output. We first observe that introducing the capability to learn latent structures already improves our performance on ROUGE-L. It suggests that modeling dependencies between sentences helps the model compose better long sequences w.r.t reference compared to baselines. We do not see a significant improvement in ROUGE-1 and ROUGE-2, hinting that we retrieve similar content words as the baseline but compose them into better contiguous sequences. We observe similar results when using explicit structures only with the ES attention module. This shows that adding inductive bias in the form of coreference based sentence graphs helps compose long sequences. Our results here are close to the model that uses just LS attention. This demonstrates that LS attention induces good latent dependencies that make up for pure external coreference knowledge. Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries. Modeling structure and adding inductive biases also helps a model to converge faster where the combined LS+ES Attention model took 126K iterations for training in comparison to 230K iterations required to train the plain pointer-generator network and an additional 3K iterations for the coverage loss BIBREF3. <<</Results>>> <<<Analysis>>> We present below analysis on the quality of summarization as compared to our base model, the pointer-generator network with coverage BIBREF3 and the reference. <<<Analysis of Copying>>> Despite being an abstractive model, the pointer-generator model tends to copy very long sequences of words including whole sentences from the source document (also observed by BIBREF7). Table TABREF15 shows a comparison of the Average Length (Copy Len) of contiguous copied sequences greater than length 3. 
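Returning briefly to the hyperparameters reported above, a minimal sketch of the corresponding optimiser and gradient-clipping setup is given below; it assumes a PyTorch implementation, which the excerpt does not state.

```python
import torch

def configure_optimizer(model):
    """Adagrad with lr 0.15 and initial accumulator 0.1, as reported above.
    Framework choice (PyTorch) is an assumption."""
    return torch.optim.Adagrad(
        model.parameters(), lr=0.15, initial_accumulator_value=0.1
    )

def training_step(model, optimizer, loss):
    """One update with gradient clipping at max norm 2.0.
    Decoding uses beam search with beam width 3 (not shown here)."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
    optimizer.step()
```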
We observe that the pointer-generator baseline on average copies 16.61 continuous tokens from the source which shows the extractive nature of the model. This indicates that pointer networks, aimed at combining advantages from abstractive and extractive methods by allowing to copy content from the input document, tend to skew towards copying, particularly in this dataset. A consequence of this is that the model fails to interrupt copying at desirable sequence length. In contrast, modeling document structure through StructSum reduces the length of copied sequences to 9.13 words on average reducing the bias of copying sentences in entirety. This average is closer to the reference (5.07 words) in comparison, without sacrificing task performance. StructSum learns to stop when needed, only copying enough content to generate a coherent summary. <<</Analysis of Copying>>> <<<Content Selection and Abstraction>>> A direct outcome of copying shorter sequences is being able to cover more content from the source document within given length constraints. We observe that this leads to better summarization performance. In our analysis, we compute coverage by computing the number of source sentences from which sequences greater than length 3 are copied in the summary. Table TABREF15 shows a comparison of the coverage of source sentences in the summary content. We see that while the baseline pointer-generator model only copies from 12.1% of the source sentences, we copy content from 24.0% of the source sentences. Additionally, the average length of the summaries produced by StructSum remains mostly unchanged at 66 words on average compared to 61 of the baseline model. This indicates that StructSum produces summaries that draw from a wider selection of sentences from the original article compared to the baseline models. BIBREF21 show that copying more diverse content in isolation does not necessarily lead to better summaries for extractive summarization. Our analysis suggests that this observation might not extend to abstractive summarization methods. The proportion of novel n-grams generated has been used in the literature to measure the degree of abstraction of summarization models BIBREF3. Figure FIGREF17 compares the percentage of novel n-grams in StructSum as compared to the baseline model. Our model produces novel trigrams 21.0% of the time and copies whole sentences only 21.7% of the time. In comparison, the pointer-generator network has only 6.1% novel trigrams and copies entire sentences 51.7% of the time. This shows that StructSum on average generates 14.7% more novel n-grams in comparison to the pointer-generator baseline. <<</Content Selection and Abstraction>>> <<<Layout Bias>>> Neural abstractive summarization methods applied to news articles are typically biased towards selecting and generating summaries based on the first few sentences of the articles. This stems from the structure of news articles, which present the salient information of the article in the first few sentences and expand in the subsequent ones. As a result, the LEAD 3 baseline, which selects the top three sentences of an article, is widely used in the literature as a strong baseline to evaluate summarization models applied to the news domain BIBREF22. BIBREF8 observed that the current summarization models learn to exploit the layout biases of current datasets and offer limited diversity in their outputs. 
To analyze whether StructSum also holds the same layout biases, we compute a distribution of source sentence indices that are used for copying content (copied sequences of length 3 or more are considered). Figure FIGREF19 shows the comparison of coverage of sentences. The coverage of sentences in the reference summaries shows a high proportion of the top 5 sentences of any article being copied to the summary. Additionally, the reference summaries have a smoother tail end distribution with relevant sentences in all positions being copied. It shows that a smooth distribution over all sentences is a desirable feature. We notice that the sequence-to-sequence and pointer-generator framework (with and without coverage enabled) have a stronger bias towards the beginning of the article with a high concentration of copied sentences within the top 5 sentences of the article. In contrast, StructSum improves coverage slightly having a lower concentration of top 5 sentences and copies more tail end sentences than the baselines. However, although the modeling of structure does help, our model has a reasonable gap compared to the reference distribution. We see this as an area of improvement and a direction for future work. <<</Layout Bias>>> <<<Document Structures>>> Similar to BIBREF12, we also look at the quality of the intermediate structures learned by the model. We use the Chu-Liu-Edmonds algorithm BIBREF23, BIBREF24 to extract the maximum spanning tree from the attention score matrix as our sentence structure. Table TABREF20 shows the frequency of various tree depths. We find that the average tree depth is 2.9 and the average proportion of leaf nodes is 88%, consistent with results from tree induction in document classification BIBREF25. Further, we compare latent trees extracted from StructSum with undirected graphs based on coreference and NER. These are constructed similarly to our explicit coreference based sentence graphs in §SECREF5 by linking sentences with overlapping coreference mentions or named entities. We measure the similarity between the learned latent trees and the explicit graphs through precision and recall over edges. The results are shown in Table TABREF22. We observe that our latent graphs have low recall with the linguistic graphs showing that our latent graphs do not capture the coreference or named entity overlaps explicitly, suggesting that the latent and explicit structures capture complementary information. Figure FIGREF24 shows qualitative examples of our induced structures along with generated summaries from the StructSum model. The first example shows a tree with sentence 3 chosen as root, which was the key sentence mentioned in the reference. We notice that in both examples, the sentences in the lower level of the dependency tree contribute less to the generated summary. Along the same lines, in the examples source sentences used to generate summaries tend to be closer to the root node. In the first summary, all sentences from which content was drawn are either the root node or within depth 1 of the root node. Similarly, in the second example, 4 out of 5 source sentences were at depth=1 in the tree. In the two examples, generated summaries diverged from the reference by omitting certain sentences used in the reference. These sentences appear in the lower section of the tree giving us some insights on which sentences were preferred for the summary generation. 
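The copied-sentence-index distribution described at the start of this subsection can be reconstructed roughly as follows; the procedure and helper names below are a plausible sketch under that reading, not the authors' exact analysis code.

```python
from collections import Counter

def copied_sentence_indices(source_sentences, summary_tokens, min_len=3):
    """Record the index of each source sentence (a token list) from which the
    summary copies a contiguous span of at least min_len tokens."""
    hits = []
    for idx, sent in enumerate(source_sentences):
        copied = False
        for i in range(len(summary_tokens) - min_len + 1):
            window = summary_tokens[i:i + min_len]
            if any(sent[k:k + min_len] == window
                   for k in range(len(sent) - min_len + 1)):
                copied = True
                break
        if copied:
            hits.append(idx)
    return hits

def index_distribution(corpus):
    """corpus: iterable of (source_sentences, summary_tokens) pairs.
    Returns a Counter over copied source-sentence positions."""
    counts = Counter()
    for sents, summ in corpus:
        counts.update(copied_sentence_indices(sents, summ))
    return counts
```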
Further, in example 1, we notice that the latent structures cluster sentences based on the main topic of the document. Sentences 1,2,3 differ from sentences 5,6,7 on the topic being discussed and our model has clustered the two sets separately. <<</Document Structures>>> <<</Analysis>>> <<<Related Work>>> Prior to neural models for summarization, document structure played a critical role in generating relevant, diverse and coherent summaries. BIBREF26 formulated document summarization using linguistic features to construct a semantic graph of the document and building a subgraph for the summary. BIBREF27 leverage language-independent syntactic graphs of the source document to do unsupervised document summarization. BIBREF1 parse the source text into a set of AMR graphs, transform the graphs to summary graphs and then generate text from the summary graph. While such systems generate grammatical summaries and preserve linguistic quality BIBREF2, they are often computationally demanding and do not generalize well BIBREF21. Data-driven neural models for summarization fall into extractive BIBREF13, BIBREF28 or abstractive BIBREF29, BIBREF3, BIBREF7, BIBREF30. BIBREF3 proposed a pointer-generator framework that learns to either generate novel in-vocabulary words or copy words from the source. This model has been the foundation for a lot of follow up work on abstractive summarization BIBREF7, BIBREF31, BIBREF32. Our model extends the pointer-generator model by incorporating latent structure and explicit structure knowledge, making our extension applicable to any of the followup work. BIBREF6 present a graph-based attention system to improve the saliency of summaries. While this model learns attention between sentences, it does not induce interpretable intermediate structures. A lot of recent work looks into incorporating structure into neural models. BIBREF32 infuse source side syntactic structure into the copy mechanism of the pointer-generator model. They identify explicit word-level syntactic features based on dependency parses and parts of speech tags and augment the decoder copy mechanism to attend to them. In contrast, we model sentence level dependency structures in the form of latent or induced structures and explicit coreference based structures. We do not identify any heuristic or salient features other than linking dependent sentences. BIBREF33 propose structural compression and coverage regularizers to provide an objective to neural models to generate concise and informative content. Here, they incorporate structural bias about the target summaries but we choose to model the structure of the source sentence to produce rich document representations. BIBREF34 induce latent document structure for aspect based summarization. BIBREF35 use present long document summarization model applicable for scientific papers, which attends to discourse sections in a document, while BIBREF36 propose an unsupervised model for review summarization which learns a latent discourse structure and uses it to summarize a review. BIBREF37 use discourse structures to improve coherence in blog summarization. These are all complementary directions to our work. To our knowledge, we are the first to simultaneously incorporate latent and explicit document structure in a single framework for document summarization. <<</Related Work>>> <<<Conclusion and Future Work>>> To summarize, our contributions are three-fold. 
We propose a framework for incorporating latent and explicit document structure in neural abstractive summarization. We introduce a novel explicit-attention module which can incorporate external linguistic structures, and we show one such application where we use coreference to enhance summarization. We show quantitative improvements on the ROUGE metric over strong summarization baselines and demonstrate improvements in abstraction and coverage through extensive qualitative analysis. StructSum demonstrates performance gains and higher-quality output summaries; a potential direction for future work is to study the role of latent structures in the interpretability of models. Another possible direction is to investigate whether structured representations allow better generalization for transfer learning and summarization in other domains with limited data. <<</Conclusion and Future Work>>> <<</Title>>>

{ "references": [ "Experiments, Abstract" ], "type": "disordered_section" }
2003.00576
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> StructSum: Incorporating Latent and Explicit Sentence Dependencies for Single Document Summarization <<<Abstract>>> Traditional preneural approaches to single document summarization relied on modeling the intermediate structure of a document before generating the summary. In contrast, the current state of the art neural summarization models do not preserve any intermediate structure, resorting to encoding the document as a sequence of tokens. The goal of this work is two-fold: to improve the quality of generated summaries and to learn interpretable document representations for summarization. To this end, we propose incorporating latent and explicit sentence dependencies into single-document summarization models. We use structure-aware encoders to induce latent sentence relations, and inject explicit coreferring mention graph across sentences to incorporate explicit structure. On the CNN/DM dataset, our model outperforms standard baselines and provides intermediate latent structures for analysis. We present an extensive analysis of our summaries and show that modeling document structure reduces copying long sequences and incorporates richer content from the source document while maintaining comparable summary lengths and an increased degree of abstraction. <<</Abstract>>> <<<Introduction>>> Traditional approaches to abstractive summarization have relied on interpretable structured representations such as graph based sentence centrality BIBREF0, AMR parses BIBREF1, discourse based compression and anaphora constraints BIBREF2. On the other hand, state of the art neural approaches to single document summarization encode the document as a sequence of tokens and compose them into a document representation BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Albeit being effective, these systems learn to rely significantly on layout bias associated with the source document BIBREF8 and do not lend themselves easily to interpretation via intermediate structures. Recent work provides evidence that structured representation of text leads to better document representations BIBREF9, BIBREF10. However, structured representations are under-explored in the neural summarization literature. Motivated by this, we propose a structure-aware end-to-end model (§SECREF2) for summarization. Our proposed model, StructSum, augments the existing pointer-generator network BIBREF3 with two novel components: (1) a latent-structure attention module that adapts structured representations BIBREF11, BIBREF12 for the summarization task, and (2) an explicit-structure attention module, that incorporates a coreference graph. The components together model sentence level dependencies in a document generating rich structured representations. The motivation of this work is to provide a framework to induce rich interpretable latent structures and inject external document structures that can be introduced into any document encoder model. Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. 
Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff’s matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures(§SECREF3). The explicit attention module is linguistically-motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5). We evaluate our model on the CNN/DM dataset BIBREF15 and show in §SECREF4 that it outperforms strong baselines by up to 1.1 ROUGE-L. We find that the latent and explicit structures are complementary, both contributing to the final performance improvement. Our modules are also independent of the underlying encoder-decoder architectures, rendering them flexible to be incorporated into any advanced models. Our analysis quantitatively compares our generated summaries with the baselines and reference documents (§SECREF5). It reveals that structure-aware summarization reduces the bias of copying large sequences from the source inherently making the summaries more abstractive by generating $\sim $15% more novel n-grams compared to a competitive baseline. We also show qualitative examples of the learned interpretable sentence dependency structures, motivating further research for structure-aware modeling. <<</Introduction>>> <<<StructSum Model>>> Consider a source document $\mathbf {x}$ consisting of $n$ sentences $\lbrace \mathbf {s}\rbrace $ where each sentence $\mathbf {s}_i$ is composed of a sequence of words. Document summarization aims to map the source document to a target summary of $m$ words $\lbrace y\rbrace $. A typical neural abstractive summarization system is an attentional sequence-to-sequence model that encodes the input sequence $\mathbf {x}$ as a continuous sequence of tokens $\lbrace w\rbrace $ using a BiLSTM. The encoder produces a set of hidden representations $\lbrace \mathbf {h}\rbrace $. An LSTM decoder maps the previously generated token $y_{t-1}$ to a hidden state and computes a soft attention probability distribution $p(\mathbf {a}_t \mid \mathbf {x}, \mathbf {y}_{1:t-1})$ over encoder hidden states. A distribution $p$ over the vocabulary is computed at every timestep $t$ and the network is trained using negative log likelihood loss : $\text{loss}_t = - \mathrm {log}\:p(y_t) $. The pointer-generator network BIBREF3 augments the standard encoder-decoder architecture by linearly interpolating a pointer based copy mechanism. StructSum uses the pointer-generator network as the base model. Our encoder is a structured hierarchical encoder BIBREF16, which computes hidden representations of the sequence both at the token and sentence level. The model then uses the explicit-structure and implicit-structure attention modules to augment the sentence representations with rich sentence dependency information, leveraging both learned latent structure and additional external structure from other NLP modules. The attended vectors are then passed to the decoder, which produces the output sequence for abstractive summarization. In the rest of this section, we describe our model architecture, shown in Figure FIGREF2, in detail. <<<Encoder>>> Our hierarchical encoder consists of a BiLSTM encoder over words, followed by sentence level BiLSTM encoder. 
The word encoder takes a sequence of words in a sentence $\mathbf {s}_i = \lbrace w\rbrace $ as input and produces contextual hidden representation for each word $\mathbf {h}_{w_{ik}}$, where $w_{ik}$ is the $i^{th}$ word of the $k^{th}$ sentence, $k=1:q$ and $q$ is the number of words in the sentence $\mathbf {s}_i$. The word hidden representations are max-pooled at the sentence level and the result is passed to a BiLSTM sentence-encoder which produces new hidden sentence representations for each sentence $\mathbf {h}_{\mathbf {s}_i}$. The sentence hidden representations are then passed as inputs to latent and explicit structure attention modules. <<</Encoder>>> <<<Latent Structure (LS) Attention>>> We model the latent structure of a source document as a non-projective dependency tree and force a pair-wise attention module to automatically induce this tree. We denote the marginal probability of a dependency edge as $a_{ij} = p(z_{ij}=1)$ where $z_{ij}$ is the latent variable representing the edge from sentence $i$ to sentence $j$. We parameterize with a neural network the unnormalized pair-wise scores between sentences and use the Kirchoff's matrix tree theorem BIBREF14 to compute the marginal probability of a dependency edge between any two sentences. We decompose the representation of sentence $\mathbf {s}_i$ into a semantic vector $\mathbf {g}_{\mathbf {s}_i}$ and structure vector $\mathbf {d}_{\mathbf {s}_i}$ as $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {g}_{\mathbf {s}_i}; \mathbf {d}_{\mathbf {s}_i}]$. Using the structure vectors $\mathbf {d}_{\mathbf {s}_i}, \mathbf {d}_{\mathbf {s}_j}$, we compute a score $f_{ij}$ between sentence pairs $(i,j)$ (where sentence $i$ is the parent node of sentence $j$) and a score for sentence $\mathbf {s}_i$ being the root node $r_i$: where $F_p, F_c$ and $F_r$ are linear-projection functions to build representations for the parent, child and root node respectively and $W_a$ is the weight for bilinear transformation. Here, $f_{ij}$ is the edge weight between nodes $(i,j)$ in a weighted adjacency graph $\mathbf {F}$ and is computed for all pairs of sentences. Using $f_{ij}$ and $r_i$, we compute normalized attention scores $a_{ij}$ and $a_{i}^r $ using a variant of Kirchhoff’s matrix-tree theorem BIBREF12, BIBREF14 where $a_{ij}$ is the marginal probability of a dependency edge between sentences $(i,j)$ and $a_{i}^r $ is the probability of sentence $i$ being the root. Using these probabilistic attention weights and the semantic vectors $\lbrace \mathbf {g}_{\mathbf {s}}\rbrace $, we compute the attended sentence representations as: where $\mathbf {p}_{\mathbf {s}_i}$ is the context vector gathered from possible parents of sentence $i$, $\mathbf {c}_{\mathbf {s}_i}$ is the context vector gathered from possible children, and $\mathbf {g}_{root}$ is a special embedding for the root node. Here, the updated sentence representation $\textit {l}_{\mathbf {s}_i}$ incorporates the implicit structural information. <<</Latent Structure (LS) Attention>>> <<<Explicit Structure (ES) Attention>>> BIBREF2 showed that modeling coreference knowledge through anaphora constraints led to improved clarity or grammaticality in summaries. Taking inspiration from this, we choose coreference links across sentences as our explicit structure. First, we use an off-the-shelf coreference parser to identify coreferring mentions. 
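A minimal sketch of the hierarchical encoder described at the start of this passage (word-level BiLSTM, max-pooling per sentence, sentence-level BiLSTM) is given below. The 256 hidden units per direction follow the hyperparameters reported later; the embedding size and other implementation details are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Word BiLSTM -> max-pool per sentence -> sentence BiLSTM. Sketch only."""

    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                                 bidirectional=True)
        self.sent_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True,
                                 bidirectional=True)

    def forward(self, docs):
        # docs: (batch, n_sents, n_words) token ids, padded with 0
        # (padding handling in the pooling step is omitted for brevity)
        b, n_sents, n_words = docs.shape
        words = self.embed(docs.view(b * n_sents, n_words))
        h_w, _ = self.word_lstm(words)           # (b * n_sents, n_words, 2 * hidden)
        sent_in = h_w.max(dim=1).values          # max-pool over words per sentence
        h_s, _ = self.sent_lstm(sent_in.view(b, n_sents, -1))
        return h_w.view(b, n_sents, n_words, -1), h_s
```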
We then build a coreference based sentence graph by adding a link between sentences $(\mathbf {s}_i, \mathbf {s}_j)$, if they have any coreferring mentions between them. This representation is then converted into a weighted graph by incorporating a weight on the edge between two sentences that is proportional to the number of unique coreferring mentions between them. We normalize these edge weights for every sentence, effectively building a weighted adjacency matrix $\mathbf {K}$ where $k_{ij}$ is given by: where $m_i$ denotes the set of unique mentions in sentence $\mathbf {s}_i$, ($m_i$ $\bigcap $ $m_j$) denotes the set of co-referring mentions between the two sentences and $z$ is a latent variable representing a link in the coreference sentence graph. $\epsilon = 5e-4$ is a smoothing hyperparameter. <<<Incorporating explicit structure>>> Given contextual sentence representations $\lbrace \mathbf {h}_{\mathbf {s}}\rbrace $ and our explicit coreference based weighted adjacency matrix $\mathbf {K}$, we learn an explicit-structure aware representation as follows: where $F_u$ and $F_e$ are linear projections and $\mathbf {e}_{\mathbf {s}_i}$ is an updated sentence representation which incorporates explicit structural information. Finally, to combine the two structural representations, we concatenate the latent and explicit sentence vectors as: $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {l}_{\mathbf {s}_i};\mathbf {e}_{\mathbf {s}_i}]$ to form encoder sentence representations of the source document. To provide every token representation with context of the entire document, we keep the same formulation as pointer-generator networks, where each token $w_{ij}$ is mapped to its hidden representation $\mathbf {h}_{w_{ij}}$ using a BiLSTM. The token representation is concatenated with their corresponding structure-aware sentence representation: $\mathbf {h}_{w_{ij}} = [\mathbf {h}_{w_{ij}};\mathbf {h}_{\mathbf {s}_i}]$ where $\mathbf {s}_i$ is the sentence to which the word $w_{ij}$ belongs. The resulting structure-aware token representations can be used to directly replace previous token representations as input to the decoder. <<</Incorporating explicit structure>>> <<</Explicit Structure (ES) Attention>>> <<</StructSum Model>>> <<<Experiments>>> <<<Dataset:>>> We evaluate our approach on the CNN/Daily Mail corpus BIBREF15, BIBREF17 and use the same preprocessing steps as shown in BIBREF3. The CNN/DM summaries have an average of 66 tokens ($\sigma = 26$) and 4.9 sentences. Differing from BIBREF3, we truncate source documents to 700 tokens instead of 400 in training and validation sets to model longer documents with more sentences. <<</Dataset:>>> <<<Baselines:>>> We choose the following baselines based on their relatedness to the task and wide applicability: BIBREF3 : We re-implement the base pointer-generator model and the additional coverage mechanism. This forms the base model of our implementation and hence our addition of modeling document structure can be directly compared to it. BIBREF6 : This is a graph-based attention model that is closest in spirit to the method we present in this work. They use a graph attention module to learn attention between sentences, but cannot be easily used to induce interpretable document structures, since their attention scores are not constrained to learn structure. In addition to learning latent and interpretable structured attention between sentences, StructSum also introduces an explicit structure component to inject external document structure. 
BIBREF7 : We compare with the DiffMask experiment from this work. This work introduces a separate content selector which tags words and phrases to be copied. The DiffMask variant is an end-to-end variant like ours and hence is included in our baselines. Our baselines exclude Reinforcement Learning (RL) based systems as they are not directly comparable, but our approach can be easily introduced in any encoder-decoder based RL system. Since we do not incorporate any pretraining, we do not compare with recent contextual representation based models BIBREF18. <<</Baselines:>>> <<<Hyperparameters:>>> Our encoder uses 256 hidden states for both directions in the one-layer LSTM, and 512 for the single-layer decoder. We use the adagrad optimizer BIBREF19 with a learning rate of 0.15 and an initial accumulator value of 0.1. We do not use dropout and use gradient-clipping with a maximum norm of 2. We selected the best model using early stopping based on the ROUGE score on the validation dataset as our criterion. We also used the coverage penalty during inference as shown in BIBREF7. For decoding, we use beam-search with a beam width of 3. We did not observe significant improvements with higher beam widths. <<</Hyperparameters:>>> <<</Experiments>>> <<<Results>>> Table TABREF8 shows the results of our work on the CNN/DM dataset. We use the standard ROUGE-1, 2 and L BIBREF20 F1 metrics to evaluate all our summarization output. We first observe that introducing the capability to learn latent structures already improves our performance on ROUGE-L. It suggests that modeling dependencies between sentences helps the model compose better long sequences with respect to the reference compared to baselines. We do not see a significant improvement in ROUGE-1 and ROUGE-2, hinting that we retrieve similar content words as the baseline but compose them into better contiguous sequences. We observe similar results when using explicit structures only with the ES attention module. This shows that adding inductive bias in the form of coreference based sentence graphs helps compose long sequences. Our results here are close to the model that uses just LS attention. This demonstrates that LS attention induces good latent dependencies that make up for pure external coreference knowledge. Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries. Modeling structure and adding inductive biases also helps a model to converge faster: the combined LS+ES Attention model took 126K iterations for training, in comparison to the 230K iterations required to train the plain pointer-generator network and an additional 3K iterations for the coverage loss BIBREF3. <<</Results>>> <<<Analysis>>> We present below an analysis of the quality of summarization as compared to our base model, the pointer-generator network with coverage BIBREF3, and the reference. <<<Analysis of Copying>>> Despite being an abstractive model, the pointer-generator model tends to copy very long sequences of words including whole sentences from the source document (also observed by BIBREF7). Table TABREF15 shows a comparison of the Average Length (Copy Len) of contiguous copied sequences greater than length 3.
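As an aside, such a copied-span statistic can be measured with a simple greedy matching pass; the sketch below is purely illustrative, and its matching heuristic is an assumption rather than the exact procedure behind the table.

```python
from typing import List

def copied_span_lengths(summary: List[str], source: List[str], min_len: int = 3) -> List[int]:
    """Lengths of maximal contiguous summary spans (>= min_len tokens) found verbatim in the source."""
    src_text = " " + " ".join(source) + " "
    lengths, i = [], 0
    while i < len(summary):
        j = i
        # Greedily extend the span while it still matches somewhere in the source.
        while j < len(summary) and (" " + " ".join(summary[i:j + 1]) + " ") in src_text:
            j += 1
        if j - i >= min_len:
            lengths.append(j - i)
            i = j
        else:
            i += 1
    return lengths

summary = "add the butter and sugar to the bowl then mix well".split()
source = "first add the butter and sugar to the bowl second mix until smooth".split()
spans = copied_span_lengths(summary, source)
print(spans, sum(spans) / max(len(spans), 1))  # copied-span lengths and their average
```

Matching on space-joined token strings is a cheap way to respect word boundaries without building a suffix index.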
We observe that the pointer-generator baseline on average copies 16.61 continuous tokens from the source which shows the extractive nature of the model. This indicates that pointer networks, aimed at combining advantages from abstractive and extractive methods by allowing to copy content from the input document, tend to skew towards copying, particularly in this dataset. A consequence of this is that the model fails to interrupt copying at desirable sequence length. In contrast, modeling document structure through StructSum reduces the length of copied sequences to 9.13 words on average reducing the bias of copying sentences in entirety. This average is closer to the reference (5.07 words) in comparison, without sacrificing task performance. StructSum learns to stop when needed, only copying enough content to generate a coherent summary. <<</Analysis of Copying>>> <<<Content Selection and Abstraction>>> A direct outcome of copying shorter sequences is being able to cover more content from the source document within given length constraints. We observe that this leads to better summarization performance. In our analysis, we compute coverage by computing the number of source sentences from which sequences greater than length 3 are copied in the summary. Table TABREF15 shows a comparison of the coverage of source sentences in the summary content. We see that while the baseline pointer-generator model only copies from 12.1% of the source sentences, we copy content from 24.0% of the source sentences. Additionally, the average length of the summaries produced by StructSum remains mostly unchanged at 66 words on average compared to 61 of the baseline model. This indicates that StructSum produces summaries that draw from a wider selection of sentences from the original article compared to the baseline models. BIBREF21 show that copying more diverse content in isolation does not necessarily lead to better summaries for extractive summarization. Our analysis suggests that this observation might not extend to abstractive summarization methods. The proportion of novel n-grams generated has been used in the literature to measure the degree of abstraction of summarization models BIBREF3. Figure FIGREF17 compares the percentage of novel n-grams in StructSum as compared to the baseline model. Our model produces novel trigrams 21.0% of the time and copies whole sentences only 21.7% of the time. In comparison, the pointer-generator network has only 6.1% novel trigrams and copies entire sentences 51.7% of the time. This shows that StructSum on average generates 14.7% more novel n-grams in comparison to the pointer-generator baseline. <<</Content Selection and Abstraction>>> <<<Layout Bias>>> Neural abstractive summarization methods applied to news articles are typically biased towards selecting and generating summaries based on the first few sentences of the articles. This stems from the structure of news articles, which present the salient information of the article in the first few sentences and expand in the subsequent ones. As a result, the LEAD 3 baseline, which selects the top three sentences of an article, is widely used in the literature as a strong baseline to evaluate summarization models applied to the news domain BIBREF22. BIBREF8 observed that the current summarization models learn to exploit the layout biases of current datasets and offer limited diversity in their outputs. 
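(As a brief aside before this analysis: the novel n-gram proportion reported in the previous subsection is straightforward to compute. The sketch below assumes whitespace-tokenized text and is purely illustrative.)

```python
from typing import List, Set, Tuple

def ngrams(tokens: List[str], n: int) -> Set[Tuple[str, ...]]:
    # All n-grams occurring in a token sequence.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_rate(summary: List[str], source: List[str], n: int = 3) -> float:
    # Fraction of summary n-grams that never appear in the source article.
    summary_ngrams = ngrams(summary, n)
    if not summary_ngrams:
        return 0.0
    return len(summary_ngrams - ngrams(source, n)) / len(summary_ngrams)

source = "the cat sat on the mat near the front door".split()
summary = "the cat rested on a mat by the door".split()
print(round(novel_ngram_rate(summary, source, n=3), 3))
```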
To analyze whether StructSum also holds the same layout biases, we compute a distribution of source sentence indices that are used for copying content (copied sequences of length 3 or more are considered). Figure FIGREF19 shows the comparison of coverage of sentences. The coverage of sentences in the reference summaries shows a high proportion of the top 5 sentences of any article being copied to the summary. Additionally, the reference summaries have a smoother tail end distribution with relevant sentences in all positions being copied. It shows that a smooth distribution over all sentences is a desirable feature. We notice that the sequence-to-sequence and pointer-generator framework (with and without coverage enabled) have a stronger bias towards the beginning of the article with a high concentration of copied sentences within the top 5 sentences of the article. In contrast, StructSum improves coverage slightly having a lower concentration of top 5 sentences and copies more tail end sentences than the baselines. However, although the modeling of structure does help, our model has a reasonable gap compared to the reference distribution. We see this as an area of improvement and a direction for future work. <<</Layout Bias>>> <<<Document Structures>>> Similar to BIBREF12, we also look at the quality of the intermediate structures learned by the model. We use the Chu-Liu-Edmonds algorithm BIBREF23, BIBREF24 to extract the maximum spanning tree from the attention score matrix as our sentence structure. Table TABREF20 shows the frequency of various tree depths. We find that the average tree depth is 2.9 and the average proportion of leaf nodes is 88%, consistent with results from tree induction in document classification BIBREF25. Further, we compare latent trees extracted from StructSum with undirected graphs based on coreference and NER. These are constructed similarly to our explicit coreference based sentence graphs in §SECREF5 by linking sentences with overlapping coreference mentions or named entities. We measure the similarity between the learned latent trees and the explicit graphs through precision and recall over edges. The results are shown in Table TABREF22. We observe that our latent graphs have low recall with the linguistic graphs showing that our latent graphs do not capture the coreference or named entity overlaps explicitly, suggesting that the latent and explicit structures capture complementary information. Figure FIGREF24 shows qualitative examples of our induced structures along with generated summaries from the StructSum model. The first example shows a tree with sentence 3 chosen as root, which was the key sentence mentioned in the reference. We notice that in both examples, the sentences in the lower level of the dependency tree contribute less to the generated summary. Along the same lines, in the examples source sentences used to generate summaries tend to be closer to the root node. In the first summary, all sentences from which content was drawn are either the root node or within depth 1 of the root node. Similarly, in the second example, 4 out of 5 source sentences were at depth=1 in the tree. In the two examples, generated summaries diverged from the reference by omitting certain sentences used in the reference. These sentences appear in the lower section of the tree giving us some insights on which sentences were preferred for the summary generation. 
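For reference, the tree-extraction step mentioned above (Chu-Liu-Edmonds over the attention score matrix) can be approximated with an off-the-shelf implementation; the sketch below uses networkx's Edmonds-based maximum spanning arborescence together with an artificial root node, both of which are assumptions made for illustration rather than the paper's exact code path.

```python
# Requires networkx >= 2.x for maximum_spanning_arborescence (Edmonds' algorithm).
import networkx as nx
import numpy as np

def extract_sentence_tree(edge_scores: np.ndarray, root_scores: np.ndarray):
    """Return parent->child edges of the highest-scoring dependency tree over sentences."""
    n = len(root_scores)
    g = nx.DiGraph()
    for j in range(n):
        g.add_edge("ROOT", j, weight=float(root_scores[j]))  # artificial root edge
        for i in range(n):
            if i != j:
                g.add_edge(i, j, weight=float(edge_scores[i, j]))
    tree = nx.maximum_spanning_arborescence(g, attr="weight")
    return list(tree.edges())

rng = np.random.default_rng(0)
print(extract_sentence_tree(rng.random((4, 4)), rng.random(4)))
```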
Further, in example 1, we notice that the latent structures cluster sentences based on the main topic of the document. Sentences 1, 2, 3 differ from sentences 5, 6, 7 in the topic being discussed, and our model has clustered the two sets separately. <<</Document Structures>>> <<</Analysis>>> <<<Related Work>>> Prior to neural models for summarization, document structure played a critical role in generating relevant, diverse and coherent summaries. BIBREF26 formulated document summarization using linguistic features to construct a semantic graph of the document and build a subgraph for the summary. BIBREF27 leverage language-independent syntactic graphs of the source document to perform unsupervised document summarization. BIBREF1 parse the source text into a set of AMR graphs, transform the graphs to summary graphs and then generate text from the summary graph. While such systems generate grammatical summaries and preserve linguistic quality BIBREF2, they are often computationally demanding and do not generalize well BIBREF21. Data-driven neural models for summarization fall into extractive BIBREF13, BIBREF28 or abstractive BIBREF29, BIBREF3, BIBREF7, BIBREF30. BIBREF3 proposed a pointer-generator framework that learns to either generate novel in-vocabulary words or copy words from the source. This model has been the foundation for much follow-up work on abstractive summarization BIBREF7, BIBREF31, BIBREF32. Our model extends the pointer-generator model by incorporating latent structure and explicit structure knowledge, making our extension applicable to any of this follow-up work. BIBREF6 present a graph-based attention system to improve the saliency of summaries. While this model learns attention between sentences, it does not induce interpretable intermediate structures. Much recent work looks into incorporating structure into neural models. BIBREF32 infuse source side syntactic structure into the copy mechanism of the pointer-generator model. They identify explicit word-level syntactic features based on dependency parses and parts of speech tags and augment the decoder copy mechanism to attend to them. In contrast, we model sentence level dependency structures in the form of latent or induced structures and explicit coreference based structures. We do not identify any heuristic or salient features other than linking dependent sentences. BIBREF33 propose structural compression and coverage regularizers to provide an objective for neural models to generate concise and informative content. Here, they incorporate structural bias about the target summaries, but we choose to model the structure of the source sentences to produce rich document representations. BIBREF34 induce latent document structure for aspect based summarization. BIBREF35 present a long document summarization model applicable to scientific papers, which attends to discourse sections in a document, while BIBREF36 propose an unsupervised model for review summarization which learns a latent discourse structure and uses it to summarize a review. BIBREF37 use discourse structures to improve coherence in blog summarization. These are all complementary directions to our work. To our knowledge, we are the first to simultaneously incorporate latent and explicit document structure in a single framework for document summarization. <<</Related Work>>> <<<Conclusion and Future Work>>> To summarize, our contributions are three-fold.
We propose a framework for incorporating latent and explicit document structure in neural abstractive summarization. We introduce a novel explicit-attention module which can incorporate external linguistic structures, and we show one such application where we use coreference to enhance summarization. We show quantitative improvements on the ROUGE metric over strong summarization baselines and demonstrate improvements in abstraction and coverage through extensive qualitative analysis. StructSum has demonstrated performance gain and higher quality output summaries; with a potential direction to study the role of latent structures in the interpretability of models in the future. Another possible direction is to investigate whether structured representations allow better generalization for transfer learning and summarization in other domains with limited data. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Abstract, Analysis" ], "type": "disordered_section" }
1909.02635
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Effective Use of Transformer Networks for Entity Tracking <<<Abstract>>> Tracking entities in procedural language requires understanding the transformations arising from actions on entities as well as those entities' interactions. While self-attention-based pre-trained language encoders like GPT and BERT have been successfully applied across a range of natural language understanding tasks, their ability to handle the nuances of procedural texts is still untested. In this paper, we explore the use of pre-trained transformer networks for entity tracking tasks in procedural text. First, we test standard lightweight approaches for prediction with pre-trained transformers, and find that these approaches underperform even simple baselines. We show that much stronger results can be attained by restructuring the input to guide the transformer model to focus on a particular entity. Second, we assess the degree to which transformer networks capture the process dynamics, investigating such factors as merged entities and oblique entity references. On two different tasks, ingredient detection in recipes and QA over scientific processes, we achieve state-of-the-art results, but our models still largely attend to shallow context clues and do not form complex representations of intermediate entity or process state. <<</Abstract>>> <<<Introduction>>> Transformer based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. However, these models are still better at capturing syntax than they are at more entity-focused aspects like coreference BIBREF6, BIBREF7; moreover, existing state-of-the-art architectures for such tasks often perform well looking at only local entity mentions BIBREF8, BIBREF9, BIBREF10 rather than forming truly global entity representations BIBREF11, BIBREF12. Thus, performance on these tasks does not form sufficient evidence that these representations strongly capture entity semantics. Better understanding the models' capabilities requires testing them in domains involving complex entity interactions over longer texts. One such domain is that of procedural language, which is strongly focused on tracking the entities involved and their interactions BIBREF13, BIBREF14, BIBREF15. This paper investigates the question of how transformer-based models form entity representations and what these representations capture. We expect that after fine-tuning on a target task, a transformer's output representations should somehow capture relevant entity properties, in the sense that these properties can be extracted by shallow classification either from entity tokens or from marker tokens. However, we observe that such “post-conditioning” approaches don't perform significantly better than rule-based baselines on the tasks we study. 
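(To make the notion of a shallow "post-conditioning" readout concrete, a minimal sketch is given below: the transformer states at the entity's token positions are pooled and fed to a linear probe. All names, shapes, and positions are illustrative assumptions, not the paper's exact architecture.)

```python
import torch
import torch.nn as nn

hidden_size, num_classes = 16, 2
probe = nn.Linear(hidden_size, num_classes)      # shallow classifier on top of frozen states

token_states = torch.randn(1, 10, hidden_size)   # stand-in transformer outputs (batch=1, 10 tokens)
entity_positions = [3, 4]                        # hypothetical positions of the entity's tokens

entity_vector = token_states[0, entity_positions].mean(dim=0)  # pool the entity's token states
print(probe(entity_vector).softmax(dim=-1))                    # per-class probabilities
```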
We address this by proposing entity-centric ways of structuring input to the transformer networks, using the entity to guide the intrinsic self-attention and form entity-centric representations for all the tokens. We find that our proposed methods lead to a significant improvement in performance over baselines. Although our entity-specific application of transformers is more effective at the entity tracking tasks we study, we perform additional analysis and find that these tasks still do not encourage transformers to form truly deep entity representations. Our performance gain is largely from better understanding of verb semantics in terms of associating process actions with the entity the paragraph is conditioned on. The model also does not specialize in “tracking” composed entities per se, again using surface clues like verbs to identify the components involved in a new composition. We evaluate our models on two datasets specifically designed to invoke procedural understanding: (i) Recipes BIBREF16, and (ii) ProPara BIBREF14. For the Recipes dataset, we classify whether an ingredient was affected in a certain step, which requires understanding when ingredients are combined or the focus of the recipe shifts away from them. The ProPara dataset involves answering a more complex set of questions about physical state changes of components in scientific processes. To handle this more structured setting, our transformer produces potentials consumed by a conditional random field which predicts entity states over time. Using a unidirectional GPT-based architecture, we achieve state-of-the-art results on both datasets; nevertheless, analysis shows that our approach still falls short of capturing the full space of entity interactions. <<</Introduction>>> <<<Background: Process Understanding>>> Procedural text is a domain of text concerned with understanding some kind of process, such as a phenomenon arising in nature or a set of instructions to perform a task. Entity tracking is a core component of understanding such texts. BIBREF14 introduced the ProPara dataset to probe understanding of scientific processes. The goal is to track the sequence of physical state changes (creation, destruction, and movement) entities undergo over long sequences of process steps. Past work involves both modeling entities across time BIBREF17 and capturing structural constraints inherent in the processes BIBREF18, BIBREF19. Figure FIGREF2b shows an example of the dataset posed as a structured prediction task, as in BIBREF19. For such a domain, it is crucial to capture implicit event occurrences beyond explicit entity mentions. For example, in “fuel goes into the generator. The generator converts mechanical energy into electrical energy”, the fuel is implicitly destroyed in the process. BIBREF15 introduced the task of detecting state changes in recipes in the Recipes dataset and proposed an entity-centric memory network neural architecture for simulating action dynamics. Figure FIGREF2a shows an example from the Recipes dataset with a grid showing ingredient presence. We focus specifically on this core problem of ingredient detection; while only one of the sub-tasks associated with their dataset, it reflects some complex semantics involving understanding the current state of the recipe. Tracking of ingredients in the cooking domain is challenging owing to the compositional nature of recipes, whereby ingredients mix together and are aliased as intermediate compositions.
We pose both of these procedural understanding tasks as classification problems, predicting the state of the entity at each timestep from a set of pre-defined classes. In Figure FIGREF2, these classes correspond to either the presence (1) or absence (0) or the sequence of state changes create (C), move (M), destroy (D), exists (E), and none (O). State-of-the-art approaches on these tasks are inherently entity-centric. Separately, it has been shown that entity-centric language modeling in a continuous framework can lead to better performance for LM related tasks BIBREF20, BIBREF21. Moreover, external data has shown to be useful for modeling process understanding tasks in prior work BIBREF18, BIBREF15, suggesting that pre-trained models may be effective. With such tasks in place, a strong model will ideally learn to form robust entity-centric representation at each time step instead of solely relying on extracting information from the local entity mentions. This expectation is primarily due to the evolving nature of the process domain where entities undergo complex interactions, form intermediate compositions, and are often accompanied by implicit state changes. We now investigate to what extent this is true in a standard application of transformer models to this problem. <<</Background: Process Understanding>>> <<<Studying Basic Transformer Representations for Entity Tracking>>> <<<Post-conditioning Models>>> The most natural way to use the pre-trained transformer architectures for the entity tracking tasks is to simply encode the text sequence and then attempt to “read off” entity states from the contextual transformer representation. We call this approach post-conditioning: the transformer runs with no knowledge of which entity or entities we are going to make predictions on, but we only condition on the target entity after the transformer stage. Figure FIGREF4 depicts this model. Formally, for a labelled pair $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$, we encode the tokenized sequence of steps up to the current timestep (the sentences are separated by using a special [SEP] token), independent of the entity. We denote by $X=[h_{1}, h_{2},\dots , h_{m}]$ the contextualized hidden representation of the $m$ input tokens from the last layer, and by $\textstyle g_{e}\!=\!\!\!\sum \limits _{\text{ent toks}}\!emb(e_i)$ the entity representation for post conditioning. We now use one of the following two ways to make an entity-specific prediction: <<<Task Specific Input Token>>> We append a $\texttt {[CLS]}$ token to the input sequence and use the output representation of the $\texttt {[CLS]}$ token denoted by $h_{ \texttt {[CLS]}}$ concatenated with the learned BPE embeddings of the entity as the representation $c_{e,t}$ for our entity tracking system. We then use a linear layer over it to get class probabilities: The aim of the [CLS] token is to encode information related to general entity related semantics participating in the recipe (sentence priors). We then use a single linear layer to learn sentence priors and entity priors independently, without strong interaction. We call this model GPT$_{indep}$. <<</Task Specific Input Token>>> <<<Entity Based Attention>>> Second, we explore a more fine-grained way of using the GPT model outputs. Specifically, we use bilinear attention between $g_e$ and the transformer output for the process tokens $X$ to get a contextual representation $c_{e,t}$ for a given entity. 
Finally, using a feed-forward network followed by softmax layer gives us the class probabilities: The bilinear attention over the contextual representations of the process tokens allows the model to fetch token content relevant to that particular entity. We call this model GPT$_{attn}$. <<</Entity Based Attention>>> <<</Post-conditioning Models>>> <<<Results and Observations>>> We evaluate the discussed post-conditioning models on the ingredient detection task of the Recipes dataset. To benchmark the performance, we compare to three rule-based baselines. This includes (i) Majority Class, (ii) Exact Match of an ingredient $e$ in recipe step $s_t$, and (iii) First Occurrence, where we predict the ingredient to be present in all steps following the first exact match. These latter two baselines capture natural modes of reasoning about the dataset: an ingredient is used when it is directly mentioned, or it is used in every step after it is mentioned, reflecting the assumption that a recipe is about incrementally adding ingredients to an ever-growing mixture. We also construct a LSTM baseline to evaluate the performance of ELMo embeddings (ELMo$_{token}$ and ELMo$_{sent}$) BIBREF22 compared to GPT. Table TABREF10 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. Using the ground truth about ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recall distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined). Note that Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively. As observed from the results, the post-conditioning frameworks underperform compared to the First Occ baseline. While the CR values appear to be high, which would suggest that the model is capturing the addition of ingredients to the mixture, we note that this value is also lower than the corresponding value for First Occ. This result suggests that the model may be approximating the behavior of this baseline, but doing so poorly. The unconditional self-attention mechanism of the transformers does not seem sufficient to capture the entity details at each time step beyond simple presence or absence. Moreover, we see that GPT$_{indep}$ performs somewhat comparably to GPT$_{attn}$, suggesting that consuming the transformer's output with simple attention is not able to really extract the right entity representation. For ProPara, we observe similar performance trends where the post-conditioning model performed below par with the state-of-the-art architectures. <<</Results and Observations>>> <<</Studying Basic Transformer Representations for Entity Tracking>>> <<<Entity-Conditioned Models>>> The post-conditioning framework assumes that the transformer network can form strong representations containing entity information accessible in a shallow way based on the target entity. We now propose a model architecture which more strongly conditions on the entity as a part of the intrinsic self-attention mechanism of the transformers. Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. 
After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for. <<<Sentence Level vs. Document Level>>> As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \times m$ input sequences for fine tuning our classification task. <<</Sentence Level vs. Document Level>>> <<<Training Details>>> In most experiments, we initialize the network with the weights of the standard pre-trained GPT model, then subsequently do either domain specific LM fine-tuning and supervised task specific fine-tuning. <<<Domain Specific LM fine-tuning>>> For some procedural domains, we have access to additional unlabeled data. To adapt the LM to capture domain intricacies, we fine-tune the transformer network on this unlabeled corpus. <<</Domain Specific LM fine-tuning>>> <<<Supervised Task Fine-Tuning>>> After the domain specific LM fine-tuning, we fine-tune our network parameters for the end task of entity tracking. For fine-tuning for the task, we have a labelled dataset which we denote by $\mathcal {C}$, the set of labelled pairs $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$ for a given process. The input is converted according to our chosen entity conditioning procedure, then fed through the pre-trained network. In addition, we observed that adding the language model loss during task specific fine-tuning leads to better performance as well, possibly because it adapts the LM to our task-specific input formulation. Thus, <<</Supervised Task Fine-Tuning>>> <<</Training Details>>> <<<Experiments: Ingredient Detection>>> We first evaluate the proposed entity conditioned self-attention model on the Recipes dataset to compare the performance with the post-conditioning variants. <<<Systems to Compare>>> We use the pre-trained GPT architecture in the proposed entity conditioned framework with all its variants. BERT mainly differs in that it is bidirectional, though we also use the pre-trained [CLS] and [SEP] tokens instead of introducing new tokens in the input vocabulary and training them from scratch during fine-tuning. Owing to the lengths of the processes, all our experiments are performed on BERT$_{BASE}$. <<<Neural Process Networks>>> The most significant prior work on this dataset is the work of BIBREF15. However, their data condition differs significantly from ours: they train on a large noisy training set and do not use any of the high-quality labeled data, instead treating it as dev and test data. Consequently, their model achieves low performance, roughly 56 $F_1 $ while ours achieves $82.5$ $F_1$ (though these are not the exact same test set). 
Moreover, theirs underperforms the first occurrence baseline, which calls into question the value of that training data. Therefore, we do not compare to this model directly. We use the small set of human-annotated data for our probing task. Our train/dev/test split consists of $600/100/175$ recipes, respectively. <<</Neural Process Networks>>> <<</Systems to Compare>>> <<<Results>>> Table TABREF20 compares the overall performances of our proposed models. Our best ET$_{GPT}$ model achieves an $F_1$ score of $82.50$. Comparing to the baselines (Majority through First) and post-conditioned models, we see that the early entity conditioning is critical to achieve high performance. Although the First model still achieves the highest CR, due to operating in a high-recall regime, we see that the ET$_{GPT}$ models all significantly outperform the post-conditioning models on this metric, indicating better modeling of these compositions. Both recall and precision are substantially increased compared to these baseline models. Interestingly, the ELMo-based model under-performs the first-occurrence baseline, indicating that the LSTM model is not learning much in terms of recognizing complex entity semantics grounded in long term contexts. Comparing the four variants of structuring input in the proposed architectures as discussed in Section SECREF4, we observe that the document-level, entity-first model is the best performing variant. Given the left-to-right unidirectional transformer architecture, this model notably forms target-specific representations for all process tokens, compared to using the transformer self-attention only to extract entity-specific information at the end of the process. <<</Results>>> <<<Ablations>>> We perform ablations to evaluate the model's dependency on the context and on the target ingredient. Table TABREF23 shows the results for these ablations. <<<Ingredient Specificity>>> In the “no ingredient” baseline (w/o ing.), the model is not provided with the specific ingredient information. Table TABREF23 shows that while not being a strong baseline, the model achieves decent overall accuracy, with the drop in UR being higher compared to CR. This indicates that there are some generic indicators (mixture) that it can pick up to try to guess at overall ingredient presence or absence. <<</Ingredient Specificity>>> <<<Context Importance>>> We compare with a “no context” model (w/o context) which ignores the previous context and only uses the current recipe step in determining the ingredient's presence. Table TABREF23 shows that such a model is able to perform surprisingly well, nearly as well as the first occurrence baseline. This is because the model can often recognize words like verbs (for example, add) or nouns (for example, mixture) that indicate many ingredients are being used, and can do well without really tracking any specific entity as desired for the task. <<</Context Importance>>> <<</Ablations>>> <<</Experiments: Ingredient Detection>>> <<<State Change Detection (ProPara)>>> We now focus on a structured task to evaluate the performance of the entity tracking architecture in capturing the structural information in the continuous self-attention framework. For this, we use the ProPara dataset and evaluate our proposed model on the comprehension task. Figure FIGREF2b shows an example of a short instance from the ProPara dataset.
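Since the figure itself is not reproduced here, a toy stand-in for such an instance might look like the sketch below; the steps and tags are invented purely for illustration and simply reuse the label set defined earlier (C, M, D, E, O).

```python
# Toy stand-in for a ProPara-style instance; steps and tags are invented for illustration.
# Tags per step per entity: C = create, M = move, D = destroy, E = exists, O = none.
instance = {
    "steps": [
        "Water evaporates from the ocean.",
        "The water vapor rises into the air.",
        "The vapor condenses into clouds.",
    ],
    "states": {
        "water":  ["M", "M", "D"],   # illustrative tags only, not gold annotations
        "clouds": ["O", "O", "C"],
    },
}
for entity, tags in instance["states"].items():
    print(entity, tags)
```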
The task of identifying state change follows a structure satisfying the existence cycle; for example, an entity cannot be created after destruction. Our prior work BIBREF19 proposed a structured model for the task that achieved state-of-the-art performance. We adapt our proposed entity tracking transformer models to this structured prediction framework, capturing creation, movement, existence (distinct from movement or creation), destruction, and non-existence. We use the standard evaluation scheme of the ProPara dataset, which is framed as answering the following categories of questions: (Cat-1) Is e created (destroyed, moved) in the process?, (Cat-2) When (step #) is e created (destroyed, moved)?, (Cat-3) Where is e created/destroyed/moved from/to? <<</State Change Detection (ProPara)>>> <<</Entity-Conditioned Models>>> <<<Challenging Task Phenomena>>> Based on the results in the previous section, our models clearly achieve strong performance compared to past approaches. We now revisit the challenging cases discussed in Section SECREF2 to see if our entity tracking approaches are modeling sophisticated entity phenomena as advertised. For both datasets and associated tasks, we isolate the specific set of challenging cases grounded in tracking (i) intermediate compositions formed as part of a combination of entities, leading to no explicit mention, and (ii) implicit events which change entities' states without explicit mention of the effects. <<<Ingredient Detection>>> For Recipes, we mainly want to investigate cases of ingredients getting re-engaged in the recipe not in a raw form but combined with other ingredients, and hence with no explicit mention. For example, eggs in step 4 of Figure FIGREF2a exemplify this case. The performance in such cases is indicative of how strongly the model can track compositional entities. We also examine the performance for cases where the ingredient is referred to by some other name. <<<Intermediate Compositions>>> Formally, we pick the set of examples where the ground truth is a transition from $0 \rightarrow 1$ (not present to present) and the 1 is a “combined” case. Table TABREF31 shows the model's performance on this subset of cases, of which there are 1049 in the test set. The model achieves an accuracy of 51.1% on these bigrams, which is relatively low given the overall model performance. In the error cases, the model defaults to the $1\rightarrow 1$ pattern indicative of the First Occ baseline. <<</Intermediate Compositions>>> <<<Hypernymy and Synonymy>>> We observe the model is able to capture ingredients based on their hypernyms (nuts $\rightarrow $ pecans, salad $\rightarrow $ lettuce) and rough synonymy (bourbon $\rightarrow $ scotch). This performance can be partially attributed to the language model pre-training. We can isolate these cases by filtering for uncombined ingredients when there is no matching ingredient token in the step. Out of 552 such cases in the test set, the model predicts 375 correctly, giving a recall of $67.9$. This is lower than the overall UR; if pre-training behaves as advertised, we expect little degradation in this case, but instead we see performance significantly below the average on uncombined ingredients. <<</Hypernymy and Synonymy>>> <<<Impact of external data>>> One question we can ask of the model's capabilities is to what extent they arise from domain knowledge in the large pre-trained data.
We train transformer models from scratch and additionally investigate using the large corpus of unlabeled recipes for our LM pre-training. As can be seen in Table TABREF35, the incorporation of external data leads to major improvements in the overall performance. This gain is largely due to the increase in combined recall. One possible reason could be that external data leads to better understanding of verb semantics and, in turn, of the specific ingredients forming part of the intermediate compositions. Figure FIGREF37 shows that verbs are a critical clue the model relies on to make predictions. Performing LM fine-tuning on top of GPT also gives gains. <<</Impact of external data>>> <<</Ingredient Detection>>> <<<State Change Detection>>> For ProPara, Table TABREF28 shows that the model does not significantly outperform the SOTA models in state change detection (Cat-1). However, for those correctly detected events, the transformer model outperforms the previous models for detecting the exact step of state change (Cat-2), primarily based on verb semantics. We do a finer-grained study in Table TABREF36 by breaking down the performance for the three state changes: creation (C), movement (M), and destruction (D), separately. Across the three state changes, the model suffers a loss of performance in the movement cases. This is owing to the fact that the movement cases require deeper compositional and implicit event tracking. Also, a majority of errors leading to false negatives are due to the formation of new sub-entities which are then mentioned with other names. For example, when talking about weak acid in “the water becomes a weak acid. the water dissolves limestone”, the weak acid is also considered to move to the limestone. <<</State Change Detection>>> <<</Challenging Task Phenomena>>> <<<Analysis>>> The model's performance on these challenging task cases suggests that even though it outperforms baselines, it may not be capturing deep reasoning about entities. To understand what the model actually does, we perform an analysis of the model's behavior with respect to the input to understand what cues it is picking up on. <<<Gradient based Analysis>>> One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues for identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics. In an ideal scenario, we would want the model to track constituent entities by translating the “focus” to track their newly formed compositions with other entities, often aliased by other names like mixture, blend, paste, etc. However, the low performance on such cases shown in Section SECREF5 gives further evidence that the model is not doing this. <<</Gradient based Analysis>>> <<<Input Ablations>>> We can study which inputs are important more directly by explicitly removing certain words from the input process paragraph and evaluating the performance of the model on the resulting input under the current setup. We mainly ran experiments to examine the importance of (i) verbs and (ii) other ingredients. Table TABREF40 presents these ablation studies.
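A minimal sketch of this kind of input ablation is shown below; spaCy is used here for part-of-speech tags as an illustrative tooling choice, not necessarily the one used in the paper, and the example step is invented.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def ablate(step: str, drop_verbs: bool = False, drop_mentions: frozenset = frozenset()) -> str:
    """Remove verbs and/or mentions of other ingredients from a recipe step before re-evaluation."""
    kept = []
    for tok in nlp(step):
        if drop_verbs and tok.pos_ == "VERB":
            continue                      # ablate action words
        if tok.lower_ in drop_mentions:
            continue                      # ablate mentions of other ingredients
        kept.append(tok.text)
    return " ".join(kept)

print(ablate("Add the butter and fold in the sugar.", drop_verbs=True))
print(ablate("Add the butter and fold in the sugar.", drop_mentions=frozenset({"sugar"})))
```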
We only observe a minor performance drop from $84.59$ to $82.71$ (accuracy) when other ingredients are removed entirely. Removing verbs dropped the performance to $79.08$ and further omitting both leads to $77.79$. This shows the model’s dependence on verb semantics over tracking the other ingredients. <<</Input Ablations>>> <<</Analysis>>> <<<Conclusion>>> In this paper, we examined the capabilities of transformer networks for capturing entity state semantics. First, we show that the conventional framework of using the transformer networks is not rich enough to capture entity semantics in these cases. We then propose entity-centric ways to formulate richer transformer encoding of the process paragraph, guiding the self-attention in a target entity oriented way. This approach leads to significant performance improvements, but examining model performance more deeply, we conclude that these models still do not model the intermediate compositional entities and perform well by largely relying on surface entity mentions and verb semantics. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Analysis" ], "type": "disordered_section" }
1909.02635
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Effective Use of Transformer Networks for Entity Tracking <<<Abstract>>> Tracking entities in procedural language requires understanding the transformations arising from actions on entities as well as those entities' interactions. While self-attention-based pre-trained language encoders like GPT and BERT have been successfully applied across a range of natural language understanding tasks, their ability to handle the nuances of procedural texts is still untested. In this paper, we explore the use of pre-trained transformer networks for entity tracking tasks in procedural text. First, we test standard lightweight approaches for prediction with pre-trained transformers, and find that these approaches underperform even simple baselines. We show that much stronger results can be attained by restructuring the input to guide the transformer model to focus on a particular entity. Second, we assess the degree to which transformer networks capture the process dynamics, investigating such factors as merged entities and oblique entity references. On two different tasks, ingredient detection in recipes and QA over scientific processes, we achieve state-of-the-art results, but our models still largely attend to shallow context clues and do not form complex representations of intermediate entity or process state. <<</Abstract>>> <<<Introduction>>> Transformer based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. However, these models are still better at capturing syntax than they are at more entity-focused aspects like coreference BIBREF6, BIBREF7; moreover, existing state-of-the-art architectures for such tasks often perform well looking at only local entity mentions BIBREF8, BIBREF9, BIBREF10 rather than forming truly global entity representations BIBREF11, BIBREF12. Thus, performance on these tasks does not form sufficient evidence that these representations strongly capture entity semantics. Better understanding the models' capabilities requires testing them in domains involving complex entity interactions over longer texts. One such domain is that of procedural language, which is strongly focused on tracking the entities involved and their interactions BIBREF13, BIBREF14, BIBREF15. This paper investigates the question of how transformer-based models form entity representations and what these representations capture. We expect that after fine-tuning on a target task, a transformer's output representations should somehow capture relevant entity properties, in the sense that these properties can be extracted by shallow classification either from entity tokens or from marker tokens. However, we observe that such “post-conditioning” approaches don't perform significantly better than rule-based baselines on the tasks we study. 
We address this by proposing entity-centric ways of structuring input to the transformer networks, using the entity to guide the intrinsic self-attention and form entity-centric representations for all the tokens. We find that our proposed methods lead to a significant improvement in performance over baselines. Although our entity-specific application of transformers is more effective at the entity tracking tasks we study, we perform additional analysis and find that these tasks still do not encourage transformers to form truly deep entity representations. Our performance gain is largely from better understanding of verb semantics in terms of associating process actions with the entity the paragraph is conditioned on. The model also does not specialize in “tracking” composed entities per se, again using surface clues like verbs to identify the components involved in a new composition. We evaluate our models on two datasets specifically designed to invoke procedural understanding: (i) Recipes BIBREF16, and (ii) ProPara BIBREF14. For the Recipes dataset, we classify whether an ingredient was affected in a certain step, which requires understanding when ingredients are combined or the focus of the recipe shifts away from them. The ProPara dataset involves answering a more complex set of questions about physical state changes of components in scientific processes. To handle this more structured setting, our transformer produces potentials consumed by a conditional random field which predicts entity states over time. Using a unidirectional GPT-based architecture, we achieve state-of-the-art results on both datasets; nevertheless, analysis shows that our approach still falls short of capturing the full space of entity interactions. <<</Introduction>>> <<<Background: Process Understanding>>> Procedural text is a domain of text concerned with understanding some kind of process, such as a phenomenon arising in nature or a set of instructions to perform a task. Entity tracking is a core component of understanding such texts. BIBREF14 introduced the ProPara dataset to probe understanding of scientific processes. The goal is to track the sequence of physical state changes (creation, destruction, and movement) entities undergo over long sequences of process steps. Past work involves both modeling entities across time BIBREF17 and capturing structural constraints inherent in the processes BIBREF18, BIBREF19. Figure FIGREF2b shows an example of the dataset posed as a structured prediction task, as in BIBREF19. For such a domain, it is crucial to capture implicit event occurrences beyond explicit entity mentions. For example, in “fuel goes into the generator. The generator converts mechanical energy into electrical energy”, the fuel is implicitly destroyed in the process. BIBREF15 introduced the task of detecting state changes in recipes in the Recipes dataset and proposed an entity-centric memory network neural architecture for simulating action dynamics. Figure FIGREF2a shows an example from the Recipes dataset with a grid showing ingredient presence. We focus specifically on this core problem of ingredient detection; while only one of the sub-tasks associated with their dataset, it reflects some complex semantics involving understanding the current state of the recipe. Tracking of ingredients in the cooking domain is challenging owing to the compositional nature of recipes, whereby ingredients mix together and are aliased as intermediate compositions.
We pose both of these procedural understanding tasks as classification problems, predicting the state of the entity at each timestep from a set of pre-defined classes. In Figure FIGREF2, these classes correspond to either the presence (1) or absence (0) or the sequence of state changes create (C), move (M), destroy (D), exists (E), and none (O). State-of-the-art approaches on these tasks are inherently entity-centric. Separately, it has been shown that entity-centric language modeling in a continuous framework can lead to better performance for LM related tasks BIBREF20, BIBREF21. Moreover, external data has shown to be useful for modeling process understanding tasks in prior work BIBREF18, BIBREF15, suggesting that pre-trained models may be effective. With such tasks in place, a strong model will ideally learn to form robust entity-centric representation at each time step instead of solely relying on extracting information from the local entity mentions. This expectation is primarily due to the evolving nature of the process domain where entities undergo complex interactions, form intermediate compositions, and are often accompanied by implicit state changes. We now investigate to what extent this is true in a standard application of transformer models to this problem. <<</Background: Process Understanding>>> <<<Studying Basic Transformer Representations for Entity Tracking>>> <<<Post-conditioning Models>>> The most natural way to use the pre-trained transformer architectures for the entity tracking tasks is to simply encode the text sequence and then attempt to “read off” entity states from the contextual transformer representation. We call this approach post-conditioning: the transformer runs with no knowledge of which entity or entities we are going to make predictions on, but we only condition on the target entity after the transformer stage. Figure FIGREF4 depicts this model. Formally, for a labelled pair $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$, we encode the tokenized sequence of steps up to the current timestep (the sentences are separated by using a special [SEP] token), independent of the entity. We denote by $X=[h_{1}, h_{2},\dots , h_{m}]$ the contextualized hidden representation of the $m$ input tokens from the last layer, and by $\textstyle g_{e}\!=\!\!\!\sum \limits _{\text{ent toks}}\!emb(e_i)$ the entity representation for post conditioning. We now use one of the following two ways to make an entity-specific prediction: <<<Task Specific Input Token>>> We append a $\texttt {[CLS]}$ token to the input sequence and use the output representation of the $\texttt {[CLS]}$ token denoted by $h_{ \texttt {[CLS]}}$ concatenated with the learned BPE embeddings of the entity as the representation $c_{e,t}$ for our entity tracking system. We then use a linear layer over it to get class probabilities: The aim of the [CLS] token is to encode information related to general entity related semantics participating in the recipe (sentence priors). We then use a single linear layer to learn sentence priors and entity priors independently, without strong interaction. We call this model GPT$_{indep}$. <<</Task Specific Input Token>>> <<<Entity Based Attention>>> Second, we explore a more fine-grained way of using the GPT model outputs. Specifically, we use bilinear attention between $g_e$ and the transformer output for the process tokens $X$ to get a contextual representation $c_{e,t}$ for a given entity. 
Finally, using a feed-forward network followed by softmax layer gives us the class probabilities: The bilinear attention over the contextual representations of the process tokens allows the model to fetch token content relevant to that particular entity. We call this model GPT$_{attn}$. <<</Entity Based Attention>>> <<</Post-conditioning Models>>> <<<Results and Observations>>> We evaluate the discussed post-conditioning models on the ingredient detection task of the Recipes dataset. To benchmark the performance, we compare to three rule-based baselines. This includes (i) Majority Class, (ii) Exact Match of an ingredient $e$ in recipe step $s_t$, and (iii) First Occurrence, where we predict the ingredient to be present in all steps following the first exact match. These latter two baselines capture natural modes of reasoning about the dataset: an ingredient is used when it is directly mentioned, or it is used in every step after it is mentioned, reflecting the assumption that a recipe is about incrementally adding ingredients to an ever-growing mixture. We also construct a LSTM baseline to evaluate the performance of ELMo embeddings (ELMo$_{token}$ and ELMo$_{sent}$) BIBREF22 compared to GPT. Table TABREF10 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. Using the ground truth about ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recall distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined). Note that Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively. As observed from the results, the post-conditioning frameworks underperform compared to the First Occ baseline. While the CR values appear to be high, which would suggest that the model is capturing the addition of ingredients to the mixture, we note that this value is also lower than the corresponding value for First Occ. This result suggests that the model may be approximating the behavior of this baseline, but doing so poorly. The unconditional self-attention mechanism of the transformers does not seem sufficient to capture the entity details at each time step beyond simple presence or absence. Moreover, we see that GPT$_{indep}$ performs somewhat comparably to GPT$_{attn}$, suggesting that consuming the transformer's output with simple attention is not able to really extract the right entity representation. For ProPara, we observe similar performance trends where the post-conditioning model performed below par with the state-of-the-art architectures. <<</Results and Observations>>> <<</Studying Basic Transformer Representations for Entity Tracking>>> <<<Entity-Conditioned Models>>> The post-conditioning framework assumes that the transformer network can form strong representations containing entity information accessible in a shallow way based on the target entity. We now propose a model architecture which more strongly conditions on the entity as a part of the intrinsic self-attention mechanism of the transformers. Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. 
After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for. <<<Sentence Level vs. Document Level>>> As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \times m$ input sequences for fine tuning our classification task. <<</Sentence Level vs. Document Level>>> <<<Training Details>>> In most experiments, we initialize the network with the weights of the standard pre-trained GPT model, then subsequently do either domain specific LM fine-tuning and supervised task specific fine-tuning. <<<Domain Specific LM fine-tuning>>> For some procedural domains, we have access to additional unlabeled data. To adapt the LM to capture domain intricacies, we fine-tune the transformer network on this unlabeled corpus. <<</Domain Specific LM fine-tuning>>> <<<Supervised Task Fine-Tuning>>> After the domain specific LM fine-tuning, we fine-tune our network parameters for the end task of entity tracking. For fine-tuning for the task, we have a labelled dataset which we denote by $\mathcal {C}$, the set of labelled pairs $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$ for a given process. The input is converted according to our chosen entity conditioning procedure, then fed through the pre-trained network. In addition, we observed that adding the language model loss during task specific fine-tuning leads to better performance as well, possibly because it adapts the LM to our task-specific input formulation. Thus, <<</Supervised Task Fine-Tuning>>> <<</Training Details>>> <<<Experiments: Ingredient Detection>>> We first evaluate the proposed entity conditioned self-attention model on the Recipes dataset to compare the performance with the post-conditioning variants. <<<Systems to Compare>>> We use the pre-trained GPT architecture in the proposed entity conditioned framework with all its variants. BERT mainly differs in that it is bidirectional, though we also use the pre-trained [CLS] and [SEP] tokens instead of introducing new tokens in the input vocabulary and training them from scratch during fine-tuning. Owing to the lengths of the processes, all our experiments are performed on BERT$_{BASE}$. <<<Neural Process Networks>>> The most significant prior work on this dataset is the work of BIBREF15. However, their data condition differs significantly from ours: they train on a large noisy training set and do not use any of the high-quality labeled data, instead treating it as dev and test data. Consequently, their model achieves low performance, roughly 56 $F_1 $ while ours achieves $82.5$ $F_1$ (though these are not the exact same test set). 
Moreover, theirs underperforms the first occurrence baseline, which calls into question the value of that training data. Therefore, we do not compare to this model directly. We use the small set of human-annotated data for our probing task. Our train/dev/test split consists of $600/100/175$ recipes, respectively. <<</Neural Process Networks>>> <<</Systems to Compare>>> <<<Results>>> Table TABREF20 compares the overall performances of our proposed models. Our best ET$_{GPT}$ model achieves an $F_1$ score of $82.50$. Comparing to the baselines (Majority through First) and post-conditioned models, we see that the early entity conditioning is critical to achieving high performance. Although the First model still achieves the highest CR, due to operating in a high-recall regime, we see that the ET$_{GPT}$ models all significantly outperform the post-conditioning models on this metric, indicating better modeling of these compositions. Both recall and precision are substantially increased compared to these baseline models. Interestingly, the ELMo-based model under-performs the first-occurrence baseline, indicating that the LSTM model is not learning much in terms of recognizing complex entity semantics grounded in long term contexts. Comparing the four variants of structuring input in the proposed architectures as discussed in Section SECREF4, we observe that the document-level, entity-first model is the best performing variant. Given the left-to-right unidirectional transformer architecture, this model notably forms target-specific representations for all process tokens, compared to using the transformer self-attention only to extract entity specific information at the end of the process. <<</Results>>> <<<Ablations>>> We perform ablations to evaluate the model's dependency on the context and on the target ingredient. Table TABREF23 shows the results for these ablations. <<<Ingredient Specificity>>> In the “no ingredient” baseline (w/o ing.), the model is not provided with the specific ingredient information. Table TABREF23 shows that while not being a strong baseline, the model achieves decent overall accuracy, with the drop in UR being higher compared to CR. This indicates that there are some generic indicators (mixture) that it can pick up to try to guess at overall ingredient presence or absence. <<</Ingredient Specificity>>> <<<Context Importance>>> We compare with a “no context” model (w/o context) which ignores the previous context and only uses the current recipe step in determining the ingredient's presence. Table TABREF23 shows that such a model is able to perform surprisingly well, nearly as well as the first occurrence baseline. This is because the model can often recognize words like verbs (for example, add) or nouns (for example, mixture) that indicate many ingredients are being used, and can do well without really tracking any specific entity as desired for the task. <<</Context Importance>>> <<</Ablations>>> <<</Experiments: Ingredient Detection>>> <<<State Change Detection (ProPara)>>> Next, we focus on a structured task to evaluate the performance of the entity tracking architecture in capturing the structural information in the continuous self-attention framework. For this, we use the ProPara dataset and evaluate our proposed model on the comprehension task. Figure FIGREF2b shows an example of a short instance from the ProPara dataset. 
The task of identifying state change follows a structure satisfying the existence cycle; for example, an entity cannot be created after destruction. Our prior work BIBREF19 proposed a structured model for the task that achieved state-of-the-art performance. We adapt our proposed entity tracking transformer models to this structured prediction framework, capturing creation, movement, existence (distinct from movement or creation), destruction, and non-existence. We use the standard evaluation scheme of the ProPara dataset, which is framed as answering the following categories of questions: (Cat-1) Is e created (destroyed, moved) in the process?, (Cat-2) When (step #) is e created (destroyed, moved)?, (Cat-3) Where is e created/destroyed/moved from/to? <<</State Change Detection (ProPara)>>> <<</Entity-Conditioned Models>>> <<<Challenging Task Phenomena>>> Based on the results in the previous section, our models clearly achieve strong performance compared to past approaches. We now revisit the challenging cases discussed in Section SECREF2 to see if our entity tracking approaches are modeling sophisticated entity phenomena as advertised. For both datasets and associated tasks, we isolate the specific set of challenging cases grounded in tracking (i) intermediate compositions formed through the combination of entities, leading to no explicit mention, and (ii) implicit events which change entities' states without explicit mention of the effects. <<<Ingredient Detection>>> For Recipes, we mainly want to investigate cases of ingredients getting re-engaged in the recipe not in a raw form but combined with other ingredients, and hence without explicit mention. For example, the eggs in step 4 of Figure FIGREF2a exemplify this case. The performance in such cases is indicative of how strongly the model can track compositional entities. We also examine the performance for cases where the ingredient is referred to by some other name. <<<Intermediate Compositions>>> Formally, we pick the set of examples where the ground truth is a transition from $0 \rightarrow 1$ (not present to present) and the 1 is a “combined” case. Table TABREF31 shows the model's performance on this subset of cases, of which there are 1049 in the test set. The model achieves an accuracy of 51.1% on these bigrams, which is relatively low given the overall model performance. In the error cases, the model defaults to the $1\rightarrow 1$ pattern indicative of the First Occ baseline. <<</Intermediate Compositions>>> <<<Hypernymy and Synonymy>>> We observe the model is able to capture ingredients based on their hypernyms (nuts $\rightarrow $ pecans, salad $\rightarrow $ lettuce) and rough synonymy (bourbon $\rightarrow $ scotch). This performance can be partially attributed to the language model pre-training. We can isolate these cases by filtering for uncombined ingredients when there is no matching ingredient token in the step. Out of 552 such cases in the test set, the model predicts 375 correctly, giving a recall of $67.9$. This is lower than overall UR; if pre-training behaves as advertised, we expect little degradation in this case, but instead we see performance significantly below the average on uncombined ingredients. <<</Hypernymy and Synonymy>>> <<<Impact of external data>>> One question we can ask of the model's capabilities is to what extent they arise from domain knowledge in the large pre-trained data. 
We train transformer models from scratch and additionally investigate using the large corpus of unlabeled recipes for our LM pre-training. As can be seen in Table TABREF35, the incorporation of external data leads to major improvements in the overall performance. This gain is largely due to the increase in combined recall. One possible reason could be that external data leads to better understanding of verb semantics and, in turn, of the specific ingredients forming part of the intermediate compositions. Figure FIGREF37 shows that verbs are a critical clue the model relies on to make predictions. Performing LM fine-tuning on top of GPT also gives gains. <<</Impact of external data>>> <<</Ingredient Detection>>> <<<State Change Detection>>> For ProPara, Table TABREF28 shows that the model does not significantly outperform the SOTA models in state change detection (Cat-1). However, for those correctly detected events, the transformer model outperforms the previous models for detecting the exact step of state change (Cat-2), primarily based on verb semantics. We do a finer-grained study in Table TABREF36 by breaking down the performance for the three state changes: creation (C), movement (M), and destruction (D), separately. Across the three state changes, the model suffers a loss of performance in the movement cases. This is owing to the fact that the movement cases require deeper compositional and implicit event tracking. Also, a majority of errors leading to false negatives are due to the formation of new sub-entities which are then mentioned with other names. For example, when talking about weak acid in “the water becomes a weak acid. the water dissolves limestone” the weak acid is also considered to move to the limestone. <<</State Change Detection>>> <<</Challenging Task Phenomena>>> <<<Analysis>>> The model's performance on these challenging task cases suggests that even though it outperforms baselines, it may not be capturing deep reasoning about entities. To understand what the model actually does, we analyze the model's behavior with respect to the input to see what cues it is picking up on. <<<Gradient based Analysis>>> One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues for identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics. In an ideal scenario, we would want the model to track constituent entities by shifting the “focus” to track their newly formed compositions with other entities, often aliased by other names like mixture, blend, paste etc. However, the low performance on such cases shown in Section SECREF5 gives further evidence that the model is not doing this. <<</Gradient based Analysis>>> <<<Input Ablations>>> We can study which inputs are important more directly by explicitly removing certain words from the input process paragraph and evaluating the performance of the current model on the resulting input. We mainly ran experiments to examine the importance of: (i) verbs, and (ii) other ingredients. Table TABREF40 presents these ablation studies. 
We only observe a minor performance drop from $84.59$ to $82.71$ (accuracy) when other ingredients are removed entirely. Removing verbs drops the performance to $79.08$, and further omitting both leads to $77.79$. This shows the model’s dependence on verb semantics over tracking the other ingredients. <<</Input Ablations>>> <<</Analysis>>> <<<Conclusion>>> In this paper, we examined the capabilities of transformer networks for capturing entity state semantics. First, we show that the conventional framework of using transformer networks is not rich enough to capture entity semantics in these cases. We then propose entity-centric ways to formulate a richer transformer encoding of the process paragraph, guiding the self-attention in a target-entity-oriented way. This approach leads to significant performance improvements, but examining model performance more deeply, we conclude that these models still do not model the intermediate compositional entities and perform well by largely relying on surface entity mentions and verb semantics. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Introduction" ], "type": "disordered_section" }
2004.00139
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> A Swiss German Dictionary: Variation in Speech and Writing <<<Abstract>>> We introduce a dictionary containing forms of common words in various Swiss German dialects normalized into High German. As Swiss German is, for now, a predominantly spoken language, there is significant variation in the written forms, even between speakers of the same dialect. To alleviate the uncertainty associated with this diversity, we complement the pairs of Swiss German - High German words with the Swiss German phonetic transcriptions (SAMPA). This dictionary thus becomes the first resource to combine large-scale spontaneous translation with phonetic transcriptions. Moreover, we control for the regional distribution and ensure the equal representation of the major Swiss dialects. The coupling of the phonetic and written Swiss German forms is powerful. We show that they are sufficient to train a Transformer-based phoneme to grapheme model that generates credible novel Swiss German writings. In addition, we show that the inverse mapping - from graphemes to phonemes - can be modeled with a transformer trained with the novel dictionary. This generation of pronunciations for previously unknown words is key in training extensible automated speech recognition (ASR) systems, which are key beneficiaries of this dictionary. <<</Abstract>>> <<<Introduction>>> Swiss German refers to any of the German varieties that are spoken in about two thirds of Switzerland BIBREF0. Besides at least one of those dialectal varieties, Swiss German people also master standard (or 'High') German, which is taught in school as the official language of communication. Swiss German varies strongly. Many differences exist in the dialectal continuum of the German speaking part of Switzerland. Besides pronunciation, it also varies a lot in writing. Standard German used to be the exclusive language for writing in Switzerland. Writing in Swiss German has only come up rather recently (notably in text messaging). Because of this, there are no orthographic conventions for Swiss German varieties. Even people speaking the same dialect can, and often do, write phonetically identical words differently. In this paper, we present a dictionary of written standard German words paired with their pronunciations in Swiss German. Additionally, Swiss German spontaneous writings, i.e. writings as they may be used in text messages by native speakers, are paired with Swiss German pronunciations. The primary motivation for building this dictionary is rendering Swiss German accessible for technologies such as Automatic Speech Recognition (ASR). This is the first publicly described Swiss German dictionary shared for research purposes. Furthermore, this is the first dictionary that combines pronunciations of Swiss German with spontaneous writings. <<</Introduction>>> <<<Related Work>>> This dictionary complements previously developed resources for Swiss German, which share some common information. Spontaneous noisy writing has already been recorded in text corpora BIBREF1, BIBREF2, BIBREF3, some of which are also normalized. 
These resources contain relatively large lexicons of words used in context, but they do not contain any information about pronunciation. The features of speech are represented in other resources, such as BIBREF4, BIBREF5, BIBREF6, which, on the other hand, contain relatively small lexicons (small set of words known to vary across dialects). The ArchiMob corpus does contain a large lexicon of speech and writing (Dieth transcription), but the spoken part is available in audio sources only, without phonetic transcription. This dictionary is the first resource to combine all the relevant information together. A relatively large lexicon has been constructed in which phonetic transcriptions (in the SAMPA alphabet) are mapped to various spontaneous writings controlling for the regional distribution. Some of the representations in this dictionary are produced manually, while others are added using automatic processing. Automatic word-level conversion between various writings in Swiss German has been addressed in several projects, mostly for the purpose of writing normalization BIBREF7, BIBREF2, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF0, BIBREF12. The task of normalization consist of mapping multiple variants of a single lexical item into a single writing usually identical to standard German (an example would be the Swiss German words aarbet and arbäit which both map to standard German arbeit ('work')). Early data sets were processed manually (SMS). This was followed by an implementation of character-level statistical machine translation models BIBREF13, BIBREF14 and, more recently, with neural sequence-to-sequence technology. The solution by lusettietal18 employes soft-attention encoder-decoder recurrent networks enhanced with synchronous multilevel decoding. ruzsicsetal19 develop these models further to integrate linguistic (PoS) features. A slightly different task of translating between standard German and Swiss dialects was first addressed with finite state technology BIBREF15. More recently, honnet-etal17 test convolutional neural networks on several data sets. We continue the work on using neural networks for modeling word-level conversion. Unlike previous work, which dealt with written forms only, we train models for mapping phonetic representations to various possible writings. The proposed solution relies on the latest framework for sequence-to-sequence tasks — transformer networks BIBREF16. <<</Related Work>>> <<<Dictionary Content and access>>> We pair 11'248 standard German written words with their phonetical representations in six different Swiss dialects: Zürich, St. Gallen, Basel, Bern, Visp, and Stans (Figure FIGREF1). The phonetic words were written in a modified version of the Speech Assessment Methods Phonetic Alphabet (SAMPA). The Swiss German phonetic words are also paired with Swiss German writings in the latin alphabet. (From here onwards, a phonetic representation of a Swiss German word will be called a SAMPA and a written Swiss German word will be called a GSW.) This dictionary comes in two versions as we used two differently sized sets of SAMPA characters. Our extended set including 137 phones allows for a detailed and adequate representation of the diverse pronunciation in Switzerland. The smaller set of 59 phones is easier to compute. The phone reduction was mainly done by splitting up combined SAMPA-characters such as diphthongs. 
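As a rough illustration of the reduction, the sketch below rewrites transcriptions over the extended phone set into the smaller one; the mapping table is hypothetical apart from the two splits that can be read off the example that follows.

```python
# Hypothetical split table: combined SAMPA symbols from the 137-phone set are
# rewritten as sequences of simpler symbols from the 59-phone set. Only the two
# entries visible in the following example come from the text; a full table
# would cover all remaining combined characters.
SPLIT_TABLE = {
    "UI": ["U", "I"],   # diphthong split into two monophthongs
    "tt": ["t", "t"],   # geminate written as a doubled consonant
}

def reduce_phones(sampa_tokens):
    """Map a tokenized SAMPA transcription onto the reduced phone set."""
    reduced = []
    for tok in sampa_tokens:
        reduced.extend(SPLIT_TABLE.get(tok, [tok]))  # unknown symbols pass through unchanged
    return reduced
```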
UI s t r $ \lbrace $ tt @ and U I s t r $ \lbrace $ t t @, for example, are both representations of the Stans pronunciation of the standard German word austreten ('step out'). The latter representation belongs to the dictionary based on the smaller phoneset. Table TABREF2 shows an example of five dictionary entries based on the bigger phoneset. For a subset of 9000 of the 11'248 standard German words, we have manually annotated GSWs for Visp (9000) and for Zurich (2 x 9000, done by two different annotators). For a subsubset of 600 of those standard German words we have manually annotated GSWs for the four other dialects of St. Gallen, Basel, Bern, and Stans. The remaining writing variants are generated using automatic methods described below. The dictionary is freely available for research purposes under the creative commons share-alike non-commercial licence via this website http://tiny.uzh.ch/11X. <<</Dictionary Content and access>>> <<<Construction of the dictionary>>> In the following we present the steps of construction of our dictionary, also detailing how we chose the six dialects to represent Swiss German and how, starting with a list of standard German words, we retrieved the mapping SAMPAs and GSWs. <<<Discretising continuous variation>>> To be able to represent Swiss German by only a few dialects which differ considerably, it is necessary to discretize linguistic varieties, because, as mentioned earlier, regional language variation in Switzerland is continuous. For this identification of different varieties we used a dialectometric analysis BIBREF17. This analysis is based on lexical, phonological, and morphological data of the German speaking areas of Switzerland BIBREF4. As we worked with word-lists and not sentences, we discounted syntactical influences on area boundaries that are also described in that analysis. We represent six differentiated linguistic varieties. We considered working with ten linguistic varieties because this number of areas was the 'best-cut' analysis in the dialectometric analysis BIBREF17. Yet, due to time constraints and considerable overlap between some of the linguistic varieties, we reduced this number to six. We also made some adjustments to the chosen varieties in order to correspond better to the perception of speakers and in favor of more densely populated areas. One way to represent the six individualized linguistic varieties would have been to annotate the dialectal centers, i.e. those places that have the average values of dialectal properties within the area where the variety is spoken. However, we chose to represent the linguistic varieties by the most convenient urban places. Those were the dialects of the cities of Zurich, St. Gallen, Basel, Bern, Visp, and Stans. <<</Discretising continuous variation>>> <<<Manual annotation>>> <<<SAMPAs>>> For each standard German word in our dictionary we manually annotated its phonetic representation in the six chosen dialects. The information about the pronunciation of Swiss German words is partially available also from other sources but not fully accessible BIBREF4 BIBREF7. To help us with pronunciation, our annotators first used their knowledge as native speakers (for Zurich and Visp). Secondly, they consulted dialect specific grammars BIBREF18 BIBREF19 BIBREF20 BIBREF21 BIBREF22 as well as dialect specific lexica BIBREF23 BIBREF24 BIBREF25. 
They also considered existing Swiss German dictionaries BIBREF7 BIBREF4, listened to recordings BIBREF0 and conferred with friends and acquaintances originating from the respective locations. <<</SAMPAs>>> <<<GSWs>>> 9000 GSWs for Visp German and 2 x 9000 GSWs for Zurich German were annotated by native speakers of the respective dialect. Our annotators created the GSWs while looking at standard German words and without looking at the corresponding SAMPAs for Visp and Zurich. Through this independence from SAMPAs we are able to avoid biases concerning the phonetics as well as the meaning of the word in generating GSWs. At a later stage of our work, we added each 600 GSWs for the four dialects of St. Gallen, Basel, Bern, and Stans in order to improve our phoneme-to-grapheme(p2g) model (see next section). For the manual annotation of these dialects we had no native speakers. Therefore, when writing the GSWs, our annotators relied on the corresponding SAMPAs of these dialects, which they had made an effort to create before. <<</GSWs>>> <<</Manual annotation>>> <<<Automatic annotation>>> In order to account for the mentioned variety of everyday Swiss German writing, we aimed for more than one GSW per SAMPA. The heterogeneous writing style makes the SAMPA$\,\rightarrow \,$GSW a one to many relation instead of the regular one to one that speakers of standard languages are accustomed to. To save time in generating the many GSWs, we opted for an automatic process. We first tried to automatize the generation of GSWs with a rule-based program. Via SAMPAs together with phoneme-to-grapheme mappings we tried to obtain all possible GSWs. Yet, this yielded mostly impossible writings and also not all the writings we had already done manually. We then set up a phoneme-to-grapheme(p2g) model to generate the most likely spellings. <<<Transformer-based Phoneme to Grapheme (p2g)>>> The process of generating written forms from a given SAMPA can be viewed as a sequence-to-sequence problem, where the input is a sequence of phonemes and the output is a sequence of graphemes. We decided to use a Transformer-based model for the phoneme-to-grapheme (p2g) task. The reason for this is twofold. First, the Transformer has shown great success in seq2seq tasks and it has outperformed LSTM and CNN-based models. Second, it is computationally more efficient than LSTM and CNN networks. The Transformer consists of an encoder and a decoder part. The encoder generates a contextual representation for each input SAMPA that is then fed into the decoder together with the previously decoded grapheme. They both have N identical layers. In the encoder, each layer has a multi-head self-attention layer and a position-wise fully-connected feed-forward layer. While in the decoder, in addition to these two layers, we also have an additional multi-headed attention layer that uses the output of the encoder BIBREF16. We are using a Pytorch implementation of the Transformer. As a result of the small size of the dataset, we are using a smaller model with only 2 layers and 2 heads. The dimension of the key (d_k) and value (d_v) is 32, the dimension of the model (d_model) and the word vectors (d_word_vec) is 50 and the hidden inner dimension (d_inner_hid) is 400. The model is trained for 55 epochs with a batch size of 64 and a dropout of 0.2. For decoding the output of the model, we are using beam search with beam size 10. We experimented with different beam sizes, but we saw that it does not have significant influence on the result. 
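For orientation, a minimal PyTorch sketch with the hyperparameters listed above follows; it is not the implementation used here. In particular, the built-in nn.Transformer ties the per-head key and value size to d_model divided by the number of heads, so the reported d_k = d_v = 32 cannot be set independently in this sketch, and positional encodings are omitted for brevity.

```python
import torch.nn as nn

class P2GTransformerSketch(nn.Module):
    """Hypothetical phoneme-to-grapheme seq2seq model: 2 layers, 2 heads,
    d_model = 50, feed-forward inner size 400, dropout 0.2."""
    def __init__(self, n_phonemes: int, n_graphemes: int, pad_idx: int = 0):
        super().__init__()
        d_model = 50
        self.src_emb = nn.Embedding(n_phonemes, d_model, padding_idx=pad_idx)   # SAMPA symbols
        self.tgt_emb = nn.Embedding(n_graphemes, d_model, padding_idx=pad_idx)  # GSW characters
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=2,
            num_encoder_layers=2, num_decoder_layers=2,
            dim_feedforward=400, dropout=0.2, batch_first=True,
        )
        self.out = nn.Linear(d_model, n_graphemes)

    def forward(self, src, tgt):
        # src: (batch, src_len) phoneme ids; tgt: (batch, tgt_len) grapheme ids
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        h = self.transformer(self.src_emb(src), self.tgt_emb(tgt), tgt_mask=tgt_mask)
        return self.out(h)  # logits over the grapheme vocabulary

# Training would run for 55 epochs with batch size 64; decoding would use beam
# search with beam size 10 (not shown).
```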
The training set is made of 24'000 phoneme-to-grapheme pairs, which are the result of transcribing 8'000 High German words into two Zurich forms and one Visp form. Those transcriptions were made independently by three native speakers. Due to the scarcity of data, we decided not to distinguish between dialects. Hence, a single model receives a sequence of SAMPA symbols and learns to generate a matching sequence of characters. <<</Transformer-based Phoneme to Grapheme (p2g)>>> <<<Test set and evaluation>>> Our team of Swiss German annotators evaluated a test-set of 1000 words. We aimed to exclude only very far-off forms (tagged '0'), such that they are very likely to be seen as false by Swiss German speakers. The accepted writings (tagged '1') might include some that seem off to the Swiss German reader. In order to consistently rate the output, the criteria shown in Table TABREF4 were followed. A GSW was tagged '0' if there was at least one letter added, missing, or changed without comprehensible phonetic reason. GSWs were also tagged '0' if there were at least two mistakes that our annotators saw as minor. 'Minor mistakes' are substitutions of related sounds or spellings, added or omitted geminates, and changes in vowel length. For each of the 1000 words in the test-set, five GSW-predictions in all six dialects were given to our annotators. For Visp and Zurich, they tagged 1000x5 GSW predictions each with 1 or 0. For St. Gallen, Basel, Bern, and Stans, they evaluated 200x5. In Table TABREF13 we show the result from this evaluation. We count the number of correct GSWs (labeled as '1') among the top 5 candidates generated by the p2g model, where the first candidate is the most relevant, then the second one and so on. The evaluation was done at a stage where our model was trained only on GSW for Zurich and Visp (see sec. SECREF8). The number of correct predictions is lower for the dialects of St. Gallen, Basel, Bern, and Stans, mainly because there were some special SAMPA characters we used for those dialects and the model did not have the corresponding Latin character strings. After the evaluation, we added 600 GSWs each for the four dialects of St. Gallen, Basel, Bern, and Stans to improve the model. <<</Test set and evaluation>>> <<<Grapheme to Phoneme (g2p) and its benefits for ASR>>> Automatic speech recognition (ASR) systems are the main use cases for our dictionary. ASR systems convert spoken language into text. Today, they are widely used in different domains from customer and help centers to voice-controlled assistants and devices. The main resources needed for an ASR system are audio, transcriptions and a phonetic dictionary. The quality of the ASR system is highly dependent on the quality of the dictionary. With our resource we provide such a phonetic dictionary. To increase the benefits of our data for ASR systems, we also trained a grapheme-to-phoneme (g2p) model: Out-of-vocabulary words can be a problem for an ASR system. For those out-of-vocabulary words we need a model that can generate pronunciations from a written form, in real time. This is why we train a grapheme-to-phoneme (g2p) model that generates a sequence of phonemes for a given word. We train the g2p model using our dictionary and compare its performance with a widely used joint-sequence g2p model, Sequitur BIBREF26. For the g2p model we are using the same architecture as for the p2g model. The only difference is input and output vocabulary. 
Sequitur and our model use the dictionary with the same train (19'898 samples), test (2'412 samples) and validation (2'212 samples) split. Additionally, we also test their performance only on the items from the Zurich and Visp dialects, because most of the samples are from these two dialects. In Table TABREF15 we show the result of the comparison of the two models. We compute the edit distance between the predicted and the true pronunciation and report the number of exact matches. In the first column we have the result using the whole test set with all the dialects, and in the 2nd and 3rd columns we show the number of exact matches only on the samples from the test set that are from the Zurich and Visp dialects. Here we can clearly see that our model performs better than the Sequitur model. The reason why we have fewer matches in the Visp dialect compared to Zurich is that most of our data is from the Zurich dialect. <<</Grapheme to Phoneme (g2p) and its benefits for ASR>>> <<</Automatic annotation>>> <<</Construction of the dictionary>>> <<<Discussion>>> One of our objectives was to map phonetic words to their writings. There are some mismatches between SAMPA and GSWs in our dictionary, especially when the GSWs were done manually and independently from the SAMPA. Those mismatches occur where there is no straightforward correspondence between a standard German and a Swiss German word. Two kinds of such missing correspondence can be distinguished. First, there are ambiguous standard German words. This is necessarily so, as our dictionary is based on a list of standard German words without sentential or any other context. An example of a (morphologically) ambiguous word is standard German liebe. As we did not differentiate upper- and lower-case, it can mean both (a) 'I love' and (b) 'the love'. As evident from table 1, liebe (a) and liebi (b) were mixed in our dictionary. The same is the case for standard German frage, which means either (a) 'I ask' or (b) 'the question'. Swiss German fröge, froge, fregu (a) and fraag, froog (b) were mixed. (For both examples, see table 1.) The second case of missing straightforward correspondence is distance between standard German and Swiss German. For one, lexical preferences in Swiss German differ from those in standard German. To express that food is 'tasty' in standard German, the word lecker is used. This is also possible in Swiss German, yet the word fein is much more common. Another example is that the standard German word rasch ('swiftly') is uncommon in Swiss German – synonyms of the word are preferred. Both of these show in the variety of options our annotators chose for those words (see table 1). Also, the same standard German word may have several dialectal versions in Swiss German. For example, there is a short and a long version for the standard German word grossvater, namely grospi and grossvatter. A second aim was to represent the way Swiss German speaking people write spontaneously. However, as our annotators wrote the spontaneous GSWs mostly while looking at standard German words, our GSWs might be biased towards standard German orthography. Yet, there is potentially also a standard German influence in the way Swiss German is actually written. We partly revised our dictionary in order to adapt to everyday writing: We introduced explicit boundary marking into our SAMPAs. We inserted an _ in the SAMPA where there would usually be a space in writing. 
Examples where people would conventionally add a space are the forms corresponding to standard German preterite forms, for example 'ging'. The corresponding Swiss German past participles – here isch gange – would (most often) be written separately. So entries like b i n k a N @ in table 1 were changed to b i n _ k a N @. <<</Discussion>>> <<<Conclusion>>> In this work we introduced the first Swiss German dictionary. Through its dual nature - both spontaneous written forms in multiple dialects and accompanying phonetic representations - we believe it will become a valuable resource for multiple tasks, including automated speech recognition (ASR). This resource was created using a combination of manual and automated work, in a collaboration between linguists and data scientists that leverages the best of two worlds - domain knowledge and data-driven focus on likely character combinations. Through the combination of complementary skills we overcame the difficulty posed by the important variations in written Swiss German and generated a resource that adds value to downstream tasks. We show that the SAMPA to written Swiss German mapping is useful in speech recognition and can replace the previous state of the art. Moreover, the written form to SAMPA direction is promising and has applications in areas like text-to-speech. We make the dictionary freely available for researchers to expand and use. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Related Work" ], "type": "disordered_section" }
1912.07025
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts <<<Abstract>>> Historical palm-leaf manuscript and early paper documents from Indian subcontinent form an important part of the world's literary and cultural heritage. Despite their importance, large-scale annotated Indic manuscript image datasets do not exist. To address this deficiency, we introduce Indiscapes, the first ever dataset with multi-regional layout annotations for historical Indic manuscripts. To address the challenge of large diversity in scripts and presence of dense, irregular layout elements (e.g. text lines, pictures, multiple documents per image), we adapt a Fully Convolutional Deep Neural Network architecture for fully automatic, instance-level spatial layout parsing of manuscript images. We demonstrate the effectiveness of proposed architecture on images from the Indiscapes dataset. For annotation flexibility and keeping the non-technical nature of domain experts in mind, we also contribute a custom, web-based GUI annotation tool and a dashboard-style analytics portal. Overall, our contributions set the stage for enabling downstream applications such as OCR and word-spotting in historical Indic manuscripts at scale. <<</Abstract>>> <<<Introduction>>> The collection and analysis of historical document images is a key component in the preservation of culture and heritage. Given its importance, a number of active research efforts exist across the world BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. In this paper, we focus on palm-leaf and early paper documents from the Indian sub-continent. In contrast with modern or recent era documents, such manuscripts are considerably more fragile, prone to degradation from elements of nature and tend to have a short shelf life BIBREF6, BIBREF7, BIBREF8. More worryingly, the domain experts who can decipher such content are small in number and dwindling. Therefore, it is essential to access the content within these documents before it is lost forever. Surprisingly, no large-scale annotated Indic manuscript image datasets exist for the benefit of researchers in the community. In this paper, we take a significant step to address this gap by creating such a dataset. Given the large diversity in language, script and non-textual regional elements in these manuscripts, spatial layout parsing is crucial in enabling downstream applications such as OCR, word-spotting, style-and-content based retrieval and clustering. For this reason, we first tackle the problem of creating a diverse, annotated spatial layout dataset. This has the immediate advantage of bypassing the hurdle of language and script familiarity for annotators since layout annotation does not require any special expertise unlike text annotation. In general, manuscripts from Indian subcontinent pose many unique challenges (Figure FIGREF1). To begin with, the documents exhibit a large multiplicity of languages. This is further magnified by variations in intra-language script systems. Along with text, manuscripts may contain pictures, tables, non-pictorial decorative elements in non-standard layouts. 
A unique aspect of Indic and South-East Asian manuscripts is the frequent presence of holes punched in the document for the purpose of binding BIBREF8, BIBREF9, BIBREF6. These holes cause unnatural gaps within text lines. The physical dimensions of the manuscripts are typically smaller compared to other historical documents, resulting in a dense content layout. Sometimes, multiple manuscript pages are present in a single image. Moreover, imaging-related factors such as varying scan quality play a role as well. Given all of these challenges, it is important to develop robust and scalable approaches for the problem of layout parsing. In addition, given the typical non-technical nature of domain experts who study manuscripts, it is also important to develop easy-to-use graphical interfaces for annotation, post-annotation visualization and analytics. We make the following contributions: We introduce Indiscapes, the first ever historical Indic manuscript dataset with detailed spatial layout annotations (Section SECREF3). We adapt a deep neural network architecture for instance-level spatial layout parsing of historical manuscript images (Section SECREF16). We also introduce a lightweight web-based GUI for annotation and dashboard-style analytics keeping in mind the non-technical domain experts and the unique layout-level challenges of Indic manuscripts (Section SECREF11). <<</Introduction>>> <<<Related Work>>> A number of research groups have invested significant efforts in the creation and maintenance of annotated, publicly available historical manuscript image datasets BIBREF10, BIBREF11, BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF12. Other collections contain character-level and word-level spatial annotations for South-East Asian palm-leaf manuscripts BIBREF9, BIBREF4, BIBREF13. In these latter set of works, annotations for lines are obtained by considering the polygonal region formed by union of character bounding boxes as a line. While studies on Indic palm-leaf and paper-based manuscripts exist, these are typically conducted on small and often, private collections of documents BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. No publicly available large-scale, annotated dataset of historical Indic manuscripts exists to the best of our knowledge. In contrast with existing collections, our proposed dataset contains a much larger diversity in terms of document type (palm-leaf and early paper), scripts and annotated layout elements (see Tables TABREF5,TABREF8). An additional level of complexity arises from the presence of multiple manuscript pages within a single image (see Fig. FIGREF1). A number of contributions can also be found for the task of historical document layout parsing BIBREF21, BIBREF22, BIBREF23, BIBREF24. Wei et al. BIBREF22 explore the effect of using a hybrid feature selection method while using autoencoders for semantic segmentation in five historical English and Medieval European manuscript datasets. Chen et al. BIBREF24 explore the use of Fully Convolutional Networks (FCN) for the same datasets. Barakat et al. BIBREF25 propose a FCN for segmenting closely spaced, arbitrarily oriented text lines from an Arabic manuscript dataset. The mentioned approaches, coupled with efforts to conduct competitions on various aspects of historical document layout analysis have aided progress in this area BIBREF26, BIBREF27, BIBREF28. 
A variety of layout parsing approaches, including those employing the modern paradigm of deep learning, have been proposed for Indic BIBREF17, BIBREF19, BIBREF29, BIBREF20 and South-East Asian BIBREF23, BIBREF30, BIBREF13, BIBREF31, BIBREF32 palm-leaf and paper manuscript images. However, existing approaches typically employ brittle hand-crafted features or demonstrate performance on datasets which are limited in terms of layout diversity. Similar to many recent works, we employ Fully Convolutional Networks in our approach. However, a crucial distinction lies in our formulation of layout parsing as an instance segmentation problem, rather than just a semantic segmentation problem. This avoids the problem of closely spaced layout regions (e.g. lines) being perceived as contiguous blobs. The ready availability of annotation and analysis tools has facilitated progress in creation and analysis of historical document manuscripts BIBREF33, BIBREF34, BIBREF35. The tool we propose in the paper contains many of the features found in existing annotation systems. However, some of these systems are primarily oriented towards single-user, offline annotation and do not enable a unified management of annotation process and monitoring of annotator performance. In contrast, our web-based system addresses these aspects and provides additional capabilities. Many of the additional features in our system are tailored for annotation and examining annotation analytics for documents with dense and irregular layout elements, especially those found in Indic manuscripts. In this respect, our annotation system is closer to the recent trend of collaborative, cloud/web-based annotation systems and services BIBREF36, BIBREF37, BIBREF38. <<</Related Work>>> <<<Indiscapes: The Indic manuscript dataset>>> The Indic manuscript document images in our dataset are obtained from two sources. The first source is the publicly available Indic manuscript collection from University of Pennsylvania's Rare Book and Manuscript Library BIBREF39, also referred to as Penn-in-Hand (PIH). From the $2{,}880$ Indic manuscript book-sets, we carefully curated 193 manuscript images for annotation. Our curated selection aims to maximize the diversity of the dataset in terms of various attributes such as the extent of document degradation, script language, presence of non-textual elements (e.g. pictures, tables) and number of lines. Some images contain multiple manuscript pages stacked vertically or horizontally (see bottom-left image in Figure FIGREF1). The second source for manuscript images in our dataset is Bhoomi, an assorted collection of 315 images sourced from multiple Oriental Research Institutes and libraries across India. As with the first collection, we chose a subset intended to maximize the overall diversity of the dataset. However, this latter set of images are characterized by a relatively inferior document quality, presence of multiple languages and from a layout point of view, predominantly contain long, closely and irregularly spaced text lines, binding holes and degradations (Figure FIGREF1). Though some document images contain multiple manuscripts, we do not attempt to split the image into multiple pages. While this poses a challenge for annotation and automatic image parsing, retaining such images in the dataset eliminates manual/semi-automatic intervention. As our results show, our approach can successfully handle such multi-page documents, thereby making it truly an end-to-end system. 
Overall, our dataset contains 508 annotated Indic manuscripts. Some salient aspects of the dataset can be viewed in Table TABREF5 and a pictorial illustration of layout regions can be viewed in Figure FIGREF13. Note that multiple regions can overlap, unlike existing historical document datasets which typically contain disjoint region annotations. For the rest of the section, we discuss the challenges associated with annotating Indic manuscripts (Section SECREF9) and our web-based annotation tool (Section SECREF11). <<<Annotation Challenges>>> A variety of unique challenges exist in the context of annotating Indic manuscript layouts. The challenges arise from three major sources. Content: The documents are written in a large variety of Indic languages. Some languages even exhibit intra-language script variations. A large pool of annotators familiar with the languages and scripts present in the corpus is required to ensure proper annotation of lines and character components. Layout: Unlike some of the existing datasets, Indic manuscripts contain non-textual elements such as color pictures, tables and document decorations. These elements are frequently interspersed with text in non-standard layouts. In many cases, the manuscripts contain one or more physical holes, designed for a thread-like material to pass through and bind the leaves together as a book. Such holes vary in terms of spatial location, count and hole diameter. When the holes are present in the middle of the document, they cause a break in the contiguity of lines. In some documents, the line contiguity is broken by a `virtual' hole-like gap, possibly intended for creation of the punched hole at a future time. In many cases, the separation between lines is extremely small. The handwritten nature of these documents and the surface material result in extremely uneven lines, necessitating meticulous and slow annotation. If multiple manuscript pages are present, the stacking order could be horizontal or vertical. Overall, the sheer variety in layout elements poses a significant challenge, not only for annotation, but also for automated layout parsing. Degradations: Historical Indic manuscripts tend to be inherently fragile and prone to damage due to various sources – wood-and-leaf-boring insects, humidity seepage, improper storage and handling etc. While some degradations cause the edges of the document to become frayed, others manifest as irregularly shaped perforations in the document interior. It may be important to identify such degradations before attempting lexically-focused tasks such as OCR or word-spotting. <<</Annotation Challenges>>> <<<Annotation Tool>>> Keeping the aforementioned challenges in mind, we introduce a new browser-based annotation tool (see Figure FIGREF10). The tool is designed to operate both stand-alone and as a web-service. The web-service mode enables features such as distributed parallel sessions by registered annotators, dashboard-based live session monitoring and a wide variety of annotation-related analytics. On the front-end, a freehand region option is provided alongside the usual rectangle and polygon to enable maximum annotation flexibility. The web-service version also features a `Correction-mode' which enables annotators to correct existing annotations from previous annotators. Additionally, the tool has been designed to enable lexical (text) annotations in future. 
<<</Annotation Tool>>> <<</Indiscapes: The Indic manuscript dataset>>> <<<Indic Manuscript Layout Parsing>>> To succeed at layout parsing of manuscripts, we require a system which can accurately localize various types of regions (e.g. text lines, isolated character components, physical degradation, pictures, holes). More importantly, we require a system which can isolate individual instances of each region (e.g. multiple text lines) in the manuscript image. Also, in our case, the annotation regions for manuscripts are not disjoint and can overlap (e.g. The annotation region for a text line can overlap with the annotation region of a hole (see Figure FIGREF13)). Therefore, we require a system which can accommodate such overlaps. To meet all of these requirements, we model our problem as one of semantic instance-level segmentation and employ the Mask R-CNN BIBREF40 architecture which has proven to be very effective at the task of object-instance segmentation in photos. Next, we briefly describe the Mask R-CNN architecture and our modifications of the same. Subsequently, we provide details related to implementation (Section SECREF17), model training (Section SECREF18) and inference (Section SECREF19). <<<Network Architecture>>> The Mask-RCNN architecture contains three stages as described below (see Figure FIGREF12). Backbone: The first stage, referred to as the backbone, is used to extract features from the input image. It consists of a convolutional network combined with a feature-pyramid network BIBREF41, thereby enabling multi-scale features to be extracted. We use the first four blocks of ResNet-50 BIBREF42 as the convolutional network. Region Proposal Network (RPN): This is a convolutional network which scans the pyramid feature map generated by the backbone network and generates rectangular regions commonly called `object proposals' which are likely to contain objects of interest. For each level of the feature pyramid and for each spatial location at a given level, a set of level-specific bounding boxes called anchors are generated. The anchors typically span a range of aspect ratios (e.g. $1:2, 1:1, 2:1$) for flexibility in detection. For each anchor, the RPN network predicts (i) the probability of an object being present (`objectness score') (ii) offset coordinates of a bounding box relative to location of the anchor. The generated bounding boxes are first filtered according to the `objectness score'. From boxes which survive the filtering, those that overlap with the underlying object above a certain threshold are chosen. After applying non-maximal suppression to remove overlapping boxes with relatively smaller objectness scores, the final set of boxes which remain are termed `object proposals' or Regions-of-Interest (RoI). Multi-Task Branch Networks: The RoIs obtained from RPN are warped into fixed dimensions and overlaid on feature maps extracted from the backbone to obtain RoI-specific features. These features are fed to three parallel task sub-networks. The first sub-network maps these features to region labels (e.g. Hole,Character-Line-Segment) while the second sub-network maps the RoI features to bounding boxes. The third sub-network is fully convolutional and maps the features to the pixel mask of the underlying region. Note that the ability of the architecture to predict masks independently for each RoI plays a crucial role in obtaining instance segmentations. Another advantage is that it naturally addresses situations where annotations or predictions overlap. 
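To ground the architecture description, a hedged sketch of how a comparable detector could be assembled with torchvision's off-the-shelf Mask R-CNN is given below; the concrete anchor sizes, the height/width convention of the aspect ratios, and the argument names are assumptions, and the authors' own implementation may differ.

```python
import torchvision
from torchvision.models.detection.anchor_utils import AnchorGenerator

# Five anchor scales (one per feature-pyramid level; the sizes here are only
# illustrative) combined with the three aspect ratios 1:1, 1:3 and 1:10.
anchor_generator = AnchorGenerator(
    sizes=((32,), (64,), (128,), (256,), (512,)),
    aspect_ratios=((1.0, 1.0 / 3.0, 1.0 / 10.0),) * 5,
)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights="DEFAULT",                # COCO-pretrained weights (argument name varies by version)
    rpn_anchor_generator=anchor_generator,
    rpn_post_nms_top_n_train=512,     # cap on the number of RoIs ("object proposals")
    box_score_thresh=0.5,             # keep detections scoring above 0.5 at inference
    box_detections_per_img=100,       # top 100 region detections per image
)
# The classification, box, and mask heads would then be resized to the number of
# manuscript region classes before fine-tuning.
```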
<<</Network Architecture>>> <<<Implementation Details>>> The dataset splits used for training, validation and test phases can be seen in Table TABREF6. All manuscript images are adaptively resized to ensure the width does not exceed 1024 pixels. The images are padded with zeros such that the input to the deep network has spatial dimensions of $1024 \times 1024$. The ground truth region masks are initially subjected to a similar resizing procedure. Subsequently, they are downsized to $28 \times 28$ in order to match output dimensions of the mask sub-network. <<<Training>>> The network is initialized with weights obtained from a Mask R-CNN trained on the MS-COCO BIBREF43 dataset with a ResNet-50 backbone. We found that this results in faster convergence and stabler training compared to using weights from a Mask-RCNN trained on ImageNet BIBREF44 or training from scratch. Within the RPN network, we use custom-designed anchors of 5 different scales and with 3 different aspect ratios. Specifically, we use the following aspect ratios – 1:1,1:3,1:10 – keeping in mind the typical spatial extents of the various region classes. We also limit the number of RoIs (`object proposals') to 512. We use categorical cross entropy loss $\mathcal {L}_{RPN}$ for RPN classification network. Within the task branches, we use categorical cross entropy loss $\mathcal {L}_{r}$ for region classification branch, smooth L1 loss BIBREF45 ($\mathcal {L}_{bb}$) for final bounding box prediction and per-pixel binary cross entropy loss $\mathcal {L}_{mask}$ for mask prediction. The total loss is a convex combination of these losses, i.e. $\mathcal {L} = \lambda _{RPN} \mathcal {L}_{RPN} + \lambda _{r} \mathcal {L}_{r} + \lambda _{bb} \mathcal {L}_{bb} + \lambda _{mask} \mathcal {L}_{mask}$. The weighting factors ($\lambda $s) are set to 1. However, to ensure priority for our task of interest namely mask prediction, we set $\lambda _{mask}=2$. For optimization, we use Stochastic Gradient Descent (SGD) optimizer with a gradient norm clipping value of $0.5$. The batch size, momentum and weight decay are set to 1, $0.9$ and $10^{-3}$ respectively. Given the relatively smaller size of our manuscript dataset compared to the photo dataset (MS-COCO) used to originally train the base Mask R-CNN, we adopt a multi-stage training strategy. For the first stage (30 epochs), we train only the task branch sub-networks using a learning rate of $10^{-3}$ while freezing weights in the rest of the overall network. This ensures that the task branches are fine-tuned for the types of regions contained in manuscript images. For the second stage (20 epochs), we additionally train stage-4 and up of the backbone ResNet-50. This enables extraction of appropriate semantic features from manuscript images. The omission of the initial 3 stages in the backbone for training is due to the fact that they provide generic, re-usable low-level features. To ensure priority coverage of hard-to-localize regions, we use focal loss BIBREF46 for mask generation. For the final stage (15 epochs), we train the entire network using a learning rate of $10^{-4}$. <<</Training>>> <<<Inference>>> During inference, the images are rescaled and processed using the procedure described at the beginning of the subsection. The number of RoIs retained after non-maximal suppression (NMS) from the RPN is set to 1000. From these, we choose the top 100 region detections with objectness score exceeding $0.5$ and feed the corresponding RoIs to the mask branch sub-network for mask generation. 
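Continuing the torchvision sketch above, the weighted objective and optimizer settings just described could look roughly as follows; the loss-dictionary keys are assumptions and are not guaranteed to match those returned by any particular Mask R-CNN implementation.

```python
import torch

def total_loss(loss_dict, lambda_mask=2.0):
    # Weighted sum of the four terms; all weights are 1 except the mask term,
    # which is up-weighted to prioritize mask prediction.
    return (loss_dict["loss_rpn"]                    # L_RPN: RPN classification loss
            + loss_dict["loss_cls"]                  # L_r: region classification loss
            + loss_dict["loss_bbox"]                 # L_bb: smooth-L1 bounding box loss
            + lambda_mask * loss_dict["loss_mask"])  # L_mask: per-pixel mask loss

params = [p for p in model.parameters() if p.requires_grad]  # frozen stages are excluded
optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9, weight_decay=1e-3)

# Inside the training loop (the learning rate is lowered to 1e-4 in the final stage):
#   loss = total_loss(loss_dict)
#   loss.backward()
#   torch.nn.utils.clip_grad_norm_(params, max_norm=0.5)  # gradient norm clipping at 0.5
#   optimizer.step()
#   optimizer.zero_grad()
```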
It is important to note that this strategy is different from the parallel generation of outputs and use of the task sub-networks during training. The generated masks are then binarized using an empirically chosen threshold of $0.4$ and rescaled to their original size using bilinear interpolation. On these generated masks, NMS with a threshold value of $0.5$ is applied to obtain the final set of predicted masks. <<</Inference>>> <<</Implementation Details>>> <<<Evaluation>>> For quantitative evaluation, we compute Average Precision (AP) for a particular IoU threshold, a measure widely reported in instance segmentation literature BIBREF47, BIBREF43. We specifically report $AP_{50}$ and $AP_{75}$, corresponding to AP at IoU thresholds 50 and 75 respectively BIBREF40. In addition, we report an overall score by averaging AP at different IoU thresholds ranging from $0.5$ to $0.95$ in steps of $0.05$. The AP measure characterizes performance at document level. To characterize performance for each region type, we report two additional measures BIBREF24 – average class-wise IoU (cwIoU) and average class-wise per-pixel accuracy (cwAcc). Consider a fixed test document $k$. Suppose there are $r_i$ regions of class $i$ and let ${IoU}_r$ denote the IoU score for one such region $r$, i.e. $1 \leqslant r \leqslant r_i$. The per-class IoU score for class $i$ and document $k$ is computed as ${cwIoU}^d_i = \frac{\sum _r {IoU}_r}{r_i}$. Suppose there are $N_i$ documents containing at least a single region of class $i$ in ground-truth. The overall per-class IoU score for class $i$ is computed as ${cwIoU}_i = \frac{\sum _d {cwIoU}^d_i}{N_i}$. In a similar manner, we define class-wise pixel accuracy ${pwAcc}^d_i$ at document level and average it across all the documents containing class $i$, i.e. ${cwAcc}_i = \frac{\sum _d {pwAcc}^d_i}{N_i}$. Note that our approach for computing class-wise scores prevents documents with a relatively larger number of class instances from dominating the score and in this sense, differs from existing approaches BIBREF24 <<</Evaluation>>> <<</Indic Manuscript Layout Parsing>>> <<<Results>>> We report quantitative results using the measures described in Section SECREF20. Table TABREF14 reports Average Precision and Table TABREF15 reports class-wise average IOUs and per-pixel accuracies. Qualitative results can be viewed in Figure FIGREF13. Despite the challenges posed by manuscripts, our model performs reasonably well across a variety of classes. As the qualitative results indicate, the model predicts accurate masks for almost all the regions. The results also indicate that our model handles overlap between Holes and Character line segments well. From ablative experiments, we found that our choice of focal loss was crucial in obtaining accurate mask boundaries. Unlike traditional semantic segmentation which would have produced a single blob-like region for line segments, our instance-based approach isolates each text line separately. Additionally, the clear demarcation between Page-Boundary and background indicates that our system identifies semantically relevant regions for downstream analysis. As the result at the bottom of Figure FIGREF13 shows, our system can even handle images with multiple pages, thus removing the need for any pre-processing related to isolation of individual pages. 
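Before turning to the quantitative results, the aggregation behind the class-wise scores defined in the Evaluation section above can be made concrete. The sketch below shows one way the ${cwIoU}_i$ and ${cwAcc}_i$ measures could be computed; it is a minimal illustration under an assumed input format (per-region IoU and pixel accuracy precomputed elsewhere), not the evaluation code used for the reported numbers.

from collections import defaultdict

def classwise_scores(per_region_scores):
    """per_region_scores: iterable of (doc_id, class_id, iou, pixel_acc),
    with one entry for every ground-truth region instance."""
    # Step 1: document-level per-class averages, cwIoU^d_i = (sum_r IoU_r) / r_i
    doc_class = defaultdict(list)
    for doc_id, cls, iou, acc in per_region_scores:
        doc_class[(doc_id, cls)].append((iou, acc))

    # Step 2: average the document-level scores over the N_i documents that
    # contain at least one region of class i, giving cwIoU_i and cwAcc_i.
    per_class = defaultdict(list)
    for (doc_id, cls), scores in doc_class.items():
        doc_iou = sum(s[0] for s in scores) / len(scores)
        doc_acc = sum(s[1] for s in scores) / len(scores)
        per_class[cls].append((doc_iou, doc_acc))

    cw_iou = {cls: sum(s[0] for s in v) / len(v) for cls, v in per_class.items()}
    cw_acc = {cls: sum(s[1] for s in v) / len(v) for cls, v in per_class.items()}
    return cw_iou, cw_acc

Averaging per document before averaging across documents is what keeps documents with many instances of a class from dominating the final score, as noted in the Evaluation section.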
From quantitative results, we observe that Holes, Character line segments, Page boundary and Pictures are parsed the best, while Physical degradations are difficult to parse due to their relatively small footprint and inconsistent patterns. The results show that performance for Penn in Hand (PIH) documents is better compared to Bhoomi manuscripts. We conjecture that the presence of closely spaced and unevenly written lines in the latter is the cause. In our approach, two (or more) objects may share the same bounding box in terms of overlap, and it is not possible to determine which box to choose during mask prediction. Consequently, an underlying line's boundary may either end up not being detected or the predicted mask might be poorly localized. However, this is not a systemic problem since our model achieves good performance even for very dense Bhoomi document line layouts. <<</Results>>> <<<Conclusion>>> Via this paper, we propose Indiscapes, the first dataset with layout annotations for historical Indic manuscripts. We believe that the availability of layout annotations will play a crucial role in reducing the overall complexity for OCR and other tasks such as word-spotting and style-and-content based retrieval. In the long term, we intend to expand the dataset, not only numerically but also in terms of layout, script and language diversity. As a significant contribution, we have also adapted a deep-network based instance segmentation framework, custom-modified for fully automatic layout parsing. Given the general nature of our framework, advances in instance segmentation approaches can be leveraged, thereby improving performance over time. Our proposed web-based annotator system, although designed for Indic manuscripts, is flexible and could be reused for similar manuscripts from the Asian subcontinent. We intend to expand the capabilities of our annotator system in many useful ways. For instance, the layout estimated by our deep network could be provided to annotators for correction, thus reducing annotation effort. Finally, we plan to make our dataset, instance segmentation system and annotator system publicly available. This would enable large-scale data collection and automated analysis efforts for Indic as well as other historical Asian manuscripts. The repositories related to the systems presented in this paper and the Indiscapes dataset can be accessed at https://ihdia.iiit.ac.in. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Conclusion" ], "type": "disordered_section" }
1911.01188
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Analysing Coreference in Transformer Outputs <<<Abstract>>> We analyse coreference phenomena in three neural machine translation systems trained with different data settings with or without access to explicit intra- and cross-sentential anaphoric information. We compare system performance on two different genres: news and TED talks. To do this, we manually annotate (the possibly incorrect) coreference chains in the MT outputs and evaluate the coreference chain translations. We define an error typology that aims to go further than pronoun translation adequacy and includes types such as incorrect word selection or missing words. The features of coreference chains in automatic translations are also compared to those of the source texts and human translations. The analysis shows stronger potential translationese effects in machine translated outputs than in human translations. <<</Abstract>>> <<<Introduction>>> In the present paper, we analyse coreference in the output of three neural machine translation systems (NMT) that were trained under different settings. We use a transformer architecture BIBREF0 and train it on corpora of different sizes with and without the specific coreference information. Transformers are the current state-of-the-art in NMT BIBREF1 and are solely based on attention, therefore, the kind of errors they produce might be different from other architectures such as CNN or RNN-based ones. Here we focus on one architecture to study the different errors produced only under different data configurations. Coreference is an important component of discourse coherence which is achieved in how discourse entities (and events) are introduced and discussed. Coreference chains contain mentions of one and the same discourse element throughout a text. These mentions are realised by a variety of linguistic devices such as pronouns, nominal phrases (NPs) and other linguistic means. As languages differ in the range of such linguistic means BIBREF2, BIBREF3, BIBREF4, BIBREF5 and in their contextual restrictions BIBREF6, these differences give rise to problems that may result in incoherent (automatic) translations. We focus on coreference chains in English-German translations belonging to two different genres. In German, pronouns, articles and adjectives (and some nouns) are subject to grammatical gender agreement, whereas in English, only person pronouns carry gender marking. An incorrect translation of a pronoun or a nominal phrase may lead to an incorrect relation in a discourse and will destroy a coreference chain. Recent studies in automatic coreference translation have shown that dedicated systems can lead to improvements in pronoun translation BIBREF7, BIBREF8. However, standard NMT systems work at sentence level, so improvements in NMT translate into improvements on pronouns with intra-sentential antecedents, but the phenomenon of coreference is not limited to anaphoric pronouns, and even less to a subset of them. Document-level machine translation (MT) systems are needed to deal with coreference as a whole. Although some attempts to include extra-sentential information exist BIBREF9, BIBREF10, BIBREF11, BIBREF12, the problem is far from being solved. 
Besides that, some further problems of NMT that do not seem to be related to coreference at first glance (such as the translation of unknown words and proper names or the hallucination of additional words) cause coreference-related errors. In our work, we focus on the analysis of complete coreference chains, manually annotating them in the three translation variants. We also evaluate them from the point of view of coreference chain translation. The goal of this paper is two-fold. On the one hand, we are interested in various properties of coreference chains in these translations. They include the total number of chains, the average chain length, the size of the longest chain and the total number of annotated mentions. These features are compared to those of the underlying source texts and also the corresponding human translation reference. On the other hand, we are also interested in the quality of coreference translations. Therefore, we define a typology of errors, and chain members in MT output are annotated as to whether or not they are correct. The main focus is on such errors as gender, number and case of the mentions, but we also consider wrong word selection or missing words in a chain. Unlike previous work, we do not restrict ourselves to pronouns. Our analyses show that there are further errors that are not directly related to coreference but consequently have an influence on the correctness of coreference chains. The remainder of the paper is organised as follows. Section SECREF2 introduces the main concepts and presents an overview of related MT studies. Section SECREF3 provides details on the data, systems used and annotation procedures. Section SECREF4 analyses the performance of our transformer systems on coreferent mentions. Finally, we summarise and draw conclusions in Section SECREF5. <<</Introduction>>> <<<Background and Related Work>>> <<<Coreference>>> Coreference is related to cohesion and coherence. The latter is the logical flow of inter-related ideas in a text, whereas cohesion refers to the text-internal relationship of linguistic elements that are overtly connected via lexico-grammatical devices across sentences BIBREF13. As stated by BIBREF14, this connectedness of texts implies dependencies between sentences. And if these dependencies are neglected in translation, the output text no longer has the property of connectedness which makes a sequence of sentences a text. Coreference expresses identity with a referent mentioned in another textual part (not necessarily in neighbouring sentences), contributing to text connectedness. An addressee follows the mentioned referents and identifies them when they are repeated. Identification of certain referents depends not only on the lexical form, but also on other linguistic means, e.g. articles or modifying pronouns BIBREF15. The use of these is influenced by various factors which can be language-dependent (range of linguistic means available in grammar) and also context-independent (pragmatic situation, genre). Thus, the means of expressing reference differ across languages and genres. This has been shown by studies in the area of contrastive linguistics BIBREF6, BIBREF3, BIBREF5. Analyses in cross-lingual coreference resolution BIBREF16, BIBREF17, BIBREF18, BIBREF19 show that there are still unsolved problems that should be addressed.
<<</Coreference>>> <<<Translation studies>>> Differences between languages and genres in the linguistic means expressing reference are important for translation, as the choice of an appropriate referring expression in the target language poses challenges for both human and machine translation. In translation studies, there is a number of corpus-based works analysing these differences in translation. However, most of them are restricted to individual phenomena within coreference. For instance, BIBREF20 analyse abstract anaphors in English-German translations. To our knowledge, they do not consider chains. BIBREF21 in their contrastive analysis of potential coreference chain members in English-German translations, describe transformation patterns that contain different types of referring expressions. However, the authors rely on automatic tagging and parsing procedures and do not include chains into their analysis. The data used by BIBREF4 and BIBREF22 contain manual chain annotations. The authors focus on different categories of anaphoric pronouns in English-Czech translations, though not paying attention to chain features (e.g. their number or size). Chain features are considered in a contrastive analysis by BIBREF6. Their study concerns different phenomena in a variety of genres in English and German comparable texts. Using contrastive interpretations, they suggest preferred translation strategies from English into German, i.e. translators should use demonstrative pronouns instead of personal pronouns (e.g. dies/das instead of es/it) when translating from English into German and vice versa. However, corpus-based studies show that translators do not necessarily apply such strategies. Instead, they often preserve the source language anaphor's categories BIBREF20 which results in the shining through effects BIBREF23. Moreover, due to the tendency of translators to explicitly realise meanings in translations that were implicit in the source texts BIBREF24, translations are believed to contain more (explicit) referring expressions, and subsequently, more (and longer) coreference chains. Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations. <<</Translation studies>>> <<<Coreference in MT>>> As explained in the introduction, several recent works tackle the automatic translation of pronouns and also coreference BIBREF25, BIBREF26 and this has, in part, motivated the creation of devoted shared tasks and test sets to evaluate the quality of pronoun translation BIBREF7, BIBREF27, BIBREF28, BIBREF29. But coreference is a wider phenomenon that affects more linguistic elements. Noun phrases also appear in coreference chains but they are usually studied under coherence and consistency in MT. BIBREF30 use topic modelling to extract coherence chains in the source, predict them in the target and then promote them as translations. BIBREF31 use word embeddings to enforce consistency within documents. Before these works, several methods to post-process the translations and even including a second decoding pass were used BIBREF32, BIBREF33, BIBREF34, BIBREF35. 
Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually context is limited to the previous sentence, so chains as a whole are never considered. BIBREF10 encode both a source and a context sentence and then combine them to obtain a context-aware input. The same idea was implemented before by BIBREF36 where they concatenate a source sentence with the previous one to include context. Caches BIBREF37, memory networks BIBREF38 and hierarchical attention methods BIBREF39 allow to use a wider context. Finally, our work is also related to BIBREF40 and BIBREF41 where their oracle translations are similar to the data-based approach we introduce in Section SECREF4. <<</Coreference in MT>>> <<</Background and Related Work>>> <<<Systems, Methods and Resources>>> <<<State-of-the-art NMT>>> Our NMT systems are based on a transformer architecture BIBREF0 as implemented in the Marian toolkit BIBREF42 using the transformer big configuration. We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). The third system includes a modification to consider the information of full coreference chains throughout a document augmenting the sentence to be translated with this information and it is trained with the same amount of sentence pairs as S1. A variant of the S3 system participated in the news machine translation of the shared task held at WMT 2019 BIBREF43. <<<S1>>> is trained with the concatenation of Common Crawl, Europarl, a cleaned version of Rapid and the News Commentary corpus. We oversample the latter in order to have a significant representation of data close to the news genre in the final corpus. <<</S1>>> <<<S2>>> uses the same data as S1 with the addition of a filtered portion of Paracrawl. This corpus is known to be noisy, so we use it to create a larger training corpus but it is diluted by a factor 4 to give more importance to high quality translations. <<</S2>>> <<<S3>>> S3 uses the same data as S1, but this time enriched with the cross- and intra-sentential coreference chain markup as described below. The information is included as follows. Source documents are annotated with coreference chains using the neural annotator of Stanford CoreNLP BIBREF44. The tool detects pronouns, nominal phrases and proper names as mentions in a chain. For every mention, CoreNLP extracts its gender (male, female, neutral, unknown), number (singular, plural, unknown), and animacy (animate, inanimate, unknown). This information is not added directly but used to enrich the single sentence-based MT training data by applying a set of heuristics implemented in DocTrans: We enrich pronominal mentions with the exception of "I" with the head (main noun phrase) of the chain. The head is cleaned by removing articles and Saxon genitives and we only consider heads with less than 4 tokens in order to avoid enriching a word with a full sentence We enrich nominal mentions including proper names with the gender of the head The head itself is enriched with she/he/it/they depending on its gender and animacy The enrichment is done with the addition of tags as shown in the examples: I never cook with $<$b_crf$>$ salt $<$e_crf$>$ it. $<$b_crf$>$ she $<$e_crf$>$ Biles arrived late. In the first case heuristic 1 is used, salt is the head of the chain and it is prepended to the pronoun. 
The second example shows a sentence where heuristic 2 has been used and the proper name Biles has now information about the gender of the person it is referring to. Afterwards, the NMT system is trained at sentence level in the usual way. The data used for the three systems is cleaned, tokenised, truecased with Moses scripts and BPEd with subword-nmt using separated vocabularies with 50 k subword units each. The validation set ($news2014$) and the test sets described in the following section are pre-processed in the same way. <<</S3>>> <<</State-of-the-art NMT>>> <<<Test data under analysis>>> As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) are presented in Table TABREF20. Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type have also more potential tokens to enter in a coreference relation, and potentially indicating a shining through effect. The same does not happen with the news test set. <<</Test data under analysis>>> <<<Manual annotation process>>> The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull. In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference. The annotation of machine-translated texts was integrated into a university course on discourse phenomena. 
Our annotators, well-trained students of linguistics, worked in small groups on the assigned annotation tasks (4-5 texts, i.e. 12-15 translations per group). At the beginning of the annotation process, the categories under analysis were discussed within the small groups and also in the class. The final versions of the annotation were then corrected by the instructor. <<</Manual annotation process>>> <<</Systems, Methods and Resources>>> <<<Results and Analyses>>> <<<Chain features>>> First, we compare the distribution of several chain features in the three MT outputs, their source texts and the corresponding human translations. Table TABREF20 shows that, overall, all machine translations contain a greater number of annotated mentions in both news texts and TED talks than in the annotated source (src and src$_{\rm CoreNLP}$) and reference (ref) texts. Notice that src$_{\rm CoreNLP}$ —where coreferences are not manually but automatically annotated with CoreNLP— counts also the tokens that the mentions add to the sentences, but not the tags. The larger number of mentions may indicate a strong explicitation effect observed in machine-translated texts. Interestingly, CoreNLP detects a similar number of mentions in both genres, while human annotators clearly marked more chains for TED than for news. Both genres are in fact quite different in nature; whereas only $37\%$ of the mentions are pronominal in news texts (343 out of 915), the number grows to $58\%$ for TED (577 out of 989), and this could be an indicator of the difficulty of the genres for NMT systems. There is also a variation in terms of chain number between translations of TED talks and news. While automatic translations of news texts contain more chains than the corresponding human annotated sources and references, machine-translated TED talks contain less chains than the sources and human translations. However, there is not much variation between the chain features of the three MT outputs. The chains are also longer in machine-translated output than in reference translations as can be seen by the number of mentions per chain and the length of the longest chain. <<</Chain features>>> <<<MT quality at system level>>> We evaluate the quality of the three transformer engines with two automatic metrics, BLEU BIBREF49 and METEOR BIBREF50. Table TABREF25 shows the scores in two cases: all, when the complete texts are evaluated and coref, when only the subset of sentences that have been augmented in S3 are considered – 265 out of 494 for news and 239 out of 518 for TED. For news, the best system is that trained on more data, S2; but for TED talks S3 with less data has the best performance. The difference between the behaviour of the systems can be related to the different genres. We have seen that news are dominated by nominal mentions while TED is dominated by pronominal ones. Pronouns mostly need coreference information to be properly translated, while noun phrases can be improved simply because more instances of the nouns appear in the training data. With this, S3 improves the baseline S1 in +1.1 BLEU points for TED$_{coref}$ but -0.2 BLEU points for news$_{coref}$. However, even if the systems differ in the overall performance, the change is not related to the number of errors in coreference chains. Table TABREF25 also reports the number of mistakes in the translation of coreferent mentions. Whereas the number of errors correlates with translation quality (as measured by BLEU) for news$_{coref}$ this is not the case of TED$_{coref}$. 
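Both the chain statistics above and the system-level scores are derived directly from the annotations. As one concrete illustration, the sketch below computes the chain features reported in Table TABREF20 (number of chains, number of mentions, mentions per chain, longest chain) for a single text variant. It is an illustrative sketch under an assumed input format, not the scripts actually used for the paper.

def chain_features(chains):
    """chains: list of coreference chains for one text variant (src, ref or an
    MT output), each chain being the list of its annotated mentions."""
    num_chains = len(chains)
    num_mentions = sum(len(chain) for chain in chains)
    return {
        "chains": num_chains,
        "mentions": num_mentions,
        "mentions_per_chain": num_mentions / num_chains if num_chains else 0.0,
        "longest_chain": max((len(chain) for chain in chains), default=0),
    }

# Comparing, e.g., the S1 output against the source then amounts to computing
# these statistics for both sides and inspecting the differences, as is done
# for src, ref, S1, S2 and S3 in Table TABREF20.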
<<</MT quality at system level>>> <<<Error analysis>>> The total distribution for the 10 categories of errors defined in Section SECREF23 can be seen in Figure FIGREF29. Globally, the proportion of errors due to our closed categories (gender, number, case and ambiguous) is larger for TED talks than for news (see analysis in Section SECREF28). Gender is an issue with all systems and genres which does not get solved by the addition of more data. Additionally, news struggle with wrong words and named entities; for this genre the additional error types (see analysis in Section SECREF30) represent around 60% of the errors of S1/S3 to be compared to the 40% of TED talks. <<<Predefined error categories>>> 0.4em 0.4Within our predefined closed categories (gender, number, case and ambiguous), the gender errors belong to the most frequent errors. They include wrong gender translation of both pronouns, as sie (“her”) instead of ihn (“him”) in example SECREF28 referring to the masculine noun Mindestlohn, and nominal phrases, as der Stasi instead of die Stasi, where a masculine form of the definite article is used instead of a feminine one, in example SECREF28. .src: [The current minimum wage] of 7.25 US dollars is a pittance... She wants to raise [it] to 15 dollars an hour. S3: [Der aktuelle Mindestlohn] von 7,25 US-Dollar sei Almosen... Sie möchte [sie] auf 15 Dollar pro Stunde erhöhen. . src: ...let's have a short look at the history of [the Stasi], because it is really important for understanding [its] self-conception. S2: Lassen sie uns... einen kurzen Blick auf die Geschichte [des Stasi] werfen denn es wirklich wichtig, [seine] Selbstauffassung zu verstehen. The gender-related errors are common to all the automatic translations. Interestingly, systems S1 and S3 have more problems with gender in translations of TED talks, whereas they do better in translating news, which leads us to assume that this is a data-dependent issue: while the antecedent for news is in the same sentence it is not for TED talks. A closer look at the texts with a high number of gender problems confirms this assumption —they contain references to females who were translated with male forms of nouns and pronouns (e.g. Mannschaftskapitän instead of Mannschaftskapitänin). We also observe errors related to gender for the cases of explicitation in translation. Some impersonal English constructions not having direct equivalents in German are translated with personal constructions, which requires an addition of a pronoun. Such cases of explicitation were automatically detected in parallel data in BIBREF21, BIBREF2. They belong to the category of obligatory explicitation, i.e. explicitation dictated by differences in the syntactic and semantic structure of languages, as defined by BIBREF51. An MT system tends to insert a male form instead of a female one even if it's marked as feminine (S3 adds the feminine form she as markup), as illustrated in example SECREF28 where the automatic translation contains the masculine pronoun er (“he”) instead of sie (“she”). . src: [Biles] earned the first one on Tuesday while serving as the exclamation point to retiring national team coordinator Martha Karolyi's going away party. ref: [Biles] holte die erste Medaille am Dienstag, während [sie] auf der Abschiedsfeier der sich in Ruhestand begehenden Mannschaftskoordinatorin Martha Karolyi als Ausrufezeichen diente. 
S2: [Biles] verdiente den ersten am Dienstag, während [er] als Ausrufezeichen für den pensionierten Koordinator der Nationalmannschaft, Martha Karolyi, diente. Another interesting case of a problem related to gender is the dependence of the referring expressions on grammatical restrictions in German. In example SECREF28, the source chain contains the pronoun him referring to both a 6-year-old boy and The child. In German, these two nominal phrases have different gender (masculine vs. neutral). The pronoun has grammatical agreement with the second noun of the chain (des Kindes) and not its head (ein 6 Jahre alter Junge). . src: Police say [a 6-year-old boy] has been shot in Philadelphia... [The child]'s grandparents identified [him] to CBS Philadelphia as [Mahaj Brown]. S1: Die Polizei behauptet, [ein 6 Jahre alter Junge] sei in Philadelphia erschossen worden... Die Großeltern [des Kindes] identifizierten [ihn] mit CBS Philadelphia als [Mahaj Brown]. Case- and number-related errors are less frequent in our data. However, translations of TED talks with S2 contain much more number-related errors than other outputs. Example SECREF28 illustrates this error type which occurs within a sentence. The English source contains the nominal chain in singular the cost – it, whereas the German correspondence Kosten has a plural form and requires a plural pronoun (sie). However, the automatic translation contains the singular pronoun es. . src: ...to the point where [the cost] is now below 1,000 dollars, and it's confidently predicted that by the year 2015 [it] will be below 100 dollars... S2: bis zu dem Punkt, wo [die Kosten] jetzt unter 1.000 Dollar liegen, und es ist zuversichtlich, dass [es] bis zum Jahr 2015 unter 100 Dollar liegen wird... Ambiguous cases often contain a combination of errors or they are difficult to categorise due to the ambiguity of the source pronouns, as the pronoun it in example SECREF28 which may refer either to the noun trouble or even the clause Democracy is in trouble is translated with the pronoun sie (feminine). In case of the first meaning, the pronoun would be correct, but the form of the following verb should be in plural. In case of a singular form, we would need to use a demonstrative pronoun dies (or possibly the personal pronoun es). . src: Democracy is in trouble... and [it] comes in part from a deep dilemma... S2: Die Demokratie steckt in Schwierigkeiten ... und [sie] rührt teilweise aus einem tiefen Dilemma her... <<</Predefined error categories>>> <<<Additional error types>>> At first glance, the error types discussed in this section do not seem to be related to coreference —a wrong translation of a noun can be traced back to the training data available and the way NMT deals with unknown words. However, a wrong translation of a noun may result in its invalidity to be a referring expression for a certain discourse item. As a consequence, a coreference chain is damaged. We illustrate a chain with a wrong named entity translation in example SECREF30. The source chain contains five nominal mentions referring to an American gymnast Aly Raisman: silver medalist – “Final Five” teammate – Aly Raisman – Aly Raisman – Raisman. All the three systems used different names. Example SECREF30 illustrates the translation with S2, where Aly Donovan and Aly Encence were used instead of Aly Raisman, and the mention Raisman disappears completely from the chain. . 
src: Her total of 62.198 was well clear of [silver medalist] and [“Final Five” teammate] [Aly Raisman]...United States' Simone Biles, left, and [Aly Raisman] embrace after winning gold and silver respectively... [Raisman]'s performance was a bit of revenge from four years ago, when [she] tied... S2: Ihre Gesamtmenge von 62.198 war deutlich von [Silbermedaillengewinner] und [“Final Five” Teamkollegen] [Aly Donovan]... Die Vereinigten Staaten Simone Biles, links und [Aly Encence] Umarmung nach dem Gewinn von Gold und Silber... Vor vier Jahren, als [sie]... Example SECREF30 illustrates translation of the chain The scaling in the opposite direction – that scale. The noun phrases Die Verlagerung in die entgegengesetzte Richtung (“the shift in the opposite direction”) and dieses Ausmaß (“extent/scale”) used in the S1 output do not corefer (cf. Wachstum in die entgegengesetzte Richtung and Wachstum in the reference translation). Notice that these cases with long noun phrases are not tackled by S3 either. . src: [The scaling in the opposite direction]...drive the structure of business towards the creation of new kinds of institutions that can achieve [that scale]. ref: [Wachstum in die entgegengesetzte Richtung]... steuert die Struktur der Geschäfte in Richtung Erschaffung von neuen Institutionen, die [dieses Wachstum] erreichen können. S1: [Die Verlagerung in die entgegengesetzte Richtung]... treibt die Struktur der Unternehmen in Richtung der Schaffung neuer Arten von Institutionen, die [dieses Ausmaß] erreichen können. <<</Additional error types>>> <<<Types of erroneous mentions>>> Finally, we also analyse the types of the mentions marked as errors. They include either nominal phrases or pronouns. Table TABREF32 shows that there is a variation between the news texts and TED talks in terms of these features. News contain more erroneous nominal phrases, whereas TED talks contain more pronoun-related errors. Whereas both the news and the TED talks have more errors in translating anaphors, there is a higher proportion of erroneous antecedents in the news than in the TED talks. It is also interesting to see that S3 reduces the percentage of errors in anaphors for TED, but has a similar performance to S2 on news. <<</Types of erroneous mentions>>> <<</Error analysis>>> <<</Results and Analyses>>> <<<Summary and Conclusions>>> We analysed coreferences in the translation outputs of three transformer systems that differ in the training data and in whether they have access to explicit intra- and cross-sentential anaphoric information (S3) or not (S1, S2). We see that the translation errors are more dependent on the genre than on the nature of the specific NMT system: whereas news (with mainly NP mentions) contain a majority of errors related to wrong word selection, TED talks (with mainly pronominal mentions) are prone to accumulate errors on gender and number. System S3 was specifically designed to solve this issue, but we cannot trace the improvement from S1 to S3 by just counting the errors and error types, as some errors disappear and others emerge: coreference quality and automatic translation quality do not correlate in our analysis on TED talks. As a further improvement to address the issue, we could add more parallel data to our training corpus with a higher density of coreference chains such as movie subtitles or parallel TED talks. 
We also characterised the originals and translations according to coreference features such as the total number of chains and mentions, the average chain length and the size of the longest chain. We see that NMT translations increase the number of mentions by about $30\%$ with respect to the human references, showing an even more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and an analysis of the purpose of the additional mentions added by the NMT systems. It would also be interesting to evaluate the quality of the automatically computed coreference chains used for S3. <<</Summary and Conclusions>>> <<</Title>>>
{ "references": [ "Systems, Methods and Resources, Abstract" ], "type": "disordered_section" }
1910.06701
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> NumNet: Machine Reading Comprehension with Numerical Reasoning <<<Abstract>>> Numerical reasoning, such as addition, subtraction, sorting and counting is a critical skill in human's reading comprehension, which has not been well considered in existing machine reading comprehension (MRC) systems. To address this issue, we propose a numerical MRC model named as NumNet, which utilizes a numerically-aware graph neural network to consider the comparing information and performs numerical reasoning over numbers in the question and passage. Our system achieves an EM-score of 64.56% on the DROP dataset, outperforming all existing machine reading comprehension models by considering the numerical relations among numbers. <<</Abstract>>> <<<Introduction>>> Machine reading comprehension (MRC) aims to infer the answer to a question given the document. In recent years, researchers have proposed lots of MRC models BIBREF0, BIBREF1, BIBREF2, BIBREF3 and these models have achieved remarkable results in various public benchmarks such as SQuAD BIBREF4 and RACE BIBREF5. The success of these models is due to two reasons: (1) Multi-layer architectures which allow these models to read the document and the question iteratively for reasoning; (2) Attention mechanisms which would enable these models to focus on the part related to the question in the document. However, most of existing MRC models are still weak in numerical reasoning such as addition, subtraction, sorting and counting BIBREF6, which are naturally required when reading financial news, scientific articles, etc. BIBREF6 proposed a numerically-aware QANet (NAQANet) model, which divides the answer generation for numerical MRC into three types: (1) extracting spans; (2) counting; (3) addition or subtraction over numbers. NAQANet makes a pioneering attempt to answer numerical questions but still does not explicitly consider numerical reasoning. To tackle this problem, we introduce a novel model NumNet that integrates numerical reasoning into existing MRC models. A key problem to answer questions requiring numerical reasoning is how to perform numerical comparison in MRC systems, which is crucial for two common types of questions: (1) Numerical Comparison: The answers of the questions can be directly obtained via performing numerical comparison, such as sorting and comparison, in the documents. For example, in Table TABREF1, for the first question, if the MRC system knows the fact that “$49>47>36>31>22$”, it could easily extract that the second longest field goal is 47-yard. (2) Numerical Condition: The answers of the questions cannot be directly obtained through simple numerical comparison in the documents, but often require numerical comparison for understanding the text. For example, for the second question in Table TABREF1, an MRC system needs to know which age group made up more than 7% of the population to count the group number. Hence, our NumNet model considers numerical comparing information among numbers when answering numerical questions. 
As shown in Figure FIGREF3, NumNet first encodes both the question and passages through an encoding module consisting of convolution layers, self-attention layers and feed-forward layers as well as a passage-question attention layer. After that, we feed the question and passage representations into a numerically-aware graph neural network (NumGNN) to further integrate the comparison information among numbers into their representations. Finally, we utilize the numerically-aware representation of passages to infer the answer to the question. The experimental results on a public numerical MRC dataset DROP BIBREF6 show that our NumNet model achieves significant and consistent improvement as compared to all baseline methods by explicitly performing numerical reasoning over numbers in the question and passage. In particular, we show that our model could effectively deal with questions requiring sorting with multi-layer NumGNN. The source code of our paper is available at https://github.com/ranqiu92/NumNet. <<</Introduction>>> <<<Related Work>>> <<<Machine Reading Comprehension>>> Machine reading comprehension (MRC) has become an important research area in NLP. In recent years, researchers have published a large number of annotated MRC datasets such as CNN/Daily Mail BIBREF7, SQuAD BIBREF4, RACE BIBREF5, TriviaQA BIBREF8 and so on. With the blooming of available large-scale MRC datasets, a great number of neural network-based MRC models have been proposed to answer questions for a given document including Attentive Reader BIBREF9, BiDAF BIBREF3, Interactive AoA Reader BIBREF2, Gated Attention Reader BIBREF1, R-Net BIBREF10, DCN BIBREF11, QANet BIBREF12, and achieve promising results in most existing public MRC datasets. Despite the success of neural network-based MRC models, researchers began to analyze the data and rethink to what extent we have solved the problem of MRC. Some works BIBREF0, BIBREF13, BIBREF14 classify the reasoning skills required to answer the questions into the following types: (1) Exact matching/Paraphrasing; (2) Summary; (3) Logic reasoning; (4) Utilizing external knowledge; (5) Numerical reasoning. They found that most existing MRC models are focusing on dealing with the first three types of questions. However, all these models suffer from problems when answering the questions requiring numerical reasoning. To the best of our knowledge, our work is the first one that explicitly incorporates numerical reasoning into the MRC system. The most relevant work to ours is NAQANet BIBREF6, which adapts the output layer of QANet BIBREF12 to support predicting answers based on counting and addition/subtraction over numbers. However, it does not consider numerical reasoning explicitly during encoding or inference. <<</Machine Reading Comprehension>>> <<<Arithmetic Word Problem Solving>>> Recently, understanding and solving arithmetic word problems (AWP) has attracted the growing interest of NLP researchers. BIBREF15 proposed a simple method to address arithmetic word problems, but mostly focusing on subsets of problems which only require addition and subtraction. After that, BIBREF16 proposed an algorithmic approach which could handle arithmetic word problems with multiple steps and operations. BIBREF17 further formalized the AWP problem as that of generating and scoring equation trees via integer linear programming. 
BIBREF18 and BIBREF19 proposed sequence to sequence solvers for the AWP problems, which are capable of generating unseen expressions and do not rely on sophisticated manual features. BIBREF20 leveraged deep Q-network to solve the AWP problems, achieving a good balance between effectiveness and efficiency. However, all the existing AWP systems are only trained and validated on small benchmark datasets. BIBREF21 found that the performance of these AWP systems sharply degrades on larger datasets. Moreover, from the perspective of NLP, MRC problems are more challenging than AWP since the passages in MRC are mostly real-world texts which require more complex skills to be understood. Above all, it is nontrivial to adapt most existing AWP models to the MRC scenario. Therefore, we focus on enhancing MRC models with numerical reasoning abilities in this work. <<</Arithmetic Word Problem Solving>>> <<</Related Work>>> <<<Methodology>>> In this section, we will introduce the framework of our model NumNet and provide the details of the proposed numerically-aware graph neural network (NumGNN) for numerical reasoning. <<<Framework>>> An overview of our model NumNet is shown in Figure FIGREF3. We compose our model with encoding module, reasoning module and prediction module. Our major contribution is the reasoning module, which leverages a NumGNN between the encoding module and prediction module to explicitly consider the numerical comparison information and perform numerical reasoning. As NAQANet has been shown effective for handling numerical MRC problem BIBREF6, we leverage it as our base model and mainly focus on the design and integration of the NumGNN in this work. <<<Encoding Module>>> Without loss of generality, we use the encoding components of QANet and NAQANet to encode the question and passage into vector-space representations. Formally, the question $Q$ and passage $P$ are first encoded as: and then the passage-aware question representation and the question-aware passage representation are computed as: where $\texttt {QANet-Emb-Enc}(\cdot )$ and $\texttt {QANet-Att}(\cdot )$ denote the “stacked embedding encoder layer” and “context-query attention layer” of QANet respectively. The former consists of convolution, self-attention and feed-forward layers. The latter is a passage-question attention layer. $\bar{\mathbf {Q}}$ and $\bar{\mathbf {P}}$ are used by the following components. <<</Encoding Module>>> <<<Reasoning Module>>> First we build a heterogeneous directed graph $\mathcal {G}=(\mathbf {V};\mathbf {E})$, whose nodes ($\mathbf {V}$) are corresponding to the numbers in the question and passage, and edges ($\mathbf {E}$) are used to encode numerical relationships among the numbers. The details will be explained in Sec. SECREF19. Then we perform reasoning on the graph based on a graph neural network, which can be formally denoted as: where $\mathbf {W}^M$ is a shared weight matrix, $\mathbf {U}$ is the representations of the nodes corresponding to the numbers, $\texttt {QANet-Mod-Enc}(\cdot )$ is the “model encoder layer” defined in QANet which is similar to $\texttt {QANet-Emb-Enc}(\cdot )$, and the definition of $\texttt {Reasoning}(\cdot )$ will be given in Sec. SECREF23. Finally, as $\mathbf {U}$ only contains the representations of numbers, to tackle span-style answers containing non-numerical words, we concatenate $\mathbf {U}$ with $\mathbf {M}^P$ to produce numerically-aware passage representation $\mathbf {M}_0$. 
Formally, where $[\cdot ;\cdot ]$ denotes matrix concatenation, $\mathbf {W}[k]$ denotes the $k$-th column of a matrix $\mathbf {W}$, $\mathbf {0}$ is a zero vector, $I(i)$ denotes the node index corresponding to the passage word $w_i^p$ which is a number, $\mathbf {W}_0$ is a weight matrix, and $\mathbf {b}_0$ is a bias vector. <<</Reasoning Module>>> <<<Prediction Module>>> Following NAQANet BIBREF6, we divide the answers into four types and use a unique output layer to calculate the conditional answer probability $\Pr (\text{answer}|\text{type})$ for each type : Passage span: The answer is a span of the passage, and the answer probability is defined as the product of the probabilities of the start and end positions. Question span: The answer is a span of the question, and the answer probability is also defined as the product of the probabilities of the start and end positions. Count: The answer is obtained by counting, and it is treated as a multi-class classification problem over ten numbers (0-9), which covers most of the Count type answers in the DROP dataset. Arithmetic expression: The answer is the result of an arithmetic expression. The expression is obtained in three steps: (1) extract all numbers from the passage; (2) assign a sign (plus, minus or zero) for each number; (3) sum the signed numbers . Meanwhile, an extra output layer is also used to predict the probability $\Pr (\text{type})$ of each answer type. At training time, the final answer probability is defined as the joint probability over all feasible answer types, i.e., $\sum _{\text{type}}\Pr (\text{type})\Pr (\text{answer}|\text{type})$. Here, the answer type annotation is not required and the probability $\Pr (\text{type})$ is learnt by the model. At test time, the model first selects the most probable answer type greedily and then predicts the best answer accordingly. Without loss of generality, we leverage the definition of the five output layers in BIBREF6, with $\mathbf {M_0}$ and $\mathbf {Q}$ as inputs. Please refer to the paper for more details due to space limitation. <<</Prediction Module>>> <<<Comparison with NAQANet>>> The major difference between our model and NAQANet is that NAQANet does not have the reasoning module, i.e., $\mathbf {M}_0$ is simply set as $\mathbf {M}^P$. As a result, numbers are treated as common words in NAQANet except in the prediction module, thus NAQANet may struggle to learn the numerical relationships between numbers, and potentially cannot well generalize to unseen numbers. However, as discussed in Sec. SECREF1, the numerical comparison is essential for answering questions requiring numerical reasoning. In our model, the numerical relationships are explicitly represented with the topology of the graph and a NumGNN is used to perform numerical reasoning. Therefore, our NumNet model can handle questions requiring numerical reasoning more effectively, which is verified by the experiments in Sec. SECREF4. <<</Comparison with NAQANet>>> <<</Framework>>> <<<Numerically-aware Graph Construction>>> We regard all numbers from the question and passage as nodes in the graph for reasoning . The set of nodes corresponding to the numbers occurring in question and passage are denoted as $\mathbf {V}^Q$ and $\mathbf {V}^P$ respectively. And we denote all the nodes as $\mathbf {V}=\mathbf {V}^Q\cup \mathbf {V}^P$, and the number corresponding to a node $v\in \mathbf {V}$ as $n(v)$. 
Two sets of edges are considered in this work: Greater Relation Edge ($\overrightarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overrightarrow{e}_{ij}=(v_i, v_j)$ pointing from $v_i$ to $v_j$ will be added to the graph if $n(v_i)>n(v_j)$, which is denoted by a solid arrow in Figure FIGREF3. Lower or Equal Relation Edge ($\overleftarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overleftarrow{e}_{ij}=(v_j, v_i)$ will be added to the graph if $n(v_i)\le n(v_j)$, which is denoted by a dashed arrow in Figure FIGREF3. Theoretically, $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ are complementary to each other. However, as a number may occur several times and represent different facts in a document, we add a distinct node for each occurrence in the graph to prevent potential ambiguity. Therefore, it is more reasonable to use both $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ in order to encode the equality information among nodes. <<</Numerically-aware Graph Construction>>> <<<Numerical Reasoning>>> Having built the graph $\mathcal {G}=(\mathbf {V},\mathbf {E})$, we leverage the NumGNN to perform reasoning, which corresponds to the function $\texttt {Reasoning}(\cdot )$ in Eq. DISPLAY_FORM10. The reasoning process is as follows: <<<Initialization>>> For each node $v^P_i\in \mathbf {V}^P$, its representation is initialized as the corresponding column vector of $\mathbf {M}^P$. Formally, the initial representation is $\mathbf {v}_i^P=\mathbf {M}^P[I^P(v_i^P)]$, where $I^P(v^P_i)$ denotes the word index corresponding to $v_i^P$. Similarly, the initial representation $\mathbf {v}_j^Q$ for a node $v^Q_j\in \mathbf {V}^Q$ is set as the corresponding column vector of $\mathbf {M}^Q$. We denote all the initial node representations as $\mathbf {v}^0=\lbrace \mathbf {v}_i^P\rbrace \cup \lbrace \mathbf {v}_j^Q\rbrace $. <<</Initialization>>> <<<One-step Reasoning>>> Given the graph $\mathcal {G}$ and the node representations $\mathbf {v}$, we use a GNN to perform reasoning in three steps: (1) Node Relatedness Measure: As only a few numbers are generally relevant for answering a question, we compute a weight for each node to bypass irrelevant numbers in reasoning. Formally, the weight for node $v_i$ is computed as: where $\mathbf {W}_v$ is a weight matrix, and $b_v$ is a bias. (2) Message Propagation: As the role a number plays in reasoning is not only decided by itself, but also related to its context, we propagate messages from each node to its neighbors to help perform reasoning. As numbers in the question and passage may play different roles in reasoning, and edges corresponding to different numerical relations should be distinguished, we use relation-specific transform matrices in the message propagation. Formally, we define the following propagation function for calculating the forward-pass update of a node: where $\widetilde{\mathbf {v}}^{\prime }_i$ is the message representation of node $v_i$, $\texttt {r}_{ji}$ is the relation assigned to edge $e_{ji}$, $\mathbf {W}^{\texttt {r}_{ji}}$ are relation-specific transform matrices, and $\mathcal {N}_i=\lbrace j|(v_j,v_i)\in \mathbf {E}\rbrace $ denotes the neighbors of node $v_i$.
For each edge $e_{ji}$, $\texttt {r}_{ji}$ is determined by the following two attributes: Number relation: $>$ or $\le $; Node types: the two nodes of the edge corresponding to two numbers that: (1) both from the question ($\text{q-q}$); (2) both from the passage ($\text{p-p}$); (3) from the question and the passage respectively ($\text{q-p}$); (4) from the passage and the question respectively ($\text{p-q}$). Formally, $\texttt {r}_{ij}\in \lbrace >,\le \rbrace \times \lbrace \text{q-q},\text{p-p},\text{q-p},\text{p-q}\rbrace $. (3) Node Representation Update: As the message representation obtained in the previous step only contains information from the neighbors, it needs to be fused with the node representation to combine with the information carried by the node itself, which is performed as: where $\mathbf {W}_f$ is a weight matrix, and $\mathbf {b}_f$ is a bias vector. We denote the entire one-step reasoning process (Eq. DISPLAY_FORM26-DISPLAY_FORM30) as a single function As the graph $\mathcal {G}$ constructed in Sec. SECREF19 has encoded the numerical relations via its topology, the reasoning process is numerically-aware. <<</One-step Reasoning>>> <<<Multi-step Reasoning>>> By single-step reasoning, we can only infer relations between adjacent nodes. However, relations between multiple nodes may be required for certain tasks, e.g., sorting. Therefore, it is essential to perform multi-step reasoning, which can be done as follows: where $t\ge 1$. Suppose we perform $K$ steps of reasoning, $\mathbf {v}^K$ is used as $\mathbf {U}$ in Eq. DISPLAY_FORM10. <<</Multi-step Reasoning>>> <<</Numerical Reasoning>>> <<</Methodology>>> <<<Experiments>>> <<<Dataset and Evaluation Metrics>>> We evaluate our proposed model on DROP dataset BIBREF6, which is a public numerical MRC dataset. The DROP dataset is constructed by crowd-sourcing, which asks the annotators to generate question-answer pairs according to the given Wikipedia passages, which require numerical reasoning such as addition, counting, or sorting over numbers in the passages. There are $77,409$ training samples, $9,536$ development samples and $9,622$ testing samples in the dataset. In this paper, we adopt two metrics including Exact Match (EM) and numerically-focused F1 scores to evaluate our model following BIBREF6. The numerically-focused F1 is set to be 0 when the predicted answer is mismatched for those questions with the numeric golden answer. <<</Dataset and Evaluation Metrics>>> <<<Baselines>>> For comparison, we select several public models as baselines including semantic parsing models: [topsep=2pt, itemsep=0pt] Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations; OpenIE BIBREF6, KDG with open information extraction based sentence representations; SRL BIBREF6, KDG with semantic role labeling based sentence representations; and traditional MRC models: [topsep=2pt, itemsep=0pt] BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage; QANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage; BERT BIBREF23, a pre-trained bidirectional Transformer-based language model which achieves state-of-the-art performance on lots of public MRC datasets recently; and numerical MRC models: [topsep=2pt, itemsep=0pt] NAQANet BIBREF6, a numerical version of QANet model. 
NAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real numbers (e.g., “2.5”), richer arithmetic expressions, data augmentation, etc. The enhancements are also used in our NumNet model and the details are given in the Appendix. <<</Baselines>>> <<<Experimental Settings>>> In this paper, we tune our model on the development set and use a grid search to determine the optimal parameters. The dimensions of all the representations (e.g., $\mathbf {Q}$, $\mathbf {P}$, $\mathbf {M}^Q$, $\mathbf {M}^P$, $\mathbf {U}$, $\mathbf {M}_0^{\prime }$, $\mathbf {M}_0$ and $\mathbf {v}$) are set to 128. If not specified, the reasoning step $K$ is set to 3. Since other parameters have little effect on the results, we simply follow the settings used in BIBREF6. We use the Adam optimizer BIBREF24 with $\beta _1=0.8$, $\beta _2=0.999$, $\epsilon =10^{-7}$ to minimize the objective function. The learning rate is $5 \times 10^{-4}$, the L2 weight decay $\lambda $ is $10^{-7}$ and the maximum norm value of gradient clipping is 5. We also apply an exponential moving average with a decay rate of $0.9999$ on all trainable variables. The model is trained with a batch size of 16 for 40 epochs. Passages and questions are trimmed to 400 and 50 tokens respectively during training, and trimmed to $1,000$ and 100 tokens respectively during prediction. <<</Experimental Settings>>> <<<Overall Results>>> The performance of our NumNet model and other baselines on the DROP dataset is shown in Table TABREF47. From the results, we can observe that: (1) Our NumNet model achieves better results on both the development and testing sets of the DROP dataset as compared to semantic parsing-based models, traditional MRC models and even the numerical MRC models NAQANet and NAQANet+. The reason is that our NumNet model can make full use of the numerical comparison information over numbers in both question and passage via the proposed NumGNN module. (2) Our implemented NAQANet+ has a much better performance compared to the original version of NAQANet. It verifies the effectiveness of our proposed enhancements for the baseline. <<</Overall Results>>> <<<Effect of GNN Structure>>> In this part, we investigate the effect of different GNN structures on the DROP development set. The results are shown in Table TABREF51. “Comparison”, “Number” and “ALL” correspond to the comparing-question subset, the number-type answer subset, and the entire development set, respectively. If we replace the proposed numerically-aware graph (Sec. SECREF19) with a fully connected graph, our model falls back to a traditional GNN, denoted as “GNN” in the table. Moreover, “- question num” denotes that the numbers in the question are not included in the graph, and “- $\le $ type edge” and “- $>$ type edge” denote that edges of the $\le $ and $>$ types are not adopted, respectively. As shown in Table TABREF51, our proposed NumGNN leads to statistically significant improvements compared to the traditional GNN on both EM and F1 scores, especially for comparing questions. It indicates that considering the comparing information over numbers could effectively help the numerical reasoning for comparing questions. Moreover, we find that the numbers in the question are often related to the numerical reasoning for answering the question, thus considering numbers in the question in NumGNN achieves better performance. The results also show that encoding the “greater relation” and “lower or equal relation” simultaneously in the graph benefits our model.
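As a concrete reference for the Experimental Settings above, a minimal PyTorch-style sketch of the described optimization setup is given below; the model and loss are placeholders, so only the optimizer, gradient-clipping, and moving-average hyperparameters mirror the text.

```python
# Minimal sketch of the training configuration described above (Adam with
# beta1=0.8, beta2=0.999, eps=1e-7, lr=5e-4, L2 weight decay 1e-7, gradient
# clipping at norm 5, and an EMA with decay 0.9999). The model here is a
# placeholder; the real NumNet encoder/GNN/prediction modules are assumed.
import torch
import torch.nn as nn

model = nn.Linear(128, 128)                      # stand-in for the full model
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4,
                             betas=(0.8, 0.999), eps=1e-7, weight_decay=1e-7)
ema = {n: p.detach().clone() for n, p in model.named_parameters()}
EMA_DECAY, MAX_GRAD_NORM = 0.9999, 5.0

def train_step(batch_x, batch_y):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(batch_x), batch_y)   # placeholder loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), MAX_GRAD_NORM)
    optimizer.step()
    with torch.no_grad():                          # exponential moving average
        for n, p in model.named_parameters():
            ema[n].mul_(EMA_DECAY).add_(p, alpha=1 - EMA_DECAY)
    return loss.item()

# Toy loop; the paper trains with a batch size of 16 for 40 epochs.
for _ in range(3):
    x, y = torch.randn(16, 128), torch.randn(16, 128)
    print(train_step(x, y))
```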
<<</Effect of GNN Structure>>> <<<Effect of GNN Layer Number>>> The number of NumGNN layers represents the numerical reasoning ability of our models. A $K$-layer version has the ability for $K$-step numerical inference. In this part, we additionally perform experiments to understand the effect of the number of NumGNN layers. From Figure FIGREF52, we can observe that: (1) The 2-layer version of NumNet achieves the best performance for the comparing questions. From careful analysis, we find that most comparing questions only require at most 2-step reasoning (e.g., “Who was the second oldest player in the MLB, Clemens or Franco?”), and therefore the 3-layer version of NumNet is more complex but brings no gains for these questions. (2) The performance of our NumNet model on the overall development set is improved consistently as the number of GNN layers increases. The reason is that some of the numerical questions require reasoning over many numbers in the passage, which could benefit from the multi-step reasoning ability of a multi-layer GNN. However, further investigation shows that the performance gain is not stable when $K\ge 4$. We believe it is due to the intrinsic over-smoothing problem of GNNs BIBREF25. <<</Effect of GNN Layer Number>>> <<<Case Study>>> In Table TABREF53, we further give some examples to show why incorporating comparing information over numbers in the passage could help numerical reasoning in MRC. For the first case, we observe that NAQANet+ gives a wrong prediction, and we find that NAQANet+ will give the same prediction for the question “Which age group is smaller: under the age of 18 or 18 and 24?”. The reason is that NAQANet+ cannot distinguish which of $10.1\%$ and $56.2\%$ is larger. For the second case, NAQANet+ cannot recognize that the second longest field goal is 22 yards and also gives a wrong prediction. For these two cases, our NumNet model could give the correct answer through numerical reasoning, which indicates the effectiveness of our NumNet model. <<</Case Study>>> <<<Error Analysis>>> To investigate how well our NumNet model handles sorting/comparison questions and better understand the remaining challenges, we perform an error analysis on a random sample of NumNet predictions. We find that: (1) Our NumNet model can answer about 76% of sorting/comparison questions correctly, which indicates that our NumNet model has achieved numerical reasoning ability to some extent. (2) Among the incorrectly answered sorting/comparison questions, the most common ones (26%) are those whose golden answers are multiple nonadjacent spans (row 1 in Table TABREF54), and the second most common ones (19%) are those involving comparison with an intermediate number that does not literally occur in the document/question but has to be derived from counting or an arithmetic operation (row 1 in Table TABREF54). <<</Error Analysis>>> <<<Discussion>>> By combining the numerically-aware graph and the NumGNN together, our NumNet model achieves numerical reasoning ability. On the one hand, the numerically-aware graph encodes numbers as nodes and the relationships between them as edges, which is required for numerical comparison. On the other hand, through one-step reasoning, our NumGNN could perform comparison and identify the numerical condition. After multiple-step reasoning, our NumGNN could further perform sorting.
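As a toy illustration of why a small number of comparison steps suffices for questions like “second longest”, the snippet below ranks the field-goal values from the earlier example by counting, for each value, how many others are greater; this is only an intuition aid and not the model's actual mechanism.

```python
# Toy illustration (not the paper's model): with "greater-than" relations made
# explicit, one hop of counting greater neighbors ranks the field-goal example
# {49, 47, 36, 31, 22}; the "second longest" is the value with exactly one
# greater neighbor, matching the 2-step intuition discussed above.
values = [49, 47, 36, 31, 22]

def greater_count(values):
    """For each value, count how many other values are greater (one 'hop')."""
    return [sum(v_other > v for v_other in values) for v in values]

ranks = greater_count(values)
second_longest = values[ranks.index(1)]
print(second_longest)   # 47, i.e. the second longest field goal
```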
However, since the numerically-aware graph is pre-defined, our NumNet is not applicable to the case where an intermediate number has to be derived (e.g., from an arithmetic operation) in the reasoning process, which is a major limitation of our model. <<</Discussion>>> <<</Experiments>>> <<<Conclusion and Future Work>>> Numerical reasoning skills such as addition, subtraction, sorting and counting are naturally required by machine reading comprehension (MRC) problems in practice. Nevertheless, these skills are not taken into account explicitly by most existing MRC models. In this work, we propose a numerical MRC model named NumNet which performs explicit numerical reasoning while reading the passages. To be specific, NumNet encodes the numerical relations among numbers in the question and passage into a graph as its topology, and leverages a numerically-aware graph neural network to perform numerical reasoning on the graph. Our NumNet model outperforms strong baselines by a large margin on the DROP dataset. In the future, we will explore the following directions: (1) As we use a pre-defined reasoning graph in our model, it is incapable of handling reasoning processes that involve intermediate numbers not presented in the graph. How to incorporate a dynamic graph into our model is an interesting problem. (2) Compared with methods proposed for arithmetic word problems (AWPs), our model has better natural language understanding ability. However, the methods for AWPs can handle much richer arithmetic expressions. Therefore, how to combine both of their abilities to develop a more powerful numerical MRC model is an interesting future direction. (3) Symbolic reasoning plays a crucial role in human reading comprehension. Our work integrates numerical reasoning, which is a special case of symbolic reasoning, into traditional MRC systems. How to incorporate more sophisticated symbolic reasoning abilities into MRC systems is also a valuable future direction. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Introduction, Conclusion and Future Work" ], "type": "disordered_section" }
1910.06701
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> NumNet: Machine Reading Comprehension with Numerical Reasoning <<<Abstract>>> Numerical reasoning, such as addition, subtraction, sorting and counting is a critical skill in human's reading comprehension, which has not been well considered in existing machine reading comprehension (MRC) systems. To address this issue, we propose a numerical MRC model named as NumNet, which utilizes a numerically-aware graph neural network to consider the comparing information and performs numerical reasoning over numbers in the question and passage. Our system achieves an EM-score of 64.56% on the DROP dataset, outperforming all existing machine reading comprehension models by considering the numerical relations among numbers. <<</Abstract>>> <<<Introduction>>> Machine reading comprehension (MRC) aims to infer the answer to a question given the document. In recent years, researchers have proposed lots of MRC models BIBREF0, BIBREF1, BIBREF2, BIBREF3 and these models have achieved remarkable results in various public benchmarks such as SQuAD BIBREF4 and RACE BIBREF5. The success of these models is due to two reasons: (1) Multi-layer architectures which allow these models to read the document and the question iteratively for reasoning; (2) Attention mechanisms which would enable these models to focus on the part related to the question in the document. However, most of existing MRC models are still weak in numerical reasoning such as addition, subtraction, sorting and counting BIBREF6, which are naturally required when reading financial news, scientific articles, etc. BIBREF6 proposed a numerically-aware QANet (NAQANet) model, which divides the answer generation for numerical MRC into three types: (1) extracting spans; (2) counting; (3) addition or subtraction over numbers. NAQANet makes a pioneering attempt to answer numerical questions but still does not explicitly consider numerical reasoning. To tackle this problem, we introduce a novel model NumNet that integrates numerical reasoning into existing MRC models. A key problem to answer questions requiring numerical reasoning is how to perform numerical comparison in MRC systems, which is crucial for two common types of questions: (1) Numerical Comparison: The answers of the questions can be directly obtained via performing numerical comparison, such as sorting and comparison, in the documents. For example, in Table TABREF1, for the first question, if the MRC system knows the fact that “$49>47>36>31>22$”, it could easily extract that the second longest field goal is 47-yard. (2) Numerical Condition: The answers of the questions cannot be directly obtained through simple numerical comparison in the documents, but often require numerical comparison for understanding the text. For example, for the second question in Table TABREF1, an MRC system needs to know which age group made up more than 7% of the population to count the group number. Hence, our NumNet model considers numerical comparing information among numbers when answering numerical questions. 
As shown in Figure FIGREF3, NumNet first encodes both the question and passages through an encoding module consisting of convolution layers, self-attention layers and feed-forward layers as well as a passage-question attention layer. After that, we feed the question and passage representations into a numerically-aware graph neural network (NumGNN) to further integrate the comparison information among numbers into their representations. Finally, we utilize the numerically-aware representation of passages to infer the answer to the question. The experimental results on a public numerical MRC dataset DROP BIBREF6 show that our NumNet model achieves significant and consistent improvement as compared to all baseline methods by explicitly performing numerical reasoning over numbers in the question and passage. In particular, we show that our model could effectively deal with questions requiring sorting with multi-layer NumGNN. The source code of our paper is available at https://github.com/ranqiu92/NumNet. <<</Introduction>>> <<<Related Work>>> <<<Machine Reading Comprehension>>> Machine reading comprehension (MRC) has become an important research area in NLP. In recent years, researchers have published a large number of annotated MRC datasets such as CNN/Daily Mail BIBREF7, SQuAD BIBREF4, RACE BIBREF5, TriviaQA BIBREF8 and so on. With the blooming of available large-scale MRC datasets, a great number of neural network-based MRC models have been proposed to answer questions for a given document including Attentive Reader BIBREF9, BiDAF BIBREF3, Interactive AoA Reader BIBREF2, Gated Attention Reader BIBREF1, R-Net BIBREF10, DCN BIBREF11, QANet BIBREF12, and achieve promising results in most existing public MRC datasets. Despite the success of neural network-based MRC models, researchers began to analyze the data and rethink to what extent we have solved the problem of MRC. Some works BIBREF0, BIBREF13, BIBREF14 classify the reasoning skills required to answer the questions into the following types: (1) Exact matching/Paraphrasing; (2) Summary; (3) Logic reasoning; (4) Utilizing external knowledge; (5) Numerical reasoning. They found that most existing MRC models are focusing on dealing with the first three types of questions. However, all these models suffer from problems when answering the questions requiring numerical reasoning. To the best of our knowledge, our work is the first one that explicitly incorporates numerical reasoning into the MRC system. The most relevant work to ours is NAQANet BIBREF6, which adapts the output layer of QANet BIBREF12 to support predicting answers based on counting and addition/subtraction over numbers. However, it does not consider numerical reasoning explicitly during encoding or inference. <<</Machine Reading Comprehension>>> <<<Arithmetic Word Problem Solving>>> Recently, understanding and solving arithmetic word problems (AWP) has attracted the growing interest of NLP researchers. BIBREF15 proposed a simple method to address arithmetic word problems, but mostly focusing on subsets of problems which only require addition and subtraction. After that, BIBREF16 proposed an algorithmic approach which could handle arithmetic word problems with multiple steps and operations. BIBREF17 further formalized the AWP problem as that of generating and scoring equation trees via integer linear programming. 
BIBREF18 and BIBREF19 proposed sequence to sequence solvers for the AWP problems, which are capable of generating unseen expressions and do not rely on sophisticated manual features. BIBREF20 leveraged deep Q-network to solve the AWP problems, achieving a good balance between effectiveness and efficiency. However, all the existing AWP systems are only trained and validated on small benchmark datasets. BIBREF21 found that the performance of these AWP systems sharply degrades on larger datasets. Moreover, from the perspective of NLP, MRC problems are more challenging than AWP since the passages in MRC are mostly real-world texts which require more complex skills to be understood. Above all, it is nontrivial to adapt most existing AWP models to the MRC scenario. Therefore, we focus on enhancing MRC models with numerical reasoning abilities in this work. <<</Arithmetic Word Problem Solving>>> <<</Related Work>>> <<<Methodology>>> In this section, we will introduce the framework of our model NumNet and provide the details of the proposed numerically-aware graph neural network (NumGNN) for numerical reasoning. <<<Framework>>> An overview of our model NumNet is shown in Figure FIGREF3. We compose our model with encoding module, reasoning module and prediction module. Our major contribution is the reasoning module, which leverages a NumGNN between the encoding module and prediction module to explicitly consider the numerical comparison information and perform numerical reasoning. As NAQANet has been shown effective for handling numerical MRC problem BIBREF6, we leverage it as our base model and mainly focus on the design and integration of the NumGNN in this work. <<<Encoding Module>>> Without loss of generality, we use the encoding components of QANet and NAQANet to encode the question and passage into vector-space representations. Formally, the question $Q$ and passage $P$ are first encoded as: and then the passage-aware question representation and the question-aware passage representation are computed as: where $\texttt {QANet-Emb-Enc}(\cdot )$ and $\texttt {QANet-Att}(\cdot )$ denote the “stacked embedding encoder layer” and “context-query attention layer” of QANet respectively. The former consists of convolution, self-attention and feed-forward layers. The latter is a passage-question attention layer. $\bar{\mathbf {Q}}$ and $\bar{\mathbf {P}}$ are used by the following components. <<</Encoding Module>>> <<<Reasoning Module>>> First we build a heterogeneous directed graph $\mathcal {G}=(\mathbf {V};\mathbf {E})$, whose nodes ($\mathbf {V}$) are corresponding to the numbers in the question and passage, and edges ($\mathbf {E}$) are used to encode numerical relationships among the numbers. The details will be explained in Sec. SECREF19. Then we perform reasoning on the graph based on a graph neural network, which can be formally denoted as: where $\mathbf {W}^M$ is a shared weight matrix, $\mathbf {U}$ is the representations of the nodes corresponding to the numbers, $\texttt {QANet-Mod-Enc}(\cdot )$ is the “model encoder layer” defined in QANet which is similar to $\texttt {QANet-Emb-Enc}(\cdot )$, and the definition of $\texttt {Reasoning}(\cdot )$ will be given in Sec. SECREF23. Finally, as $\mathbf {U}$ only contains the representations of numbers, to tackle span-style answers containing non-numerical words, we concatenate $\mathbf {U}$ with $\mathbf {M}^P$ to produce numerically-aware passage representation $\mathbf {M}_0$. 
Formally, where $[\cdot ;\cdot ]$ denotes matrix concatenation, $\mathbf {W}[k]$ denotes the $k$-th column of a matrix $\mathbf {W}$, $\mathbf {0}$ is a zero vector, $I(i)$ denotes the node index corresponding to the passage word $w_i^p$ which is a number, $\mathbf {W}_0$ is a weight matrix, and $\mathbf {b}_0$ is a bias vector. <<</Reasoning Module>>> <<<Prediction Module>>> Following NAQANet BIBREF6, we divide the answers into four types and use a unique output layer to calculate the conditional answer probability $\Pr (\text{answer}|\text{type})$ for each type : Passage span: The answer is a span of the passage, and the answer probability is defined as the product of the probabilities of the start and end positions. Question span: The answer is a span of the question, and the answer probability is also defined as the product of the probabilities of the start and end positions. Count: The answer is obtained by counting, and it is treated as a multi-class classification problem over ten numbers (0-9), which covers most of the Count type answers in the DROP dataset. Arithmetic expression: The answer is the result of an arithmetic expression. The expression is obtained in three steps: (1) extract all numbers from the passage; (2) assign a sign (plus, minus or zero) for each number; (3) sum the signed numbers . Meanwhile, an extra output layer is also used to predict the probability $\Pr (\text{type})$ of each answer type. At training time, the final answer probability is defined as the joint probability over all feasible answer types, i.e., $\sum _{\text{type}}\Pr (\text{type})\Pr (\text{answer}|\text{type})$. Here, the answer type annotation is not required and the probability $\Pr (\text{type})$ is learnt by the model. At test time, the model first selects the most probable answer type greedily and then predicts the best answer accordingly. Without loss of generality, we leverage the definition of the five output layers in BIBREF6, with $\mathbf {M_0}$ and $\mathbf {Q}$ as inputs. Please refer to the paper for more details due to space limitation. <<</Prediction Module>>> <<<Comparison with NAQANet>>> The major difference between our model and NAQANet is that NAQANet does not have the reasoning module, i.e., $\mathbf {M}_0$ is simply set as $\mathbf {M}^P$. As a result, numbers are treated as common words in NAQANet except in the prediction module, thus NAQANet may struggle to learn the numerical relationships between numbers, and potentially cannot well generalize to unseen numbers. However, as discussed in Sec. SECREF1, the numerical comparison is essential for answering questions requiring numerical reasoning. In our model, the numerical relationships are explicitly represented with the topology of the graph and a NumGNN is used to perform numerical reasoning. Therefore, our NumNet model can handle questions requiring numerical reasoning more effectively, which is verified by the experiments in Sec. SECREF4. <<</Comparison with NAQANet>>> <<</Framework>>> <<<Numerically-aware Graph Construction>>> We regard all numbers from the question and passage as nodes in the graph for reasoning . The set of nodes corresponding to the numbers occurring in question and passage are denoted as $\mathbf {V}^Q$ and $\mathbf {V}^P$ respectively. And we denote all the nodes as $\mathbf {V}=\mathbf {V}^Q\cup \mathbf {V}^P$, and the number corresponding to a node $v\in \mathbf {V}$ as $n(v)$. 
Two sets of edges are considered in this work: Greater Relation Edge ($\overrightarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overrightarrow{e}_{ij}=(v_i, v_j)$ pointing from $v_i$ to $v_j$ will be added to the graph if $n(v_i)>n(v_j)$, which is denoted as solid arrow in Figure FIGREF3. Lower or Equal Relation Edge ($\overleftarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overleftarrow{e}_{ij}=(v_j, v_i)$ will be added to the graph if $n(v_i)\le n(v_j)$, which is denoted as dashed arrow in Figure FIGREF3. Theoretically, $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ are complement to each other . However, as a number may occur several times and represent different facts in a document, we add a distinct node for each occurrence in the graph to prevent potential ambiguity. Therefore, it is more reasonable to use both $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ in order to encode the equal information among nodes. <<</Numerically-aware Graph Construction>>> <<<Numerical Reasoning>>> As we built the graph $\mathcal {G}=(\mathbf {V},\mathbf {E})$, we leverage NumGNN to perform reasoning, which is corresponding to the function $\texttt {Reasoning}(\cdot )$ in Eq. DISPLAY_FORM10. The reasoning process is as follows: <<<Initialization>>> For each node $v^P_i\in \mathbf {V}^P$, its representation is initialized as the corresponding column vector of $\mathbf {M}^P$. Formally, the initial representation is $\mathbf {v}_i^P=\mathbf {M}^P[I^P(v_i^P)]$, where $I^P(v^P_i)$ denotes the word index corresponding to $v_i^P$. Similarly, the initial representation $\mathbf {v}_j^Q$ for a node $v^Q_j\in \mathbf {V}^Q$ is set as the corresponding column vector of $\mathbf {M}^Q$. We denote all the initial node representations as $\mathbf {v}^0=\lbrace \mathbf {v}_i^P\rbrace \cup \lbrace \mathbf {v}_j^Q\rbrace $. <<</Initialization>>> <<<One-step Reasoning>>> Given the graph $\mathcal {G}$ and the node representations $\mathbf {v}$, we use a GNN to perform reasoning in three steps: (1) Node Relatedness Measure: As only a few numbers are relevant for answering a question generally, we compute a weight for each node to by-pass irrelevant numbers in reasoning. Formally, the weight for node $v_i$ is computed as: where $\mathbf {W}_v$ is a weight matrix, and $b_v$ is a bias. (2) Message Propagation: As the role a number plays in reasoning is not only decided by itself, but also related to the context, we propagate messages from each node to its neighbors to help to perform reasoning. As numbers in question and passage may play different roles in reasoning and edges corresponding to different numerical relations should be distinguished, we use relation-specific transform matrices in the message propagation. Formally, we define the following propagation function for calculating the forward-pass update of a node: where $\widetilde{\mathbf {v}}^{\prime }_i$ is the message representation of node $v_i$, $\texttt {r}_{ji}$ is the relation assigned to edge $e_{ji}$, $\mathbf {W}^{\texttt {r}_{ji}}$ are relation-specific transform matrices, and $\mathcal {N}_i=\lbrace j|(v_j,v_i)\in \mathbf {E}\rbrace $ is the neighbors of node $v_i$. 
For each edge $e_{ji}$, $\texttt {r}_{ji}$ is determined by the following two attributes: Number relation: $>$ or $\le $; Node types: the two nodes of the edge corresponding to two numbers that: (1) both from the question ($\text{q-q}$); (2) both from the passage ($\text{p-p}$); (3) from the question and the passage respectively ($\text{q-p}$); (4) from the passage and the question respectively ($\text{p-q}$). Formally, $\texttt {r}_{ij}\in \lbrace >,\le \rbrace \times \lbrace \text{q-q},\text{p-p},\text{q-p},\text{p-q}\rbrace $. (3) Node Representation Update: As the message representation obtained in the previous step only contains information from the neighbors, it needs to be fused with the node representation to combine with the information carried by the node itself, which is performed as: where $\mathbf {W}_f$ is a weight matrix, and $\mathbf {b}_f$ is a bias vector. We denote the entire one-step reasoning process (Eq. DISPLAY_FORM26-DISPLAY_FORM30) as a single function As the graph $\mathcal {G}$ constructed in Sec. SECREF19 has encoded the numerical relations via its topology, the reasoning process is numerically-aware. <<</One-step Reasoning>>> <<<Multi-step Reasoning>>> By single-step reasoning, we can only infer relations between adjacent nodes. However, relations between multiple nodes may be required for certain tasks, e.g., sorting. Therefore, it is essential to perform multi-step reasoning, which can be done as follows: where $t\ge 1$. Suppose we perform $K$ steps of reasoning, $\mathbf {v}^K$ is used as $\mathbf {U}$ in Eq. DISPLAY_FORM10. <<</Multi-step Reasoning>>> <<</Numerical Reasoning>>> <<</Methodology>>> <<<Experiments>>> <<<Dataset and Evaluation Metrics>>> We evaluate our proposed model on DROP dataset BIBREF6, which is a public numerical MRC dataset. The DROP dataset is constructed by crowd-sourcing, which asks the annotators to generate question-answer pairs according to the given Wikipedia passages, which require numerical reasoning such as addition, counting, or sorting over numbers in the passages. There are $77,409$ training samples, $9,536$ development samples and $9,622$ testing samples in the dataset. In this paper, we adopt two metrics including Exact Match (EM) and numerically-focused F1 scores to evaluate our model following BIBREF6. The numerically-focused F1 is set to be 0 when the predicted answer is mismatched for those questions with the numeric golden answer. <<</Dataset and Evaluation Metrics>>> <<<Baselines>>> For comparison, we select several public models as baselines including semantic parsing models: [topsep=2pt, itemsep=0pt] Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations; OpenIE BIBREF6, KDG with open information extraction based sentence representations; SRL BIBREF6, KDG with semantic role labeling based sentence representations; and traditional MRC models: [topsep=2pt, itemsep=0pt] BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage; QANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage; BERT BIBREF23, a pre-trained bidirectional Transformer-based language model which achieves state-of-the-art performance on lots of public MRC datasets recently; and numerical MRC models: [topsep=2pt, itemsep=0pt] NAQANet BIBREF6, a numerical version of QANet model. 
NAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real number (e.g. “2.5”), richer arithmetic expression, data augmentation, etc. The enhancements are also used in our NumNet model and the details are given in the Appendix. <<</Baselines>>> <<<Experimental Settings>>> In this paper, we tune our model on the development set and use a grid search to determine the optimal parameters. The dimensions of all the representations (e.g., $\mathbf {Q}$, $\mathbf {P}$, $\mathbf {M}^Q$, $\mathbf {M}^P$, $\mathbf {U}$, $\mathbf {M}_0^{\prime }$, $\mathbf {M}_0$ and $\mathbf {v}$) are set to 128. If not specified, the reasoning step $K$ is set to 3. Since other parameters have little effect on the results, we simply follow the settings used in BIBREF6. We use the Adam optimizer BIBREF24 with $\beta _1=0.8$, $\beta _2=0.999$, $\epsilon =10^{-7}$ to minimize the objective function. The learning rate is $5 \times 10^{-4}$, L2 weight decay $\lambda $ is $10^{-7}$ and the maximum norm value of gradient clipping is 5. We also apply exponential moving average with a decay rate $0.9999$ on all trainable variables. The model is trained with a batch size of 16 for 40 epochs. Passages and questions are trimmed to 400 and 50 tokens respectively during training, and trimmed to $1,000$ and 100 tokens respectively during prediction . <<</Experimental Settings>>> <<<Overall Results>>> The performance of our NumNet model and other baselines on DROP dataset are shown in Table TABREF47. From the results, we can observe that: (1) Our NumNet model achieves better results on both the development and testing sets on DROP dataset as compared to semantic parsing-based models, traditional MRC models and even numerical MRC models NAQANet and NAQANet+. The reason is that our NumNet model can make full use of the numerical comparison information over numbers in both question and passage via the proposed NumGNN module. (2) Our implemented NAQANet+ has a much better performance compared to the original version of NAQANet. It verifies the effectiveness of our proposed enhancements for baseline. <<</Overall Results>>> <<<Effect of GNN Structure>>> In this part, we investigate the effect of different GNN structures on the DROP development set. The results are shown in Table TABREF51. The “Comparison”, “Number” and “ALL” are corresponding to the comparing question subset , the number-type answer subset, and the entire development set, respectively . If we replace the proposed numerically-aware graph (Sec. SECREF19) with a fully connected graph, our model fallbacks to a traditional GNN, denoted as “GNN” in the table. Moreover, “- question num” denotes the numbers in the question is not included in the graph, and “- $\le $ type edge” and “- $>$ type edge” denote edges of $\le $ and $>$ types are not adopted respectively. As shown in Table TABREF51, our proposed NumGNN leads to statistically significant improvements compared to traditional GNN on both EM and F1 scores especially for comparing questions. It indicates that considering the comparing information over numbers could effectively help the numerical reasoning for comparing questions. Moreover, we find that the numbers in the question are often related to the numerical reasoning for answering the question, thus considering numbers in questions in NumGNN achieves better performance. And the results also justify that encoding “greater relation” and “lower or equal relation” simultaneously in the graph also benefits our model. 
<<</Effect of GNN Structure>>> <<<Effect of GNN Layer Number>>> The number of NumGNN layers represents the numerical reasoning ability of our models. A $K$-layer version has the ability for $K$-step numerical inference. In this part, we additionally perform experiments to understand the values of the numbers of NumGNN layers. From Figure FIGREF52, we could observe that: (1) The 2-layer version of NumNet achieves the best performance for the comparing questions. From careful analysis, we find that most comparing questions only require at most 2-step reasoning (e.g., “Who was the second oldest player in the MLB, Clemens or Franco?”), and therefore the 3-layer version of NumNet is more complex but brings no gains for these questions. (2) The performance of our NumNet model on the overall development set is improved consistently as the number of GNN layers increases. The reason is that some of the numerical questions require reasoning over many numbers in the passage, which could benefit from the multi-step reasoning ability of multi-layer GNN. However, further investigation shows that the performance gain is not stable when $K\ge 4$. We believe it is due to the intrinsic over smoothing problem of GNNs BIBREF25. <<</Effect of GNN Layer Number>>> <<<Case Study>>> We further give some examples to show why incorporating comparing information over numbers in the passage could help numerical reasoning in MRC in Table TABREF53. For the first case, we observe that NAQANet+ gives a wrong prediction, and we find that NAQANet+ will give the same prediction for the question “Which age group is smaller: under the age of 18 or 18 and 24?”. The reason is that NAQANet+ cannot distinguish which one is larger for $10.1\%$ and $56.2\%$. For the second case, NAQANet+ cannot recognize the second longest field goal is 22-yard and also gives a wrong prediction. For these two cases, our NumNet model could give the correct answer through the numeric reasoning, which indicates the effectiveness of our NumNet model. <<</Case Study>>> <<<Error Analysis>>> To investigate how well our NumNet model handles sorting/comparison questions and better understand the remaining challenges, we perform an error analysis on a random sample of NumNet predictions. We find that: (1) Our NumNet model can answer about 76% of sorting/comparison questions correctly, which indicates that our NumNet model has achieved numerical reasoning ability to some extend. (2) Among the incorrectly answered sorting/comparison questions, the most ones (26%) are those whose golden answers are multiple nonadjacent spans (row 1 in Table TABREF54), and the second most ones (19%) are those involving comparison with an intermediate number that does not literally occur in the document/question but has to be derived from counting or arithmetic operation (row 1 in Table TABREF54). <<</Error Analysis>>> <<<Discussion>>> By combining the numerically-aware graph and the NumGNN together, our NumNet model achieves the numerical reasoning ability. On one hand, the numerically-aware graph encodes numbers as nodes and relationships between them as the edges, which is required for numerical comparison. On the other hand, through one-step reasoning, our NumGNN could perform comparison and identify the numerical condition. After multiple-step reasoning, our NumGNN could further perform sorting. 
However, since the numerically-aware graph is pre-defined, our NumNet is not applicable to the case where an intermediate number has to be derived (e.g., from arithmetic operation) in the reasoning process, which is a major limitation of our model. <<</Discussion>>> <<</Experiments>>> <<<Conclusion and Future Work>>> Numerical reasoning skills such as addition, subtraction, sorting and counting are naturally required by machine reading comprehension (MRC) problems in practice. Nevertheless, these skills are not taken into account explicitly for most existing MRC models. In this work, we propose a numerical MRC model named NumNet which performs explicit numerical reasoning while reading the passages. To be specific, NumNet encodes the numerical relations among numbers in the question and passage into a graph as its topology, and leverages a numerically-aware graph neural network to perform numerical reasoning on the graph. Our NumNet model outperforms strong baselines with a large margin on the DROP dataset. In the future, we will explore the following directions: (1)As we use a pre-defined reasoning graph in our model, it is incapable of handling reasoning process which involves intermediate numbers that not presented in the graph. How to incorporate dynamic graph into our model is an interesting problem. (2) Compared with methods proposed for arithmetic word problems (AWPs), our model has better natural language understanding ability. However, the methods for AWPs can handle much richer arithmetic expressions. Therefore, how to combine both of their abilities to develop a more powerful numerical MRC model is an interesting future direction. (3) Symbolic reasoning plays a crucial role in human reading comprehension. Our work integrates numerical reasoning, which is a special case of symbolic reasoning, into traditional MRC systems. How to incorporate more sophisticated symbolic reasoning abilities into MRC systems is also a valuable future direction. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Experiments, Abstract" ], "type": "disordered_section" }
1910.06701
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> NumNet: Machine Reading Comprehension with Numerical Reasoning <<<Abstract>>> Numerical reasoning, such as addition, subtraction, sorting and counting is a critical skill in human's reading comprehension, which has not been well considered in existing machine reading comprehension (MRC) systems. To address this issue, we propose a numerical MRC model named as NumNet, which utilizes a numerically-aware graph neural network to consider the comparing information and performs numerical reasoning over numbers in the question and passage. Our system achieves an EM-score of 64.56% on the DROP dataset, outperforming all existing machine reading comprehension models by considering the numerical relations among numbers. <<</Abstract>>> <<<Introduction>>> Machine reading comprehension (MRC) aims to infer the answer to a question given the document. In recent years, researchers have proposed lots of MRC models BIBREF0, BIBREF1, BIBREF2, BIBREF3 and these models have achieved remarkable results in various public benchmarks such as SQuAD BIBREF4 and RACE BIBREF5. The success of these models is due to two reasons: (1) Multi-layer architectures which allow these models to read the document and the question iteratively for reasoning; (2) Attention mechanisms which would enable these models to focus on the part related to the question in the document. However, most of existing MRC models are still weak in numerical reasoning such as addition, subtraction, sorting and counting BIBREF6, which are naturally required when reading financial news, scientific articles, etc. BIBREF6 proposed a numerically-aware QANet (NAQANet) model, which divides the answer generation for numerical MRC into three types: (1) extracting spans; (2) counting; (3) addition or subtraction over numbers. NAQANet makes a pioneering attempt to answer numerical questions but still does not explicitly consider numerical reasoning. To tackle this problem, we introduce a novel model NumNet that integrates numerical reasoning into existing MRC models. A key problem to answer questions requiring numerical reasoning is how to perform numerical comparison in MRC systems, which is crucial for two common types of questions: (1) Numerical Comparison: The answers of the questions can be directly obtained via performing numerical comparison, such as sorting and comparison, in the documents. For example, in Table TABREF1, for the first question, if the MRC system knows the fact that “$49>47>36>31>22$”, it could easily extract that the second longest field goal is 47-yard. (2) Numerical Condition: The answers of the questions cannot be directly obtained through simple numerical comparison in the documents, but often require numerical comparison for understanding the text. For example, for the second question in Table TABREF1, an MRC system needs to know which age group made up more than 7% of the population to count the group number. Hence, our NumNet model considers numerical comparing information among numbers when answering numerical questions. 
As shown in Figure FIGREF3, NumNet first encodes both the question and passages through an encoding module consisting of convolution layers, self-attention layers and feed-forward layers as well as a passage-question attention layer. After that, we feed the question and passage representations into a numerically-aware graph neural network (NumGNN) to further integrate the comparison information among numbers into their representations. Finally, we utilize the numerically-aware representation of passages to infer the answer to the question. The experimental results on a public numerical MRC dataset DROP BIBREF6 show that our NumNet model achieves significant and consistent improvement as compared to all baseline methods by explicitly performing numerical reasoning over numbers in the question and passage. In particular, we show that our model could effectively deal with questions requiring sorting with multi-layer NumGNN. The source code of our paper is available at https://github.com/ranqiu92/NumNet. <<</Introduction>>> <<<Related Work>>> <<<Machine Reading Comprehension>>> Machine reading comprehension (MRC) has become an important research area in NLP. In recent years, researchers have published a large number of annotated MRC datasets such as CNN/Daily Mail BIBREF7, SQuAD BIBREF4, RACE BIBREF5, TriviaQA BIBREF8 and so on. With the blooming of available large-scale MRC datasets, a great number of neural network-based MRC models have been proposed to answer questions for a given document including Attentive Reader BIBREF9, BiDAF BIBREF3, Interactive AoA Reader BIBREF2, Gated Attention Reader BIBREF1, R-Net BIBREF10, DCN BIBREF11, QANet BIBREF12, and achieve promising results in most existing public MRC datasets. Despite the success of neural network-based MRC models, researchers began to analyze the data and rethink to what extent we have solved the problem of MRC. Some works BIBREF0, BIBREF13, BIBREF14 classify the reasoning skills required to answer the questions into the following types: (1) Exact matching/Paraphrasing; (2) Summary; (3) Logic reasoning; (4) Utilizing external knowledge; (5) Numerical reasoning. They found that most existing MRC models are focusing on dealing with the first three types of questions. However, all these models suffer from problems when answering the questions requiring numerical reasoning. To the best of our knowledge, our work is the first one that explicitly incorporates numerical reasoning into the MRC system. The most relevant work to ours is NAQANet BIBREF6, which adapts the output layer of QANet BIBREF12 to support predicting answers based on counting and addition/subtraction over numbers. However, it does not consider numerical reasoning explicitly during encoding or inference. <<</Machine Reading Comprehension>>> <<<Arithmetic Word Problem Solving>>> Recently, understanding and solving arithmetic word problems (AWP) has attracted the growing interest of NLP researchers. BIBREF15 proposed a simple method to address arithmetic word problems, but mostly focusing on subsets of problems which only require addition and subtraction. After that, BIBREF16 proposed an algorithmic approach which could handle arithmetic word problems with multiple steps and operations. BIBREF17 further formalized the AWP problem as that of generating and scoring equation trees via integer linear programming. 
BIBREF18 and BIBREF19 proposed sequence to sequence solvers for the AWP problems, which are capable of generating unseen expressions and do not rely on sophisticated manual features. BIBREF20 leveraged deep Q-network to solve the AWP problems, achieving a good balance between effectiveness and efficiency. However, all the existing AWP systems are only trained and validated on small benchmark datasets. BIBREF21 found that the performance of these AWP systems sharply degrades on larger datasets. Moreover, from the perspective of NLP, MRC problems are more challenging than AWP since the passages in MRC are mostly real-world texts which require more complex skills to be understood. Above all, it is nontrivial to adapt most existing AWP models to the MRC scenario. Therefore, we focus on enhancing MRC models with numerical reasoning abilities in this work. <<</Arithmetic Word Problem Solving>>> <<</Related Work>>> <<<Methodology>>> In this section, we will introduce the framework of our model NumNet and provide the details of the proposed numerically-aware graph neural network (NumGNN) for numerical reasoning. <<<Framework>>> An overview of our model NumNet is shown in Figure FIGREF3. We compose our model with encoding module, reasoning module and prediction module. Our major contribution is the reasoning module, which leverages a NumGNN between the encoding module and prediction module to explicitly consider the numerical comparison information and perform numerical reasoning. As NAQANet has been shown effective for handling numerical MRC problem BIBREF6, we leverage it as our base model and mainly focus on the design and integration of the NumGNN in this work. <<<Encoding Module>>> Without loss of generality, we use the encoding components of QANet and NAQANet to encode the question and passage into vector-space representations. Formally, the question $Q$ and passage $P$ are first encoded as: and then the passage-aware question representation and the question-aware passage representation are computed as: where $\texttt {QANet-Emb-Enc}(\cdot )$ and $\texttt {QANet-Att}(\cdot )$ denote the “stacked embedding encoder layer” and “context-query attention layer” of QANet respectively. The former consists of convolution, self-attention and feed-forward layers. The latter is a passage-question attention layer. $\bar{\mathbf {Q}}$ and $\bar{\mathbf {P}}$ are used by the following components. <<</Encoding Module>>> <<<Reasoning Module>>> First we build a heterogeneous directed graph $\mathcal {G}=(\mathbf {V};\mathbf {E})$, whose nodes ($\mathbf {V}$) are corresponding to the numbers in the question and passage, and edges ($\mathbf {E}$) are used to encode numerical relationships among the numbers. The details will be explained in Sec. SECREF19. Then we perform reasoning on the graph based on a graph neural network, which can be formally denoted as: where $\mathbf {W}^M$ is a shared weight matrix, $\mathbf {U}$ is the representations of the nodes corresponding to the numbers, $\texttt {QANet-Mod-Enc}(\cdot )$ is the “model encoder layer” defined in QANet which is similar to $\texttt {QANet-Emb-Enc}(\cdot )$, and the definition of $\texttt {Reasoning}(\cdot )$ will be given in Sec. SECREF23. Finally, as $\mathbf {U}$ only contains the representations of numbers, to tackle span-style answers containing non-numerical words, we concatenate $\mathbf {U}$ with $\mathbf {M}^P$ to produce numerically-aware passage representation $\mathbf {M}_0$. 
Formally, where $[\cdot ;\cdot ]$ denotes matrix concatenation, $\mathbf {W}[k]$ denotes the $k$-th column of a matrix $\mathbf {W}$, $\mathbf {0}$ is a zero vector, $I(i)$ denotes the node index corresponding to the passage word $w_i^p$ which is a number, $\mathbf {W}_0$ is a weight matrix, and $\mathbf {b}_0$ is a bias vector. <<</Reasoning Module>>> <<<Prediction Module>>> Following NAQANet BIBREF6, we divide the answers into four types and use a unique output layer to calculate the conditional answer probability $\Pr (\text{answer}|\text{type})$ for each type : Passage span: The answer is a span of the passage, and the answer probability is defined as the product of the probabilities of the start and end positions. Question span: The answer is a span of the question, and the answer probability is also defined as the product of the probabilities of the start and end positions. Count: The answer is obtained by counting, and it is treated as a multi-class classification problem over ten numbers (0-9), which covers most of the Count type answers in the DROP dataset. Arithmetic expression: The answer is the result of an arithmetic expression. The expression is obtained in three steps: (1) extract all numbers from the passage; (2) assign a sign (plus, minus or zero) for each number; (3) sum the signed numbers . Meanwhile, an extra output layer is also used to predict the probability $\Pr (\text{type})$ of each answer type. At training time, the final answer probability is defined as the joint probability over all feasible answer types, i.e., $\sum _{\text{type}}\Pr (\text{type})\Pr (\text{answer}|\text{type})$. Here, the answer type annotation is not required and the probability $\Pr (\text{type})$ is learnt by the model. At test time, the model first selects the most probable answer type greedily and then predicts the best answer accordingly. Without loss of generality, we leverage the definition of the five output layers in BIBREF6, with $\mathbf {M_0}$ and $\mathbf {Q}$ as inputs. Please refer to the paper for more details due to space limitation. <<</Prediction Module>>> <<<Comparison with NAQANet>>> The major difference between our model and NAQANet is that NAQANet does not have the reasoning module, i.e., $\mathbf {M}_0$ is simply set as $\mathbf {M}^P$. As a result, numbers are treated as common words in NAQANet except in the prediction module, thus NAQANet may struggle to learn the numerical relationships between numbers, and potentially cannot well generalize to unseen numbers. However, as discussed in Sec. SECREF1, the numerical comparison is essential for answering questions requiring numerical reasoning. In our model, the numerical relationships are explicitly represented with the topology of the graph and a NumGNN is used to perform numerical reasoning. Therefore, our NumNet model can handle questions requiring numerical reasoning more effectively, which is verified by the experiments in Sec. SECREF4. <<</Comparison with NAQANet>>> <<</Framework>>> <<<Numerically-aware Graph Construction>>> We regard all numbers from the question and passage as nodes in the graph for reasoning . The set of nodes corresponding to the numbers occurring in question and passage are denoted as $\mathbf {V}^Q$ and $\mathbf {V}^P$ respectively. And we denote all the nodes as $\mathbf {V}=\mathbf {V}^Q\cup \mathbf {V}^P$, and the number corresponding to a node $v\in \mathbf {V}$ as $n(v)$. 
Two sets of edges are considered in this work: Greater Relation Edge ($\overrightarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overrightarrow{e}_{ij}=(v_i, v_j)$ pointing from $v_i$ to $v_j$ will be added to the graph if $n(v_i)>n(v_j)$, which is denoted as solid arrow in Figure FIGREF3. Lower or Equal Relation Edge ($\overleftarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overleftarrow{e}_{ij}=(v_j, v_i)$ will be added to the graph if $n(v_i)\le n(v_j)$, which is denoted as dashed arrow in Figure FIGREF3. Theoretically, $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ are complement to each other . However, as a number may occur several times and represent different facts in a document, we add a distinct node for each occurrence in the graph to prevent potential ambiguity. Therefore, it is more reasonable to use both $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ in order to encode the equal information among nodes. <<</Numerically-aware Graph Construction>>> <<<Numerical Reasoning>>> As we built the graph $\mathcal {G}=(\mathbf {V},\mathbf {E})$, we leverage NumGNN to perform reasoning, which is corresponding to the function $\texttt {Reasoning}(\cdot )$ in Eq. DISPLAY_FORM10. The reasoning process is as follows: <<<Initialization>>> For each node $v^P_i\in \mathbf {V}^P$, its representation is initialized as the corresponding column vector of $\mathbf {M}^P$. Formally, the initial representation is $\mathbf {v}_i^P=\mathbf {M}^P[I^P(v_i^P)]$, where $I^P(v^P_i)$ denotes the word index corresponding to $v_i^P$. Similarly, the initial representation $\mathbf {v}_j^Q$ for a node $v^Q_j\in \mathbf {V}^Q$ is set as the corresponding column vector of $\mathbf {M}^Q$. We denote all the initial node representations as $\mathbf {v}^0=\lbrace \mathbf {v}_i^P\rbrace \cup \lbrace \mathbf {v}_j^Q\rbrace $. <<</Initialization>>> <<<One-step Reasoning>>> Given the graph $\mathcal {G}$ and the node representations $\mathbf {v}$, we use a GNN to perform reasoning in three steps: (1) Node Relatedness Measure: As only a few numbers are relevant for answering a question generally, we compute a weight for each node to by-pass irrelevant numbers in reasoning. Formally, the weight for node $v_i$ is computed as: where $\mathbf {W}_v$ is a weight matrix, and $b_v$ is a bias. (2) Message Propagation: As the role a number plays in reasoning is not only decided by itself, but also related to the context, we propagate messages from each node to its neighbors to help to perform reasoning. As numbers in question and passage may play different roles in reasoning and edges corresponding to different numerical relations should be distinguished, we use relation-specific transform matrices in the message propagation. Formally, we define the following propagation function for calculating the forward-pass update of a node: where $\widetilde{\mathbf {v}}^{\prime }_i$ is the message representation of node $v_i$, $\texttt {r}_{ji}$ is the relation assigned to edge $e_{ji}$, $\mathbf {W}^{\texttt {r}_{ji}}$ are relation-specific transform matrices, and $\mathcal {N}_i=\lbrace j|(v_j,v_i)\in \mathbf {E}\rbrace $ is the neighbors of node $v_i$. 
For each edge $e_{ji}$, $\texttt {r}_{ji}$ is determined by the following two attributes: Number relation: $>$ or $\le $; Node types: the two nodes of the edge corresponding to two numbers that: (1) both from the question ($\text{q-q}$); (2) both from the passage ($\text{p-p}$); (3) from the question and the passage respectively ($\text{q-p}$); (4) from the passage and the question respectively ($\text{p-q}$). Formally, $\texttt {r}_{ij}\in \lbrace >,\le \rbrace \times \lbrace \text{q-q},\text{p-p},\text{q-p},\text{p-q}\rbrace $. (3) Node Representation Update: As the message representation obtained in the previous step only contains information from the neighbors, it needs to be fused with the node representation to combine with the information carried by the node itself, which is performed as: where $\mathbf {W}_f$ is a weight matrix, and $\mathbf {b}_f$ is a bias vector. We denote the entire one-step reasoning process (Eq. DISPLAY_FORM26-DISPLAY_FORM30) as a single function As the graph $\mathcal {G}$ constructed in Sec. SECREF19 has encoded the numerical relations via its topology, the reasoning process is numerically-aware. <<</One-step Reasoning>>> <<<Multi-step Reasoning>>> By single-step reasoning, we can only infer relations between adjacent nodes. However, relations between multiple nodes may be required for certain tasks, e.g., sorting. Therefore, it is essential to perform multi-step reasoning, which can be done as follows: where $t\ge 1$. Suppose we perform $K$ steps of reasoning, $\mathbf {v}^K$ is used as $\mathbf {U}$ in Eq. DISPLAY_FORM10. <<</Multi-step Reasoning>>> <<</Numerical Reasoning>>> <<</Methodology>>> <<<Experiments>>> <<<Dataset and Evaluation Metrics>>> We evaluate our proposed model on DROP dataset BIBREF6, which is a public numerical MRC dataset. The DROP dataset is constructed by crowd-sourcing, which asks the annotators to generate question-answer pairs according to the given Wikipedia passages, which require numerical reasoning such as addition, counting, or sorting over numbers in the passages. There are $77,409$ training samples, $9,536$ development samples and $9,622$ testing samples in the dataset. In this paper, we adopt two metrics including Exact Match (EM) and numerically-focused F1 scores to evaluate our model following BIBREF6. The numerically-focused F1 is set to be 0 when the predicted answer is mismatched for those questions with the numeric golden answer. <<</Dataset and Evaluation Metrics>>> <<<Baselines>>> For comparison, we select several public models as baselines including semantic parsing models: [topsep=2pt, itemsep=0pt] Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations; OpenIE BIBREF6, KDG with open information extraction based sentence representations; SRL BIBREF6, KDG with semantic role labeling based sentence representations; and traditional MRC models: [topsep=2pt, itemsep=0pt] BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage; QANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage; BERT BIBREF23, a pre-trained bidirectional Transformer-based language model which achieves state-of-the-art performance on lots of public MRC datasets recently; and numerical MRC models: [topsep=2pt, itemsep=0pt] NAQANet BIBREF6, a numerical version of QANet model. 
NAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real numbers (e.g. “2.5”), richer arithmetic expressions, data augmentation, etc. The enhancements are also used in our NumNet model and the details are given in the Appendix. <<</Baselines>>> <<<Experimental Settings>>> In this paper, we tune our model on the development set and use a grid search to determine the optimal parameters. The dimensions of all the representations (e.g., $\mathbf {Q}$, $\mathbf {P}$, $\mathbf {M}^Q$, $\mathbf {M}^P$, $\mathbf {U}$, $\mathbf {M}_0^{\prime }$, $\mathbf {M}_0$ and $\mathbf {v}$) are set to 128. If not specified, the number of reasoning steps $K$ is set to 3. Since other parameters have little effect on the results, we simply follow the settings used in BIBREF6. We use the Adam optimizer BIBREF24 with $\beta _1=0.8$, $\beta _2=0.999$, $\epsilon =10^{-7}$ to minimize the objective function. The learning rate is $5 \times 10^{-4}$, the L2 weight decay $\lambda $ is $10^{-7}$, and the maximum gradient norm for clipping is 5. We also apply an exponential moving average with a decay rate of $0.9999$ to all trainable variables. The model is trained with a batch size of 16 for 40 epochs. Passages and questions are trimmed to 400 and 50 tokens respectively during training, and to $1,000$ and 100 tokens respectively during prediction. <<</Experimental Settings>>> <<<Overall Results>>> The performance of our NumNet model and the baselines on the DROP dataset is shown in Table TABREF47. From the results, we observe that: (1) Our NumNet model achieves better results on both the development and testing sets of DROP than the semantic parsing-based models, the traditional MRC models, and even the numerical MRC models NAQANet and NAQANet+. The reason is that our NumNet model can make full use of the numerical comparison information over numbers in both the question and the passage via the proposed NumGNN module. (2) Our implemented NAQANet+ performs much better than the original version of NAQANet, which verifies the effectiveness of our proposed enhancements to the baseline. <<</Overall Results>>> <<<Effect of GNN Structure>>> In this part, we investigate the effect of different GNN structures on the DROP development set. The results are shown in Table TABREF51. “Comparison”, “Number” and “ALL” correspond to the comparing-question subset, the number-type answer subset, and the entire development set, respectively. If we replace the proposed numerically-aware graph (Sec. SECREF19) with a fully connected graph, our model falls back to a traditional GNN, denoted as “GNN” in the table. Moreover, “- question num” denotes that the numbers in the question are not included in the graph, and “- $\le $ type edge” and “- $>$ type edge” denote that edges of the $\le $ and $>$ types are not adopted, respectively. As shown in Table TABREF51, our proposed NumGNN leads to statistically significant improvements over the traditional GNN on both EM and F1 scores, especially for comparing questions. This indicates that considering the comparison information over numbers can effectively help numerical reasoning for comparing questions. Moreover, we find that the numbers in the question are often relevant to the numerical reasoning needed to answer it, so including question numbers in NumGNN achieves better performance. The results also show that encoding the “greater relation” and the “lower or equal relation” simultaneously in the graph benefits our model. 
<<</Effect of GNN Structure>>> <<<Effect of GNN Layer Number>>> The number of NumGNN layers represents the numerical reasoning ability of our models. A $K$-layer version has the ability for $K$-step numerical inference. In this part, we additionally perform experiments to understand the values of the numbers of NumGNN layers. From Figure FIGREF52, we could observe that: (1) The 2-layer version of NumNet achieves the best performance for the comparing questions. From careful analysis, we find that most comparing questions only require at most 2-step reasoning (e.g., “Who was the second oldest player in the MLB, Clemens or Franco?”), and therefore the 3-layer version of NumNet is more complex but brings no gains for these questions. (2) The performance of our NumNet model on the overall development set is improved consistently as the number of GNN layers increases. The reason is that some of the numerical questions require reasoning over many numbers in the passage, which could benefit from the multi-step reasoning ability of multi-layer GNN. However, further investigation shows that the performance gain is not stable when $K\ge 4$. We believe it is due to the intrinsic over smoothing problem of GNNs BIBREF25. <<</Effect of GNN Layer Number>>> <<<Case Study>>> We further give some examples to show why incorporating comparing information over numbers in the passage could help numerical reasoning in MRC in Table TABREF53. For the first case, we observe that NAQANet+ gives a wrong prediction, and we find that NAQANet+ will give the same prediction for the question “Which age group is smaller: under the age of 18 or 18 and 24?”. The reason is that NAQANet+ cannot distinguish which one is larger for $10.1\%$ and $56.2\%$. For the second case, NAQANet+ cannot recognize the second longest field goal is 22-yard and also gives a wrong prediction. For these two cases, our NumNet model could give the correct answer through the numeric reasoning, which indicates the effectiveness of our NumNet model. <<</Case Study>>> <<<Error Analysis>>> To investigate how well our NumNet model handles sorting/comparison questions and better understand the remaining challenges, we perform an error analysis on a random sample of NumNet predictions. We find that: (1) Our NumNet model can answer about 76% of sorting/comparison questions correctly, which indicates that our NumNet model has achieved numerical reasoning ability to some extend. (2) Among the incorrectly answered sorting/comparison questions, the most ones (26%) are those whose golden answers are multiple nonadjacent spans (row 1 in Table TABREF54), and the second most ones (19%) are those involving comparison with an intermediate number that does not literally occur in the document/question but has to be derived from counting or arithmetic operation (row 1 in Table TABREF54). <<</Error Analysis>>> <<<Discussion>>> By combining the numerically-aware graph and the NumGNN together, our NumNet model achieves the numerical reasoning ability. On one hand, the numerically-aware graph encodes numbers as nodes and relationships between them as the edges, which is required for numerical comparison. On the other hand, through one-step reasoning, our NumGNN could perform comparison and identify the numerical condition. After multiple-step reasoning, our NumGNN could further perform sorting. 
However, since the numerically-aware graph is pre-defined, our NumNet is not applicable to the case where an intermediate number has to be derived (e.g., from arithmetic operation) in the reasoning process, which is a major limitation of our model. <<</Discussion>>> <<</Experiments>>> <<<Conclusion and Future Work>>> Numerical reasoning skills such as addition, subtraction, sorting and counting are naturally required by machine reading comprehension (MRC) problems in practice. Nevertheless, these skills are not taken into account explicitly for most existing MRC models. In this work, we propose a numerical MRC model named NumNet which performs explicit numerical reasoning while reading the passages. To be specific, NumNet encodes the numerical relations among numbers in the question and passage into a graph as its topology, and leverages a numerically-aware graph neural network to perform numerical reasoning on the graph. Our NumNet model outperforms strong baselines with a large margin on the DROP dataset. In the future, we will explore the following directions: (1)As we use a pre-defined reasoning graph in our model, it is incapable of handling reasoning process which involves intermediate numbers that not presented in the graph. How to incorporate dynamic graph into our model is an interesting problem. (2) Compared with methods proposed for arithmetic word problems (AWPs), our model has better natural language understanding ability. However, the methods for AWPs can handle much richer arithmetic expressions. Therefore, how to combine both of their abilities to develop a more powerful numerical MRC model is an interesting future direction. (3) Symbolic reasoning plays a crucial role in human reading comprehension. Our work integrates numerical reasoning, which is a special case of symbolic reasoning, into traditional MRC systems. How to incorporate more sophisticated symbolic reasoning abilities into MRC systems is also a valuable future direction. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Experiments, Abstract" ], "type": "disordered_section" }
2001.10179
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Multi-modal Sentiment Analysis using Super Characters Method on Low-power CNN Accelerator Device <<<Abstract>>> Recent years NLP research has witnessed the record-breaking accuracy improvement by DNN models. However, power consumption is one of the practical concerns for deploying NLP systems. Most of the current state-of-the-art algorithms are implemented on GPUs, which is not power-efficient and the deployment cost is also very high. On the other hand, CNN Domain Specific Accelerator (CNN-DSA) has been in mass production providing low-power and low cost computation power. In this paper, we will implement the Super Characters method on the CNN-DSA. In addition, we modify the Super Characters method to utilize the multi-modal data, i.e. text plus tabular data in the CL-Aff sharedtask. <<</Abstract>>> <<<Introduction>>> The need to classify sentiment based on the multi-modal input arises in many different problems in customer related marketing fields. Super Characters BIBREF0 is a two-step method for sentiment analysis. It first converts text into images; then feeds the images into CNN models to classify the sentiment. Sentiment classification performance on large text contents from customer online comments shows that the Super Character method is superior to other existing methods. The Super Characters method also shows that the pretrained models on a larger dataset help improve accuracy by finetuning the CNN model on a smaller dataset. Compared with from-scratch trained Super Characters model, the finetuned one improves the accuracy from 95.7% to 97.8% on the well-known Chinese dataset of Fudan Corpus. Squared English Word (SEW) BIBREF1 is an extension of the Super Characters method into Latin Languages. With the wide availability of low-power CNN accelerator chips BIBREF2 BIBREF3, Super Characters method has the great potential to be deployed in large scale by saving power and fast inference speed. In addition, it is easy to deploy as well. The recent work also extend its applications to chatbot BIBREF4, image captioning BIBREF5, and also tabular data machine learning BIBREF6. The CL-AFF Shared TaskBIBREF7 is part of the Affective Content Analysis workshop at AAAI 2020. It builds upon the OffMyChest datasetBIBREF8, which contains 12,860 samples of training data and 5,000 samples of testing data. Each sample is a multi-modal input containing both text and tabular data. The text input is an English sentence from Reddit. The tabular data is the corresponding log information for each sentence, like wordcount, created utc time and etc. And each sample has six sets of binary classification labels, EmotionDisclosure?(Yes$|$No), InformationDisclosure?(Yes$|$No), Support?(Yes$|$No), EmmotionSupport?(Yes$|$No), InformationSupport?(Yes$|$No), GeneralSupport?(Yes$|$No). In this paper, we will apply Super Characters on this data set to classify the muti-modal input. <<</Introduction>>> <<<Super Characters for Multi-modal Sentiment Analysis and Low-Power Hardware Solution>>> For multi-modal sentiment analysis, we can simply split the image into two parts. One for the text input, and the other for the tabular data. Such that both can be embedded into the Super Characters image. 
The CNN accelerator chip comes with a Model Development Kit (MDK) for CNN model training, which takes the two-dimensional Super Characters images as input and produces the fixed-point model. The Software Development Kit (SDK) is then used to load the model into the chip and to send commands to the CNN accelerator chip, such as reading an image or forward-passing the image through the network to get the inference result. The advantage of using the CNN accelerator is low power: it consumes only 300mW for an input of size 3x224x224 RGB image at a speed of 140fps. Compared with other models using a GPU or FPGA, this solution implements the heavy-lifting DNN computations in the CNN accelerator chip, and the host computer is only responsible for memory reads/writes to generate the designed Super Characters image. This has shown good results in system implementations for NLP applications BIBREF9. <<</Super Characters for Multi-modal Sentiment Analysis and Low-Power Hardware Solution>>> <<<Experiments>>> <<<Data Exploration>>> The training data set has 12,860 samples with 16 columns. The first ten columns are attributes, including sentenceid, author, nchar, created_utc, score, subreddit, label, full_text, wordcount, and id. The other six columns are labels for each of the tasks of Emotion_disclosure, Information_disclosure, Support, Emmotion_support, Information_support, and General_support. Each task is a binary classification problem based on the ten attributes, so there will be 60 models to be trained for the 10-fold validation. The test data set has 5000 samples with only the ten columns of attributes. The system runs will give labels on these test samples based on the 10-fold training. For the training data, there are 3634 unique ids out of 12,860 samples. For the testing data, this number is only 2443 out of 5000, meaning some of the records may come from the same discussion thread. The unique authors number 7556 for training and 3769 for testing, which means some authors are active enough to have published more than one comment. Based on this, we have considered including author names in the multi-modal model as well, since a comment may be biased by the personality of its author. The maximum length of an author's name is 20 characters, if SEW BIBREF1 is to be used to project the names onto a two-dimensional embedding. On the other hand, nchar, which indicates the number of characters in the full_text, has a maximum value of 9993, and the maximum wordcount is 481. The column “label" has 37 unique values, which are different combinations of strings like “husband", “wife", “boyfriend", “girlfriend", and their abbreviations like “bf", “gf". The column “subreddit" is a categorical attribute with values in (“offmychest", “CasualConversation"). After converting the Unix time in the “created_utc" column, we found that the records were generated from 2017 to 2018. The score column has integers ranging from -44 to 1838 with 251 unique values. <<</Data Exploration>>> <<<Design SuperCharacters Image>>> The sentence length distribution is given in Figure FIGREF3. The layout design for the full_text is based on this. Since we present the English words using the SEW BIBREF1 method, the size of each English word on the SuperCharacters image is best calculated as (224/N)*(224/N) if the whole image is set to 224x224, where N is an integer. The dimension is set to 224x224 because of the chip specification. 
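As a rough illustration of the layout just described (a sketch under our own assumptions rather than the authors' implementation; the function names are placeholders and the actual rendering of a word as an SEW glyph is abstracted away), the 224x224 canvas can be divided into an N x N grid of word cells, with the top rows holding the full_text and, in the multi-modal variants, the lower rows holding tabular attributes:

# Illustrative layout sketch (hypothetical names; glyph drawing abstracted away).
CANVAS = 224

def layout_super_characters(full_text_words, tabular_values, n=8, text_rows=5):
    cell = CANVAS // n                      # e.g. 224/8 = 28 pixels per word cell
    placements = []                         # (x, y, cell_size, token) tuples
    # text region: the first `text_rows` rows, filled left-to-right, top-to-bottom
    for k, word in enumerate(full_text_words[: n * text_rows]):
        row, col = divmod(k, n)
        placements.append((col * cell, row * cell, cell, word))
    # tabular region: the remaining rows hold attribute values
    attr_rows = CANVAS // cell - text_rows
    for k, value in enumerate(tabular_values[: n * attr_rows]):
        row, col = divmod(k, n)
        placements.append((col * cell, (text_rows + row) * cell, cell, str(value)))
    return placements

# e.g. an 8x8 grid with 5 rows of text and 3 rows of attributes
cells = layout_super_characters("i just wanted to say thanks".split(),
                                ["offmychest", 6, 12, "wife"], n=8, text_rows=5)

The design options described in the following subsections vary N, the text cut length, and how many rows (if any) are reserved for the tabular attributes.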
<<<Design Option One>>> In this design setting, we only include the full_text information and ignore the other attributes. If N=7, it means each row has 7 words, and each word has (224/7)*(224/7)=32*32 pixels. In this setting we can hold up to 49 words in full_text. For the records with words more than 49, the full_text will ignore the words from the 49th. In this case, only 0.86% of the training data and 1.98% of the testing data will have to cut the sentence at 49 words. An example of this design setting is in Figure FIGREF4. <<</Design Option One>>> <<<Design Option Two>>> If N=8, it means each row has 8 words, and each word has (224/8)*(224/8)=28*28 pixels. And if we set the cutlength=40, it means that we will have 5 rows for the full_text, and the other 3 rows will not be used for text, but all the space of the 224*(3*28) square pixels will be used for the tabular data given in the attributes other than full_text". For the records with words more than 40, the full_text will ignore the words from the 40th. In this case, only 2.03% of the training data and 4.14% of the testing data will have to cut the sentence at 40 words. We have the option to use the bottom part of the image to embed the other attributes. The id and sentenceid should be unrelated to the prediction, so these two attributes are not included. One example having the full_text, author, wordcount, created_utc, subreddit, score, nchar, and label is given in Figure FIGREF4. However, the 10-fold training accuracy on this design is not good. This is partially because some of the attributes do not contribute to prediction but adds more noise instead. For example, the created time may not be very related to the prediction of the tasks but occupies a good portion of the embedding area of the image. In addition, since most of the wordcounts are centered around less than twenty, the two-dimensional embeddings of the full_text should have better resolution if the cutlength is smaller than 40. So the font size will be larger and easier for CNN to learn. <<</Design Option Two>>> <<<Design Option Three>>> This design setting cuts the cut length of the full_text sentence to 42, and leave the space of the last row for some important attributes, including subreddit, wordcount, score, and label. An example of this design setting is in Figure FIGREF4. <<</Design Option Three>>> <<<Design Option Four>>> This is data augmentation for Design Option Three. For a small data set, we need more data with the same semantic meaning generated from the raw labeled data without adding any noise. For Super Characters, the text are projected into the image. Adding some spaces at the front should not change the semantic meaning, and at the same time increased the number of generated Super Characters images. For each sentence, if the sentence length is less than 42, we will add one space at the front and then generate the Super Characters image. This process iterates until the length of the sentence with the added space reaches 42. An example of this design setting is in Figure FIGREF4. <<</Design Option Four>>> <<</Design SuperCharacters Image>>> <<<Experimental Results>>> After comparison, only Design Option One and Design Option Four are kept for the entire 10-fold training and validation. For the system runs, it is limited to submit a maximum of 10 system runs. So, only the first five 10-folds models on both Design Option One and Design Option Four are tested against the 5000 testing data and submitted. 
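The space-padding augmentation of Design Option Four above is simple enough to sketch directly (again an illustration under our own assumptions, with hypothetical names, rather than the authors' code): each copy of a short sentence is shifted right by one additional leading blank cell until the cut length of 42 is reached.

# Sketch of the space-padding augmentation from Design Option Four above.
def pad_augment(words, cut_length=42):
    """Return one token list per left shift, up to the cut length."""
    variants = []
    n_pads = max(cut_length - len(words), 0)
    for k in range(n_pads + 1):
        variants.append([" "] * k + words[:cut_length])
    return variants

# A 6-word sentence yields 37 padded variants (0..36 leading blank cells),
# all sharing the same semantic meaning and labels.
augmented = pad_augment("i am so proud of her".split())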
The details of these 10 system runs are given in Table TABREF10$-$TABREF15. In general, Design Option Four is a little better than Design Option One, but these results are still not good: they are only slightly better than constantly predicting one class. We can see that the results on this OffMyChest data are not as good as on the AffCon19 CL-AFF shared task, and compared with Super Characters on the Wikipedia data set, the accuracy on this data is lower as well. Several methods could be used to further improve the accuracy. First, a pretrained model may help; for this shared task, the number of training examples is relatively small for learning the complex definitions of these 6 tasks. Second, other data augmentation methods, such as replacing a word with its synonyms, could be introduced to further boost the accuracy. Third, the data set is skewed, so we could balance it by upsampling. <<</Experimental Results>>> <<</Experiments>>> <<<Conclusion>>> In this paper, we proposed a modified version of Super Characters in order to make it work on multi-modal data; in the case of this AffCon CL-AFF shared task, the multi-modal data includes text data and tabular data. In addition, we deploy the models on low-power CNN chips, which demonstrates the feasibility of applying DNN models with consideration of real-world practical concerns such as power and speed. The Super Characters method is relatively new and is starting to attract attention for application scenarios. Pretrained models on large corpora would be very helpful for the Super Characters method, as the success of pretraining has been observed for NLP models like ELMo and BERT. For fine-tuning on small datasets, data augmentation should further boost the generalization capability. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Experiments, Introduction" ], "type": "disordered_section" }
1911.03842
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation <<<Abstract>>> Models often easily learn biases present in the training data, and their predictions directly reflect this bias. We analyze the presence of gender bias in dialogue and examine the subsequent effect on generative chitchat dialogue models. Based on this analysis, we propose a combination of three techniques to mitigate bias: counterfactual data augmentation, targeted data collection, and conditional training. We focus on the multi-player text-based fantasy adventure dataset LIGHT as a testbed for our work. LIGHT contains gender imbalance between male and female characters with around 1.6 times as many male characters, likely because it is entirely collected by crowdworkers and reflects common biases that exist in fantasy or medieval settings. We show that (i) our proposed techniques mitigate gender bias by balancing the genderedness of generated dialogue utterances; and (ii) they work particularly well in combination. Further, we show through various metrics---such as quantity of gendered words, a dialogue safety classifier, and human evaluation---that our models generate less gendered, but still engaging chitchat responses. <<</Abstract>>> <<<Introduction>>> Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicity with specific activities BIBREF1. Recent work in natural language processing has found similar biases, such as in word embeddings BIBREF2, BIBREF3, BIBREF4, object classification BIBREF5, natural language inference BIBREF6, and coreference resolution BIBREF7. Less work has focused on the biases present in dialogue utterances BIBREF8, BIBREF9, despite bias being clearly present in human interactions, and the rapid development of dialogue agents for real-world use-cases, such as interactive assistants. In this work we aim to address this by focusing on mitigating gender bias. We use the dialogue dataset from the LIGHT text adventure world BIBREF0 as a testbed for our investigation into de-biasing dialogues. The dataset consists of a set of crowd-sourced locations, characters, and objects, which form the backdrop for the dialogues between characters. In the dialogue creation phase, crowdworkers are presented with personas for characters—which themselves were written by other crowdworkers—that they should enact; the dialogues the crowdworkers generate from these personas form the dialogue dataset. Dialogue datasets are susceptible to reflecting the biases of the crowdworkers as they are often collected solely via crowdsourcing. Further, the game's medieval setting may encourage crowdworkers to generate text which accentuates the historical biases and inequalities of that time period BIBREF10, BIBREF11. 
However, despite the fact that the dialogues take place in a fantasy adventure world, LIGHT is a game and thus we are under no obligation to recreate historical biases in this environment, and can instead use creative license to shape it into a fun world with gender parity. We use the dialogues in LIGHT because we find that it is highly imbalanced with respect to gender: there are over 60% more male-gendered characters than female. We primarily address the discrepancy in the representation of male and female genders, although there are many characters that are gender neutral (like “trees") or for which the gender could not be determined. We did not find any explicitly identified non-binary characters. We note that this is a bias in and of itself, and should be addressed in future work. We show that training on gender biased data leads existing generative dialogue models to amplify gender bias further. To offset this, we collect additional in-domain personas and dialogues to balance gender and increase the diversity of personas in the dataset. Next, we combine this approach with Counterfactual Data Augmentation and methods for controllable text generation to mitigate the bias in dialogue generation. Our proposed techniques create models that produce engaging responses with less gender bias. <<</Introduction>>> <<<Sources of Bias in Dialogue Datasets>>> <<<Bias in Character Personas>>> Recent work in dialogue incorporates personas, or personality descriptions that ground speaker's chat, such as I love fishing BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Personas have been shown to increase engagingness and improve consistency. However, they can be a starting point for bias BIBREF17, BIBREF18, BIBREF9, as bias in the personas propagates to subsequent conversations. <<<Qualitative Examination.>>> Analyzing the personas in LIGHT qualitatively, we find many examples of bias. For example, the character girl contains the line I regularly clean and cook dinner. Further examples are given in Table TABREF1. <<</Qualitative Examination.>>> <<<Quantitative Examination.>>> We quantitatively analyze bias by first examining whether the existing personas are offensive, and second, evaluating their gender balance. To assess the pervasiveness of unsafe content present in personas, we asked three independent annotators to examine each character's persona for potentially offensive content. If annotators selected that the content was offensive or maybe offensive, they were asked to place it in one of four categories – racist, sexist, classist, other – and to provide a reason for their response. Just over 2% of personas were flagged by at least one annotator, and these personas are removed from the dataset. We further examined gender bias in personas. Annotators were asked to label the gender of each character based on their persona description (choosing “neutral" if it was not explicit in the persona). This annotation is possible because some personas include lines such as I am a young woman, although the majority of personas do not mention an explicit gender. Annotators found nearly 50% more male-gendered characters than female-gendered characters (Table TABREF5). While annotators labeled personas as explicitly male, female, or gender-neutral, gender bias may still exist in personas beyond explicit sentences such as I am a young man. For example, personas can contain gendered references such as I want to follow in my father's footsteps rather than mother's footsteps. 
These relational nouns BIBREF19, BIBREF20 such as father encode a specific relationship that can be gender biased. In this example, that relationship would be between the character and a man, rather than a woman. We analyzed the frequency of references to other gendered characters in the personas by counting the appearance of gendered words using the list compiled by BIBREF21 (for example he vs. she), and find that men are disproportionately referred to in the personas: there are nearly 3x as many mentions of men than women. <<</Quantitative Examination.>>> <<</Bias in Character Personas>>> <<<Bias in Dialogue Utterances>>> After analyzing the bias in LIGHT personas, we go on to analyze the bias in dialogues created from those personas and how to quantify it. <<<Measuring Bias.>>> Sexism is clearly present in many datasets BIBREF9, but finding a good way to measure sexism, especially at scale, can be challenging. A simple answer would be to rely on crowdworkers operating under their own notions of “sexism” to annotate the dialogues. However, in our experience, crowdworkers hold a range of views, often different from ours, as to what counts as sexism, making mere human evaluation far from sufficient. Note that the original LIGHT personas and dialogues were generated by crowdworkers, leaving little reason to believe that crowdworkers will be proficient at spotting the sexism that they themselves embued the dataset with in the first place. Therefore, we supplement our crowdworker-collected human annotations of gender bias with additional quantitative measurements: we measure the ratio of gendered words (taken from the union of several existing gendered word lists that were each created through either automatic means, or by experts BIBREF21, BIBREF22, BIBREF23), and we run an existing dialogue safety classifier BIBREF24 to measure offensiveness of the dialogues. <<</Measuring Bias.>>> <<</Bias in Dialogue Utterances>>> <<</Sources of Bias in Dialogue Datasets>>> <<<Methodology: Mitigating Bias in Generative Dialogue>>> We explore both data augmentation and algorithmic methods to mitigate bias in generative Transformer dialogue models. We describe first our modeling setting and then the three proposed techniques for mitigating bias. Using (i) counterfactual data augmentation BIBREF25 to swap gendered words and (ii) additional data collection with crowdworkers, we create a gender-balanced dataset. Further, (iii) we describe a controllable generation method which moderates the male and female gendered words it produces. <<<Models>>> Following BIBREF0, in all of our experiments we fine-tune a large, pre-trained Transformer encoder-decoder neural network on the dialogues in the LIGHT dataset. The model was pre-trained on Reddit conversations, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io. During pre-training, models were trained to generate a comment conditioned on the full thread leading up to the comment. Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments, resulting in approximately $2,200$ million training examples. The model is a 8 layer encoder, 8 layer decoder with 512 dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of BIBREF26. For generation, we decode sequences with beam search with beam size 5. 
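For reference, the stated architecture can be approximated with PyTorch's generic nn.Transformer module as a stand-in; this is only a configuration sketch and does not reproduce the ParlAI implementation, the Reddit pre-training, or the beam-search decoding (beam size 5) used in the paper.

# Configuration sketch only: 8 encoder layers, 8 decoder layers,
# 512-dimensional embeddings, 16 attention heads.
import torch.nn as nn

model = nn.Transformer(
    d_model=512,
    nhead=16,
    num_encoder_layers=8,
    num_decoder_layers=8,
)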
<<</Models>>> <<<Counterfactual Data Augmentation>>> One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather. <<</Counterfactual Data Augmentation>>> <<<Positive-Bias Data Collection>>> To create a more gender-balanced dataset, we collect additional data using a Positive-Bias Data Collection (Pos. Data) strategy. <<<Gender-swapping Existing Personas>>> There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character. <<</Gender-swapping Existing Personas>>> <<<New and Diverse characters>>> As discussed in Section SECREF2, it is insufficient to simply balance references to men and women in the dataset, as there may be bias in the form of sexism. While it is challenging to detect sexism, we attempt to offset this type of bias by collecting a set of interesting and independent characters. We do this by seeding workers with examples like adventurer with the persona I am an woman passionate about exploring a world I have not yet seen. I embark on ambitious adventures. We give the additional instruction to attempt to create diverse characters. Even with this instruction, crowdworkers still created roughly 3x as many male-gendered characters as female-gendered characters. We exclude male-gendered characters created in this fashion. In combination with the gender swapped personas above, this yields a new set of 2,676 character personas (compared to 1,877 from the original dataset), for which the number of men and women and the number of references to male or female gendered words is roughly balanced: see Table TABREF5. <<</New and Diverse characters>>> <<<New dialogues>>> Finally, we collect additional dialogues with these newly created gender balanced character personas, favoring conversations that feature female gendered characters to offset the imbalance in the original data. We added further instructions for annotators to be mindful of gender bias during their conversations, and in particular to assume equality between genders – social, economic, political, or otherwise – in this fantasy setting. In total, we collect 507 new dialogues containing 6,658 new dialogue utterances in total (about 6% of the size of the full LIGHT dataset). <<</New dialogues>>> <<</Positive-Bias Data Collection>>> <<<Conditional Training>>> Bias in dialogue can manifest itself in various forms, but one form is the imbalanced use of gendered words. For example, LIGHT contains far more male-gendered words than female-gendered words rather than an even split between words of both genders. To create models that can generate a gender-balanced number of gendered words, we propose Conditional Training (CT) for controlling generative model output BIBREF27, BIBREF28, BIBREF29, BIBREF30. 
Previous work proposed a mechanism to train models with specific control tokens so models learn to associate the control token with the desired text properties BIBREF28, then modifying the control tokens during inference to produce the desired result. Prior to training, each dialogue response is binned into one of four bins – $\text{F}^{0/+}\text{M}^{0/+}$ – where $\text{F}^{0}$ indicates that there are zero female gendered words in the response and $\text{F}^{+}$ indicates the presence of at least one female gendered word. The gendered words are determined via an aggregation of existing lists of gendered nouns and adjectives from BIBREF21, BIBREF22, BIBREF23. The bins are used to train a conditional model by appending a special token (indicating the bin for the target response) to the end of the input which is given to the encoder. At inference time, the bins can be manipulated to produce dialogue outputs with various quantities of gendered words. <<</Conditional Training>>> <<</Methodology: Mitigating Bias in Generative Dialogue>>> <<<Results>>> We train generative Transformer models using each of these methods – Counterfactual Data Augmentation that augments with swaps of gendered words (CDA, §SECREF19), adding new dialogues (Positive-Bias Data Collection, §SECREF20), and controllable generation to control the quantity of gendered words (CT, §SECREF24) – and finally combine all of these methods together (ALL). <<<Bias is Amplified in Generation>>> Existing Transformer generative dialogue models BIBREF31, BIBREF32, BIBREF0 are trained to take as input the dialogue context and generate the next utterance. Previous work has shown that machine learning models reflect the biases present in data BIBREF4, BIBREF3, and that these biases can be easy to learn compared to more challenging reasoning BIBREF2, BIBREF33. Generative models often use beam search or top-k sampling BIBREF34 to decode, and these methods are well-known to produce generic text BIBREF35, which makes them susceptible statistical biases present in datasets. As shown in Table TABREF11, we find that existing models actually amplify bias. When the trained model generates gendered words (i.e., words from our gendered word list), it generates male-gendered words the vast majority of the time – even on utterances for which it is supposed to generate only female-gendered words (i.e., the gold label only contains female-gendered words), it generates male-gendered words nearly $78\%$ of the time. Additionally, following BIBREF8, we run an offensive language classifier on the gold responses and the model generated utterances (Table TABREF16) and find that the model produces more offensive utterances than exist in the dataset. <<</Bias is Amplified in Generation>>> <<<Genderedness of Generated Text>>> We analyze the performance of the various techniques by dividing the test set using the four genderedness bins – $\text{F}^{0}\text{M}^{0}$, $\text{F}^{0}\text{M}^{+}$, $\text{F}^{+}\text{M}^{0}$, and $\text{F}^{+}\text{M}^{+}$ – and calculate the F1 word overlap with the gold response, the percentage of gendered words generated (% gend. words), and the percentage of male-gendered words generated (relative to the sum total of gendered words generated by the model). We compare to the gold labels from the test set and a baseline model that does not use any of the bias mitigation techniques. Results for all methods are displayed in Table TABREF11. 
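A small illustrative sketch of two pieces of this pipeline, the CDA gender swap and the $\text{F}^{0/+}\text{M}^{0/+}$ bin used both for conditional training and for splitting the test set, is given below; the word lists are tiny stand-ins for the aggregated published lists cited above, and all function and variable names are our own.

# Toy word lists (stand-ins for the aggregated gendered word lists).
FEMALE = {"she", "her", "woman", "queen", "grandmother"}
MALE = {"he", "his", "man", "king", "grandfather"}
SWAP = {"she": "he", "he": "she", "her": "his", "his": "her",
        "woman": "man", "man": "woman", "queen": "king", "king": "queen",
        "grandmother": "grandfather", "grandfather": "grandmother"}

def cda_swap(utterance):
    """Counterfactual copy of an utterance with gendered words swapped (CDA)."""
    return " ".join(SWAP.get(tok, tok) for tok in utterance.lower().split())

def genderedness_bin(utterance):
    """Return one of 'F0M0', 'F0M+', 'F+M0', 'F+M+' for a response (CT bins)."""
    toks = set(utterance.lower().split())
    f = "F+" if toks & FEMALE else "F0"
    m = "M+" if toks & MALE else "M0"
    return f + m

# Conditional training appends the bin of the target response as a control
# token to the end of the encoder input:
context, response = "who rules this land ?", "the queen rules this land"
encoder_input = context + " " + genderedness_bin(response)   # -> "... F+M0"
counterfactual = cda_swap(response)                          # -> "the king rules this land"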
Each of the methods we explore improve in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find combining all methods in one – the ALL model is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough — the Positive-Bias Data Collection model does not achieve as good results. Both the CT and ALL models benefit from knowing the data split ($\text{F}^{0}\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth. <<</Genderedness of Generated Text>>> <<<Conditional Training Controls Gendered Words>>> Our proposed CT method can be used to control the use of gendered words in generated dialogues. We examine the effect of such training by generating responses on the test set by conditioning the ALL model on a singular bin for all examples. Results are shown in Figure FIGREF12. Changing the bin radically changes the genderedness of generated text without significant changes to F1. Examples of generated text from both the baseline and the ALL model are shown in Table TABREF31. The baseline model generates male-gendered words even when the gold response contains no gendered words or only female-gendered words, even generating unlikely sequences such as “my name is abigail. i am the king of this kingdom.". <<</Conditional Training Controls Gendered Words>>> <<<Safety of Generated Text>>> Using a dialogue safety classifier BIBREF24, we find that our proposed de-biased models are rated as less offensive compared to the baseline generative Transformer and the LIGHT data (see Table TABREF16). <<</Safety of Generated Text>>> <<<Human Evaluation>>> Finally, we use human evaluation to compare the quality of our de-biasing methods. We use the dialogue evaluation system Acute-Eval BIBREF36 to ask human evaluators to compare two conversations from different models and decide which model is more biased and which model is more engaging. Following Acute-Eval, we collect 100 human and model paired chats. Conversations from a human and baseline model are compared to conversations from a human and the ALL model with all generations set to the $\text{F}^{0}\text{M}^{0}$ gender-neutral control bin. Evaluators are asked which model is more engaging and for which model they find it more difficult to predict the gender of the speaker. We found that asking about difficulty of predicting a speaker's gender was much more effective than asking evaluators to evaluate sexism or gender bias. Figure FIGREF17 shows that evaluators rate the ALL model harder to predict the gender of (statistically significant at $p < 0.01$) while engagingness does not change. Our proposed methods are able to mitigate gender bias without degrading dialogue quality. <<</Human Evaluation>>> <<</Results>>> <<<Conclusion>>> We analyze gender bias in dialogue and propose a general purpose method for understanding and mitigating bias in character personas and their associated dialogues. We present techniques using data augmentation and controllable generation to reduce gender bias in neural language generation for dialogue. We use the dataset LIGHT as a testbed for this work. By integrating these methods together, our models provide control over how gendered dialogue is and decrease the offensiveness of the generated utterances. Overall, our proposed methodology reduces the effect of bias while maintaining dialogue engagingness. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Methodology: Mitigating Bias in Generative Dialogue, Introduction" ], "type": "disordered_section" }
1911.06191
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Microsoft Research Asia's Systems for WMT19 <<<Abstract>>> We Microsoft Research Asia made submissions to 11 language directions in the WMT19 news translation tasks. We won the first place for 8 of the 11 directions and the second place for the other three. Our basic systems are built on Transformer, back translation and knowledge distillation. We integrate several of our rececent techniques to enhance the baseline systems: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA). <<</Abstract>>> <<<Introduction>>> We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place for 8 directions: German$\leftrightarrow $English, German$\leftrightarrow $French, Chinese$\leftrightarrow $English, English$\rightarrow $Lithuanian, English$\rightarrow $Finnish, and Russian$\rightarrow $English, and three other directions were placed second (ranked by teams), which included Lithuanian$\rightarrow $English, Finnish$\rightarrow $English, and English$\rightarrow $Kazakh. Our basic systems are based on Transformer, back translation and knowledge distillation. We experimented with several techniques we proposed recently. In brief, the innovations we introduced are: <<<Multi-agent dual learning (MADL)>>> The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\mathcal {X}$ to domain $\mathcal {Y}$) and dual task (mapping from domain $\mathcal {Y}$ to $\mathcal {X}$ ) to boost the performances of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\leftrightarrow $English and German$\leftrightarrow $French translations. <<</Multi-agent dual learning (MADL)>>> <<<Masked sequence-to-sequence pretraining (MASS)>>> Pre-training and fine-tuning have achieved great success in language understanding. MASS BIBREF3, a pre-training method designed for language generation, adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. It was integrated into our submitted systems for Chinese$\rightarrow $English and English$\rightarrow $Lithuanian translations. <<</Masked sequence-to-sequence pretraining (MASS)>>> <<<Neural architecture optimization (NAO)>>> As well known, the evolution of neural network architecture plays a key role in advancing neural machine translation. Neural architecture optimization (NAO), our newly proposed method BIBREF4, leverages the power of a gradient-based method to conduct optimization and guide the creation of better neural architecture in a continuous and more compact space given the historically observed architectures and their performances. It was applied in English$\leftrightarrow $Finnish translations in our submitted systems. 
<<</Neural architecture optimization (NAO)>>> <<<Soft contextual data augmentation (SCA)>>> While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is relatively limited. SCA BIBREF5 softly augments a randomly chosen word in a sentence by its contextual mixture of multiple related words, i.e., replacing the one-hot representation of a word by a distribution provided by a language model over the vocabulary. It was applied in Russian$\rightarrow $English translation in our submitted systems. <<</Soft contextual data augmentation (SCA)>>> <<</Introduction>>> <<<Our Techniques>>> <<<Masked sequence-to-sequence pre-training (MASS)>>> MASS is a pre-training method for language generation. For machine translation, it can leverage monolingual data in two languages to pre-train a translation model. Given a sentence $x \in \mathcal {X}$, we denote $x^{\setminus u:v}$ as a modified version of $x$ where its fragment from position $u$ to $v$ are masked, $0<u<v<m$ and $m$ is the number of tokens of sentence $x$. We denote $k=v-u+1$ as the number of tokens being masked from position $u$ to $v$. We replace each masked token by a special symbol $[\mathbb {M}]$, and the length of the masked sentence is not changed. $x^{u:v}$ denotes the sentence fragment of $x$ from $u$ to $v$. MASS pre-trains a sequence to sequence model by predicting the sentence fragment $x^{u:v}$ taking the masked sequence $x^{\setminus u:v}$ as input. We use the log likelihood as the objective function: where $\mathcal {X}$, $\mathcal {Y}$ denote the source and target domain. In addition to zero/low-resource setting BIBREF7, we also extend MASS to supervised setting where bilingual sentence pair $(x, y) \in (\mathcal {X}, \mathcal {Y})$ can be leveraged for pre-training. The log likelihood in the supervised setting is as follows: where $[\cdot ;\cdot ]$ represents the concatenation operation. $P(y|x^{\setminus u:v};\theta )$ and $P(x|y^{\setminus u:v};\theta )$ denote the probability of translating a masked sequence to another language, which encourage the encoder to extract meaningful representations of unmasked input tokens in order to predict the masked output sequence. $P(x^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ and $P(y^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ denote the probability of generating the masked source/target segment given both the masked source and target sequences, which encourage the model to extract cross-lingual information. $P(y^{u:v}|x^{\setminus u:v};\theta )$ and $P(x^{u:v}|y^{\setminus u:v};\theta )$ denote the probability of generating the masked fragment given only the masked sequence in another language. More details about MASS can be found in BIBREF3. <<</Masked sequence-to-sequence pre-training (MASS)>>> <<</Our Techniques>>> <<<Submitted Systems>>> <<<English@!START@$\leftrightarrow $@!END@German>>> We submit constrained systems to both English to German and German to English translations, with the same techniques. <<<Dataset>>> We concatenate “Europarl v9”, “News Commentary v14”, “Common Crawl corpus” and “Document-split Rapid corpus” as the basic bilingual dataset (denoted as $\mathcal {B}_0$). Since “Paracrawl” data is noisy, we select 20M bilingual data from this corpus using the script filter_interactive.py. The two parts of bilingual data are concatenated together (denoted as $\mathcal {B}_1$). 
We clean $\mathcal {B}_1$ by normalizing the sentences, removing non-printable characters, and tokenization. We share a vocabulary for the two languages and apply BPE for word segmentation with 35000 merge operations. (We tried different BPE merge operations but found no significant differences.) For monolingual data, we use $120M$ English sentences (denoted as $\mathcal {M}_{\text{en}}$) and $120M$ German sentences (denoted as $\mathcal {M}_{\text{de}}$) from Newscrawl, and preprocess them in the same way as bilingual data. We use newstest 2016 and the validation set and newstest 2018 as the test set. <<</Dataset>>> <<<Model Configuration>>> We use the PyTorch implementation of Transformer. We choose the Transformer_big setting, in which both the encoder and decoder are of six layers. The dropout rate is fixed as $0.2$. We set the batchsize as 4096 and the parameter –update-freq as 16. We apply Adam BIBREF10 optimizer with learning rate $5\times 10^{-4}$. <<</Model Configuration>>> <<<Training Pipeline>>> The pipeline consists of three steps: 1. Pre-train two English$\rightarrow $German translation models (denoted as $\bar{f}_1$ and $\bar{f}_2$) and two German$\rightarrow $English translation models (denoted as $\bar{g}_1$ and $\bar{g}_2$) on $\mathcal {B}_1$; pre-train another English$\rightarrow $German (denoted as $\bar{f}_3$) and German$\rightarrow $English (denoted as $\bar{g}_3$) on $\mathcal {B}_0$. 2. Apply back translation following BIBREF11, BIBREF12. We back-translate $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ using $\bar{f}_3$ and $\bar{g}_3$ with beam search, add noise to the translated sentences BIBREF12, merge the synthetic data with $\mathcal {B}_1$, and train one English$\rightarrow $German model $f_0$ and one German$\rightarrow $English model $g_0$ for seven days on eight V100 GPUs. 3. Apply MADL to $f_0$ and $g_0$. That is, the $F_\alpha $ in Eqn.(DISPLAY_FORM8) is specified as the combination of $f_0,\bar{f}_1,\bar{f}_2$ with equal weights; and $G_\beta $ consists of $g_0,\bar{g}_1,\bar{g}_2$. During training, we will only update $f_0$ and $g_0$. To speed up training, we randomly select $20M$ monolingual English and German sentences from $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ respectively instead of using all monolingual sentences. The eventual output models are denoted as $f_1$ and $g_1$ respectively. This step takes 3 days on four P40 GPUs. <<</Training Pipeline>>> <<<Results>>> The results are summarized in Table TABREF24, which are evaluated by sacreBLEU. The baseline is the average accuracy of models using only bitext, i.e., $\bar{f}_1$ and $\bar{f}_2$ for English$\rightarrow $German translation and $\bar{g}_1$ and $\bar{g}_2$ for German$\rightarrow $English, and BT is the accuracy of the model after back-translation training. As can be seen, back translation improves accuracy. For example, back-translation boosts the BLEU score from $45.6$ to $47.4$ on news18 English$\rightarrow $German translation, which is $1.8$ point improvement. MADL further boosts BLEU to $50.4$, obtaining another 3-point improvement, demonstrating the effectiveness of our method. For the final submission, we accumulate many translation models (trained using bitext, back translation, and MADL, with different random seeds) and do knowledge distillation on the source sentences from WMT14 to WMT19 test sets. Take English$\rightarrow $German translation as an example. 
Denote the English inputs as $\mathcal {T}=\lbrace s_i\rbrace _{i=1}^{N_T}$, where $N_T$ is the size of the test set. For each $s$ in $\mathcal {T}$, we translate $s$ to $d^\prime $ using $M$ English$\rightarrow $German models and eventually obtain where $f^{(j)}$ is the $j$-th translation model we accumulated, $\mathcal {T}$ is the combination of inputs from WMT14 to WMT19. After obtaining $\mathcal {E}$, we randomly select $N_TM$ bitext pairs (denoted as $\mathcal {B}_2$) from $\mathcal {B}_1$ and finetune model $f_1$ on $\mathcal {B}_2\cup \mathcal {E}$. We stop tuning when the BLEU scores of WMT16 (i.e., the validation set) drops. We eventually obtain $44.9$ BLEU score for English$\rightarrow $German and $42.8$ for German$\rightarrow $English on WMT19 test sets and are ranked in the first place in these two translation tasks. <<</Results>>> <<</English@!START@$\leftrightarrow $@!END@German>>> <<<German@!START@$\leftrightarrow $@!END@French>>> For German$\leftrightarrow $French translation, we follow a similar process as the one used to English$\leftrightarrow $German tasks introduced in Section SECREF17. We merge the “commoncrawl”, “europarl-v7” and part of “de-fr.bicleaner07” selected by filter_interactive.py as the bilingual data. We collect $20M$ monolingual sentences for French and $20M$ for German from newscrawl. The data pre-processing rule and training procedure are the same as that used in Section SECREF17. We split $9k$ sentences from the “dev08_14” as the validation set and use the remaining ones as the test set. The results of German$\leftrightarrow $French translation on the test set are summarized in Table TABREF27. Again, our method achieves significant improvement over the baselines. Specifically, MADL boosts the baseline of German$\rightarrow $French and French$\rightarrow $German by 2 and $1.5$ points respectively. Our submitted German$\rightarrow $French is a single system trained by MADL, achieving $37.3$ BLEU on WMT19. The French$\rightarrow $German is an ensemble of three independently trained models, achieving $35.0$ BLEU score. Our systems are ranked in the first place for both German$\rightarrow $French and French$\rightarrow $German in the leaderboard. <<</German@!START@$\leftrightarrow $@!END@French>>> <<<Chinese@!START@$\rightarrow $@!END@English>>> <<<MASS Pre-training>>> We pre-train MASS (Transfomer_big) with both monolingual and bilingual data. We use 100M Chinese and 300M English monolingual sentences for the unsupervised setting (Equation DISPLAY_FORM10), and with a total of 18M and 56M bilingual sentence pairs for the supervised settings (Equation DISPLAY_FORM11). We share the encoder and decoder for all the losses in Equation DISPLAY_FORM10 and DISPLAY_FORM11. We then fine-tune the MASS pre-trained model on both 18M and 56M bilingual sentence pairs to get the baseline translation model for both Chinese$\rightarrow $English and English$\rightarrow $Chinese. <<</MASS Pre-training>>> <<<Back Translation and Knowledge Distillation>>> We randomly choose 40M monolingual sentences for Chinese and English respectively for back translation BIBREF11, BIBREF1 and knowledge distillation BIBREF15, BIBREF16. We iterate back translation and knowledge distillation multiple times, to gradually boost the performance of the model. <<</Back Translation and Knowledge Distillation>>> <<<WMT19 Submission>>> For the WMT19 submission, we conduct fine-tuning and speculation to further boost the accuracy by using the source sentences in the WMT19 test set. 
We first filter the bilingual as well as pseudo-generated data according to the relevance to the source sentences. We use the filter method in BIBREF17 and continue to train the model on the filtered data. Second, we conduct speculation on the test source sentences following the practice in BIBREF17. The final BLEU score of our submission is 39.3, ranked in the first place in the leaderboard. <<</WMT19 Submission>>> <<</Chinese@!START@$\rightarrow $@!END@English>>> <<<English@!START@$\leftrightarrow $@!END@Lithuanian>>> For English$\leftrightarrow $Lithuanian translation, we follow the similar process as that for Chinese$\rightarrow $English task introduced in Section SECREF28. We use all the WMT bilingual data, which is 2.24M after filtration. We use the same English monolingual data as used in Chinese-English. We select 100M Lithuanian monolingual data from official commoncrawl and use all the wiki and news Lithuanian monolingual data provided by WMT. In addition, we crawl 5M Lithuanian news data from LRT website. We share the BPE vocabulary between English and Lithuanian, and the vocabulary size is 65K. All the bilingual and monolingual data are used for MASS pre-training, and all the bilingual data are used for fine-tuning. For iterative back translation and knowledge distillation, we split 24M English monolingual data as well as 12M Lithuanian monolingual data into 5 parts through sampling with replacement, to get different models independently so as to increase diversity in re-ranking/ensemble. Each model uses 8M English monolingual data and 6M Lithuanian monolingual data. For our WMT19 submission, different from zh-en, speculation technology is not used. The BLEU scores on newsdev19 are shown in Table TABREF41. Our final submissions for WMT19 achieves 20.1 BLEU points for English$\rightarrow $Lithuanian translation (ranked in the first place) and 35.6 for Lithuanian$\rightarrow $English translation (ranked in the second place). <<</English@!START@$\leftrightarrow $@!END@Lithuanian>>> <<<English@!START@$\leftrightarrow $@!END@Finnish>>> <<<Preprocess>>> We use the official English-Finnish data from WMT19, including both bilingual data and monolingual data. After de-duplicating, the bilingual data contains $8.8M$ aligned sentence pairs. We share the vocabulary for English and Finnish with $46k$ BPE units. We use the WMT17 and WMT18 English-Finnish test sets as two development datasets, and tune hyper-parameters based on the concatenation of them. <<</Preprocess>>> <<<Architecture search>>> We use NAO to search sequence-to-sequence architectures for English-Finnish translation tasks, as introduced in subsection SECREF12. We use PyTorch for our implementations. Due to time limitations, we are not targeting at finding better neural architectures than Transformer; instead we target at models with comparable performance to Transformer, while providing diversity in the reranking process. The whole search process takes $2.5$ days on 16 P40 GPU cards and the discovered neural architecture, named as NAONet, is visualized in the Appendix. <<</Architecture search>>> <<<Train single models>>> The final system for English-Finnish is obtained through reranking of three strong model checkpoints, respectively from the Transformer model decoding from left to right (L2R Transformer), the Transformer model decoding from right to left (R2L Transformer) and NAONet decoding from left to right. 
All the models have 6-6 layers in encoder/decoder, and are obtained using the same process which is detailed as below. Step 1: Base models. Train two models $P_1(x|y)$ and $P_1(y|x)$ based on all the bilingual dataset ($8.8$M), respectively for English$\rightarrow $Finnish and Finnish$\rightarrow $English translations. Step 2: Back translation. Do the normal back translation BIBREF11, BIBREF1 using $P_1$ and $P_2$. Specifically we choose $10M$ monolingual English corpus, use $P_1(y|x)$ to generate the $10M$ pseudo bitext with beam search (beam size is set to 5), and mix it with the bilingual data to continue the training of $P_1(x|y)$. The ratio of mixing is set as $1:1$ through up-sampling. The model obtained through such a process is denoted as $P_2(x|y)$. The same process is applied to the opposite direction and the new model $P_2(y|x)$ is attained. Step 3: Back translation + knowledge distillation. In this step we generate more pseudo bitext by sequence level knowledge distillation BIBREF15 apart from using back translation. To be more concrete, as the first step, similar to Step 2, we choose $15M$ monolingual English and Finnish corpus, and generate the translations using $P_2(y|x)$ and $P_2(x|y)$, respectively. The resulting pseudo bitext is respectively denoted as $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$. Then we concatenate all the bilingual data, $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$, and use the whole corpus to train a new English-Finnish model from scratch. The attained model is denoted as $P_3(y|x)$. Step 4: Finetune. In this step we try a very simple data selection method to handle the domain mismatch problem in WMT. We remove all the bilingual corpus from Paracrawl which is generally assumed to be quite noisy BIBREF18 and use the remaining bilingual corpus ($4.5M$) to finetune $P_3(y|x)$ for one epoch. The resulting model is denoted as $P_4(y|x)$ which is set as the final model checkpoint. To investigate the effects of the four steps, we record the resulting BLEU scores on WMT17 and WMT18 test sets in Table TABREF46, taking the L2R Transformer model as an example. Furthermore, we report the final BLEU scores of the three models after the four steps in Table TABREF47. All the results are obtained via beam size 5 and length penalty $1.0$. The similar results for Finnish-English translation are shown in Table TABREF48. <<</Train single models>>> <<<Re-ranking>>> We use n-best re-ranking to deliver the final translation results using the three model checkpoints introduced in the last subsection. The beam size is set as 12. The weights of the three models, as well as the length penalty in generation, are tuned on the WMT-18 test sets. The results are shown in the second row of Table TABREF50. We would also like to investigate what is the influence of the NAONet to the re-ranking results. To achieve that, in re-ranking we replace NAONet with another model from L2R Transformer, trained with the same process in subsection SECREF45 with the difference only in random seeds, while maintain the other two models unchanged. The results are illustrated in the last row of Table TABREF50. From the comparison of the two rows in Table TABREF50, we can see the new architecture NAONet discovered via NAO brings more diversity in the ranking, thus leading to better results. We also report the similar results for Finnish-English tasks in Table TABREF51. 
Our systems achieve $27.4$ for and $31.9$ for English$\rightarrow $Finnish and Finnish$\rightarrow $English, ranked in the first place and second place (by teams), respectively. <<</Re-ranking>>> <<</English@!START@$\leftrightarrow $@!END@Finnish>>> <<<Russian@!START@$\rightarrow $@!END@English>>> <<<Our system>>> Our final system for Russian$\rightarrow $English translation is a combination of Transformer network BIBREF9, back translation BIBREF11, knowledge distillation BIBREF15, soft contextual data augmentation BIBREF5, and model ensemble. We use Transformer_big as network architecture. We first train two models, English$\rightarrow $Russian and Russian$\rightarrow $English respectively, on bilingual pairs as baseline model. Based on these two models, we perform back translation and knowledge distillation on monolingual data, generating 40M synthetic data. Combining both bilingual and synthetic data, we get a large train corpus with 56M pairs in total. We upsample the bilingual pairs and shuffle the combined corpus to ensure the balance between bilingual and synthetic data. Finally, we train the Russian$\rightarrow $English model from scratch. During the training, we also use soft contextual data augmentation to further enhance training. Following the above procedures, 5 different models are trained and ensembled for final submission. <<</Our system>>> <<</Russian@!START@$\rightarrow $@!END@English>>> <<<English@!START@$\rightarrow $@!END@Kazakh>>> <<<Result>>> Our final submission achieves 10.6 BLEU score, ranked second by teams in the leaderboard. <<</Result>>> <<</English@!START@$\rightarrow $@!END@Kazakh>>> <<</Submitted Systems>>> <<<Conclusions>>> This paper describes Microsoft Research Asia's neural machine translation systems for the WMT19 shared news translation tasks. Our systems are built on Transformer, back translation and knowledge distillation, enhanced with our recently proposed techniques: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA). Due to time and GPU limitations, we only apply each technique to a subset of translation tasks. We believe combining them together will further improve the translation accuracy and will conduct experiments in the future. Furthermore, some other techniques such as deliberation learning BIBREF20, adversarial learning BIBREF21, and reinforcement learning BIBREF22, BIBREF23 could also hep and are worthy of exploration. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Conclusions, Our Techniques" ], "type": "disordered_section" }
2002.12328
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Few-shot Natural Language Generation for Task-Oriented Dialog <<<Abstract>>> As a crucial component in task-oriented dialog systems, the Natural Language Generation (NLG) module converts a dialog act represented in a semantic form into a response in natural language. The success of traditional template-based or statistical models typically relies on heavily annotated data, which is infeasible for new domains. Therefore, it is pivotal for an NLG system to generalize well with limited labelled data in real applications. To this end, we present FewShotWoz, the first NLG benchmark to simulate the few-shot learning setting in task-oriented dialog systems. Further, we develop the SC-GPT model. It is pre-trained on a large set of annotated NLG corpus to acquire the controllable generation ability, and fine-tuned with only a few domain-specific labels to adapt to new domains. Experiments on FewShotWoz and the large Multi-Domain-WOZ datasets show that the proposed SC-GPT significantly outperforms existing methods, measured by various automatic metrics and human evaluations. <<</Abstract>>> <<<Introduction>>> Task-oriented dialog systems are becoming increasingly popular, as they can assist users in various daily activities such as ticket booking and restaurant reservations. In a typical task-oriented dialog system, the Natural Language Generation (NLG) module plays a crucial role: it converts a system action (often specified in a semantic form selected by a dialog policy) into a final response in natural language. Hence, the response should be adequate to represent semantic dialog actions, and fluent to engage users' attention. As the ultimate interface to interacts with users, NLG plays a significant impact on the users' experience. Existing methods for NLG can be broadly summarized into two major categories. $({1})$ Template-based methods require domain experts to handcraft templates for each domain, and the system fills in slot-values afterward BIBREF0, BIBREF1. Thus, the produced responses are often adequate to contain the required semantic information, but not always fluent and nature, hurting users' experiences. $({2})$ Statistical language models such as neural networks BIBREF2 learn to generate fluent responses via training from labelled corpus. One canonical model is semantically conditioned LSTM (SC-LSTM) BIBREF3, which encodes dialog acts with one-hot representations and uses it as an extra feature to inform the sentence generation process. Despite its good performance on simple domains, it requires large amounts of domain-specific annotated data which is not available for many domains in real-world applications. Even worse, this renders severe scalability issues when the number of possible combinations of dialog acts grows exponentially with the number of slots in more complex domains. We revisit the current research benchmarks for NLG, and notice that each dialog domain is extensively labelled to favor model training. However, this is in contrast to the real-world application scenarios, where only very limited amounts of labelled data are available for new domains. 
To simulate such a few-shot learning setting, we have developed a new benchmark dataset, called FewShotWOZ, based on the MultiWOZ BIBREF4 and Cambridge NLG datasets BIBREF5. FewShotWOZ consists of dialog utterances from 7 domains. For each domain, we provide less than 50 labeled utterances for fine-tuning. We believe that FewShotWOZ can better inspire research to address the challenge of learning data-hungry statistical models with very limited amounts of labelled data in real-world scenarios. To deal with the challenge of few-shot learning, we develop the SC-GPT model. SC-GPT is a multi-layer Transformer neural language model, trained in three steps: $({1})$ Pre-trained on plain text, similar to GPT-2 BIBREF6; $({2})$ Continuously pre-trained on large amounts of dialog-act labeled utterances corpora to acquire the ability of controllable generation; $({3})$ Fine-tuned for a target domain using very limited amounts of domain labels. Unlike GPT-2, SC-GPT generates semantically controlled responses that are conditioned on the given semantic form, similar to SC-LSTM but requiring much less domain labels to generalize to new domains. In summary, our key contributions are three-fold: A new benchmark FewShotWOZ is introduced to simulate the few-shot adaptation setting where only a handful of training data from each domain is available. We propose a new model SC-GPT. To our best knowledge, this work is the first study of exploiting state-of-the-art pre-trained language models for NLG in task-oriented dialog systems. On the MultiWOZ dataset, SC-GPT creates a new SOTA, outperforming previous models by 4 points in BLEU. On FewShotWOZ, SC-GPT outperforms several strong baselines such as SC-LSTM and HDSA BIBREF7, showing that SC-GPT adapts to new domain much more effectively, requiring much smaller amounts of in-domain labels. We release our code and dataset for reproducible research. <<</Introduction>>> <<<Background>>> A typical task-oriented spoken dialog system uses a pipeline architecture, as shown in Figure FIGREF2 (a), where each dialog turn is processed using a four-step procedure. $({1})$ Transcriptions of user’s input are first passed to the natural language understanding (NLU) module, where the user’s intention and other key information are extracted. $({2})$ This information is then formatted as the input to dialog state tracking (DST), which maintains the current state of the dialog. $({3})$ Outputs of DST are passed to the dialog policy module, which produces a dialog act based on the facts or entities retrieved from external resources (such as a database or a knowledge base). $({4})$ The dialog act emitted by the dialog policy module serves as the input to the NLG, through which a system response in natural language is generated. In this paper, we focus on the NLG component of task-oriented dialog systems, how to produce natural language responses conditioned on dialog acts. Specifically, dialog act $$ is defined as the combination of intent $$ and slot-value pairs $\lbrace (s_i, v_i)\rbrace ^P_{i=1}$: where $P$ is the number of pairs, which varies in different dialog acts. Intents are usually used to distinguish different types of system actions. Typical examples include inform, request, confirm, select Slot-value pairs indicate the category and content of the information to express in the utterance, respectively. The goal of NLG is to translate $$ into a natural language response $= [x_1, \cdots , x_T]$, where $T$ is the sequence length. 
In Figure FIGREF2 (b), we show an example of the dialog act: $\textit {\texttt {confirm}~(name=Hilton, area=center)}$, and the corresponding natural language response is “Let me confirm that you are searching for Hilton in the center area”. <<</Background>>> <<<Semantically Conditioned GPT>>> We tackle this generation problem using conditional neural language models. Given training data of $N$ samples $=\lbrace (_n, _n)\rbrace _{n=1}^{N}$, our goal is to build a statistical model parameterized by $$ to characterize $p_{}(| )$. To leverage the sequential structure of response, one may further decompose the joint probability of $$ using the chain rule, casting an auto-regressive generation process as follows: where $x_{<t}$ indicates all tokens before $t$. Learning $$ is performed via maximizing the log-likelihood (MLE) of the conditional probabilities in (DISPLAY_FORM13) over the entire training dataset: In this paper, we employ the Transformers BIBREF8 to parameterize the conditionals in (DISPLAY_FORM13). To enable strong generalization and controllable ability for the learned model, we propose the following three-stage procedure as the training recipe. <<<Massive Plain Language Pre-training.>>> Large models trained on massive training corpus usually generalize better to new domains. Inspired by this, we inherit the GPT-2 architecture BIBREF6 as the backbone language model. GPT-2 is an auto-regressive language model that leverages 12-24 layers of masked, multi-head self-attention Transformers. GPT-2 is pre-trained on extremely massive text data OpenWebText BIBREF6. It has demonstrated superior performance on characterizing human language data distribution and knowledge transfer. Given text prompts, GPT-2 can often generate realistic sentences. <<</Massive Plain Language Pre-training.>>> <<<Dialog-Act Controlled Pre-training.>>> To enable the guidance of dialog act in response generation, we propose to continuously pre-train the GPT-2 model on large amounts of annotated (dialog act, response) pairs. The pre-training dataset includes annotated training pairs from Schema-Guided Dialog corpus, MultiWOZ corpus, Frame corpus, and Facebook Multilingual Dialog Corpus. The total size of the pre-training corpus is around 400k examples. We firstly pre-process dialog act $$ into a sequence of control codes using the following format: Meanwhile, the output sequence $^{\prime }$ is pre-processed via appending $$ with a special start token [BOS] and an end token [EOS]. Finally, the sequentialized dialog act $^{\prime }$ is concatenated with its augmented response $^{\prime }$, and then fed into GPT-2. During training, the prediction loss is only computed for $^{\prime }$, and $^{\prime }$ provides the attended conditions. Since the dialog act represents the semantics of the generated sentences, we follow the naming convention of SC-LSTM, and term our model as Semantically Conditioned Generative Pre-training (SC-GPT). The overall architecture of SC-GPT is illustrated in Figure FIGREF12. <<</Dialog-Act Controlled Pre-training.>>> <<<Fine-tuning.>>> For a new domain, a dialog act usually contains novel intents or slot-value pairs, and annotated training samples are often limited. We fine-tune SC-GPT on limited amounts of domain-specific labels for adaptation. The fine-tuning follows the same procedure of dialog-act controlled pre-training, as described above, but uses only a few dozens of domain labels. It is worth noticing that the above recipe has several favorable properties: Flexibility. 
SC-GPT operates on a sequence of tokens without delexicalization, which means that SC-GPT does not assume a fixed one-hot or tree-structured dialog act representation vectors. Hence, it has great flexibility in extending to novel dialog acts. Controllability. In contrast to GPT-2 that generates natural sentences without high-level semantic guidance, SC-GPT can generate sentences with adequate intent and slot-value information and maintain its fluency. Generalizability. SC-GPT is able to generalize significantly better than SC-LSTM, due to the pre-training on massive plain text corpora and annotated dialog datasets. <<</Fine-tuning.>>> <<</Semantically Conditioned GPT>>> <<<Dataset: FewShotWOZ>>> <<<Revisiting NLG Benchmarks.>>> The three commonly used NLG datasets in developing and evaluating task-oriented dialog systems are E2E NLG BIBREF9 BAGEL BIBREF10 and RNNLG BIBREF5, as summarized in Table TABREF23. We observe two issues from their shared statistics: $({1})$ All the datasets contain a large number of labelled training samples for each domain, ranging from hundreds to tens of thousands. However, the cost of labeling is high in practice, labeling 50 utterances is 5 hours per domain. Creating such an extensively annotated dataset for each new domain is prohibitively expensive. $({2})$ The percentage of distinct delexicalised dialog acts between training and testing data is quite small. For example, the delexicalised dialog acts in testing is 100% covered by the training set for the E2E NLG dataset. It renders difficulties in evaluating the model's generalization ability for new domains. <<</Revisiting NLG Benchmarks.>>> <<<FewShotWOZ.>>> To build a setting for more pragmatic NLG scenarios, we introduce a new dataset FewShotWOZ to better reflect real application complexity, and encourage the community to develop algorithms that are capable of generalizing with only a few domain-specific labels for each (new) domain. The dataset statistics are shown in the last column of Table TABREF23. We see that FewShotWOZ is different from the other datasets in three aspects: $({1})$ More domains. FewShotWOZ contains seven domains in total, which is larger than any existing NLG datasets. $({2})$ Less training instances. Importantly, FewShotWOZ has a much smaller number of training instances per domain, aiming to evaluate the few-shot learning ability. $({3})$ Lower training/testing overlap. FewShotWOZ has only 8.82% overlap, significantly smaller than the other datasets, which amount to more than 90% overlap. The average number of intents per instance in $\mathtt {Attraction}$/ $\mathtt {Taxi}$/ $\mathtt {Train}$ domain is 2, 1.33, and 2.05, respectively. In contrast, there is only one intent for each example in the other datasets. The NLG task defined on FewShotWOZ requires the models to learn to generalize over new compositions of intents. The details of FewShotWOZ is shown in Table TABREF26. <<</FewShotWOZ.>>> <<<Collection Protocols.>>> We construct FewShotWOZ via re-organizing data samples from RNNLG and MultiWOZ datasets BIBREF4. For each domain in RNNLG, we first group utterances according to their delexicalised dialog acts, and keep only one utterance as the target sentence. To ensure diversity, we consider three domains from MultiWOZ: $\mathtt {Attraction}$, $\mathtt {Taxi}$, and $\mathtt {Train}$. Since MultiWOZ is a cross-domain dataset, the dialog act of an utterance may exist in multiple domains. We choose to keep utterances whose dialog act appears only in one domain. 
Similar delexicalising processing is applied to ensure that each dialog act has only one target utterance. Finally, to simulate the few-shot learning in practice, we randomly sample 50 training examples for each domain, except the $\mathtt {Taxi}$ domain, which has 40 examples. <<</Collection Protocols.>>> <<</Dataset: FewShotWOZ>>> <<<Related Work>>> <<<Pre-trained Models.>>> Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to adapt to various downstream tasks. The closest line of research to ours are GPT-2 BIBREF6, CTRL BIBREF15 and Grover BIBREF17. GPT-2 first investigated missive Transformer-based auto-regressive language models with large-scale text data for pre-training. After fine-tuning, GPT-2 achieves drastic improvements on several generation tasks. One drawback of GPT-2 is the lack of high-level semantic controlling ability in language generation. To alleviate this issue, CTRL BIBREF15 was introduced to train the model based on pre-defined codes such as text style, content description, and task-specific behavior, meanwhile Grover BIBREF17 was proposed to generate news articles conditioned on authors, dates Although conceptually similar to our SC-GPT, CTRL and Grover cannot be readily applied to NLG in task-oriented dialog systems, as the conditioning codes are quite different. Another controllable generation work for GPT-2 is PPLM BIBREF18, which provides a decoding scheme to guide the generation process using key-words or classifiers, without re-training the model. In this paper, we focus on pre-training an NLG model conditioned on finer-grained semantic dialog acts, which are more desirable for dialog systems. <<</Pre-trained Models.>>> <<<Dialog.>>> Various dialog systems have been developed BIBREF2, including task-oriented dialog systems such as Rasa, Microsoft Bot Framework, and Conversational Learner, and chit-chat systems such as XiaoIce BIBREF19, DialoGPT BIBREF20, Meena BIBREF21. In this paper, we focus on task-oriented systems, particularly the NLG module. With the blooming of deep learning, neural sequential models have shown powerful capability and flexibility in NLG. Extensive efforts have been made, including new architecture choices such as RNNs BIBREF22, attention RNNs BIBREF23, SC-LSTM BIBREF3 and its variants BIBREF24, BIBREF25, as well as learning objectives BIBREF26. However, they all require large amounts of annotated data to reach satisfactory performance. A more realistic scenario is to require much less labeling and improve the sample efficiency of models, This is especially important when deploying the models to new domains, where dialog acts need to be labelled from scratch. Our paper aims to formally set up such a research scenario by proposing a new dataset FewShotWOZ, and a new model SC-GPT. <<</Dialog.>>> <<</Related Work>>> <<<Experiments>>> In this section, we evaluate the proposed SC-GPT on the FewShotWOZ and MultiWOZ datasets to answer two research questions: $({1})$ Is SC-GPT an effective model for strong generalization and controllability in dialog response generation? $({2})$ Does FewShotWOZ meet the goal of effectively evaluating the generalization of NLG models in the few-shot learning setting? 
<<<Experimental Setup>>> <<<Implementation details.>>> The model was built upon Huggingface Pytorch Transformer BIBREF27. We use GPT2-Medium with 345M parameters as the initial checkpoint, and byte pair encodings BIBREF28 for the tokenization. Linear rate scheduler with start rate as 5e-5 was used for both pre-training and fine-tuning. Adam BIBREF29 with weight decay was used to optimize the parameters. For pre-training, the model was trained with a mini-batch of 8 on an 8 Nvidia V100 machine until observing no significant progress on validation loss or up to 20 epochs, whichever is earlier. For fine-tuning on FewShotWOZ, models were trained on each domain separately with five epochs. <<</Implementation details.>>> <<<Automatic metrics.>>> Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. BLEU score evaluates how natural the generated utterance is compared with human readers. ERR measures the exact matching of the slot tokens in the candidate utterances. $\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$, $q$ is the number of missing and redundant slots in the given realisation. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output. <<</Automatic metrics.>>> <<<Human evaluation.>>> We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as a human does. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected in total of 5800 judges. <<</Human evaluation.>>> <<<Baselines.>>> We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance than SC-LSTM. <<</Baselines.>>> <<</Experimental Setup>>> <<<FewShotWOZ>>> Table TABREF33 reports the automatic evaluation performance of different methods on FewShotWOZ. SC-LSTM fails to learn the generation effectively in this few-shot learning setting. The generated utterances are poor in quality and suffer from inaccurate slot rendering. In addition, GPT-2 performs consistently better than SC-LSTM in all the domains. It reveals the feasibility of using a pre-trained language model for NLG, though only limited annotations are available for fine-tuning. Importantly, SC-GPT performs significantly better than GPT and SC-LSTM in terms of both BLEU and ERR. In all the domains, SC-GPT reduces the ERR to a significantly lower level, revealing its strong controllability power. 
This verifies the importance of pre-training on large annotated dialog data, as SC-GPT learns how to generate utterances specified by the dialog acts accurately. Table TABREF34 shows the human assessment on FewShotWOZ. The results exhibit the same trend with automatic evaluation. SC-GPT outperforms GPT-2 and SC-LSTM significantly in both metrics, SC-GPT can better control the generation to convey information in the dialog act while maintaining good fluency. Note that the gap between SC-GPT and human annotation is still large, indicating that the proposed FewShotWOZ exhibits an under-explored research area, and provides a large space to encourage future research for improvement. <<</FewShotWOZ>>> <<<MultiWOZ>>> The results on MultiWOZ are shown in Table TABREF42. Following BIBREF7, Entity F1 BIBREF30 is used to evaluate the entity coverage accuracy (including all slot values, days, numbers, and reference, ). Again, SC-GPT achieves the best performance on BLEU score. Note that GPT-2 performs similarly with SC-GPT on the full MultiWOZ dataset, this is because MultiWOZ contains 57k utterances, which is large enough for GPT-2 to achieve good performance. The results also confirm that with enough annotated data, conditional language model formulation performs significantly better than HDSA, a strong competitor that leverages graph/tree-structure information to encode dialog acts. To study how SC-GPT performs with different training data sizes. We further conduct experiments with varying percentages of training data on MultiWOZ, ranging from 0.1% (50 examples) to 50%. As shown in Table TABREF43, the observations are consistent with FewShotWOZ. SC-GPT performs consistently better than GPT-2, HDSA, and SC-LSTM for a wide range of dataset sizes, and the improvement is more substantial when the fewer numbers of in-domain labels are used for fine-tuning. Table TABREF44 shows the human assessment results on MultiWOZ. The results are consistent with the automatic evaluation. It is interesting to see that $({1})$ the gap between the new state-of-the-art method (SC-GPT ) and human performance on FewShotWOZ (as shown in Table TABREF34) is much larger than that on MultiWOZ; $({2})$ the human rating on the naturalness of SC-GPT is even higher than humans on MultiWOZ, while there is a visible gap on FewShotWOZ. These results demonstrate that FewShotWOZ presents a challenging few-shot learning setting, SG-GPT serves as a simple and strong baseline in this setting, and the combined provides a platform for researchers to develop NLG models that are able to generalize to new domains and generate semantically conditioned and fluent responses. <<</MultiWOZ>>> <<<Analysis>>> We perform detailed analysis to investigate SG-GPT's flexibility, controllability and generalizability. The test set is split into two subsets - seen and unseen. If a dialog act of an example appears in the training set, the example is marked as seen; otherwise, it is marked as unseen. Table TABREF48 compares different models on the seen and unseen subsets in the $\mathtt {restaurant}$ domain. SC-GPT yields higher BLEU and lower ERR, and the improvement is more significant on the unseen set. For example, SC-GPT reduces ERR to 4.96, an order of magnitude lower than SC-LSTM and only 1/3 of GPT-2. This demonstrates that SC-GPT generalizes well to novel dialog acts, and is able to precisely ground in them to compose fluent responses. 
This is further confirmed by the quantitative comparison in Table TABREF45, where we compare the generated utterance examples of different models. While the baseline methods prone to over-generate or miss important slots, SC-GPT can successfully generate fluent natural language utterances that share precise semantic conditions with the ground-truth references. We further simulate the process when deploying SC-GPT for a new domain, using the examples provided in the RASA dialog toolkit . We first fine-tune SC-GPT using a few training examples (only 16 instances in this new domain), and then generate utterances based on novel dialog acts that are unseen in training data, shown in Table TABREF49. In practice, it is desirable for an NLG system to deal with an extending domain whose dialog acts change dynamically. We simulate the setting by editing the original input dialog acts, such as inserting or deleting a slot, or substituting a slot value. Since SC-LSTM is infeasible in the setting of an extending domain, we compare SC-GPT with GPT-2. Results show that SC-GPT produces better utterances than GPT-2. SC-GPT can generate reasonably good natural language responses with different combinations of editing operations, showing its high flexibility to generalize to new dialog acts with very limited training data, and produce controllable responses. <<</Analysis>>> <<</Experiments>>> <<<Conclusion and Future Work>>> In this paper, we have made two major contributions towards developing a more pragmatic NLG module for task-oriented dialog systems: $({1})$ A new benchmark FewShotWOZ is introduced to simulate the few-shot learning scenarios with scarce labelled data in real-world applications. $({2})$ A new model SC-GPT is proposed to endow the NLG module with strong semantically controlling and generalization ability. Empirical results on both FewShotWOZ and MultiWOZ show that SC-GPT achieves the best overall performance in both automatic and human evaluations. There are two interesting directions for future work. The first is to design mechanisms to generate more interpersonal responses which are proven to help improve user experiences BIBREF31, BIBREF19. The other is to generalize the generative pre-training idea to all four modules in the dialog system pipeline for end-to-end training. Since these four modules process information in order, one may organize their input/output as segments, and pre-train a segment-level auto-regressive model. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Related Work, Abstract" ], "type": "disordered_section" }
2002.12328
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Few-shot Natural Language Generation for Task-Oriented Dialog <<<Abstract>>> As a crucial component in task-oriented dialog systems, the Natural Language Generation (NLG) module converts a dialog act represented in a semantic form into a response in natural language. The success of traditional template-based or statistical models typically relies on heavily annotated data, which is infeasible for new domains. Therefore, it is pivotal for an NLG system to generalize well with limited labelled data in real applications. To this end, we present FewShotWoz, the first NLG benchmark to simulate the few-shot learning setting in task-oriented dialog systems. Further, we develop the SC-GPT model. It is pre-trained on a large set of annotated NLG corpus to acquire the controllable generation ability, and fine-tuned with only a few domain-specific labels to adapt to new domains. Experiments on FewShotWoz and the large Multi-Domain-WOZ datasets show that the proposed SC-GPT significantly outperforms existing methods, measured by various automatic metrics and human evaluations. <<</Abstract>>> <<<Introduction>>> Task-oriented dialog systems are becoming increasingly popular, as they can assist users in various daily activities such as ticket booking and restaurant reservations. In a typical task-oriented dialog system, the Natural Language Generation (NLG) module plays a crucial role: it converts a system action (often specified in a semantic form selected by a dialog policy) into a final response in natural language. Hence, the response should be adequate to represent semantic dialog actions, and fluent to engage users' attention. As the ultimate interface to interacts with users, NLG plays a significant impact on the users' experience. Existing methods for NLG can be broadly summarized into two major categories. $({1})$ Template-based methods require domain experts to handcraft templates for each domain, and the system fills in slot-values afterward BIBREF0, BIBREF1. Thus, the produced responses are often adequate to contain the required semantic information, but not always fluent and nature, hurting users' experiences. $({2})$ Statistical language models such as neural networks BIBREF2 learn to generate fluent responses via training from labelled corpus. One canonical model is semantically conditioned LSTM (SC-LSTM) BIBREF3, which encodes dialog acts with one-hot representations and uses it as an extra feature to inform the sentence generation process. Despite its good performance on simple domains, it requires large amounts of domain-specific annotated data which is not available for many domains in real-world applications. Even worse, this renders severe scalability issues when the number of possible combinations of dialog acts grows exponentially with the number of slots in more complex domains. We revisit the current research benchmarks for NLG, and notice that each dialog domain is extensively labelled to favor model training. However, this is in contrast to the real-world application scenarios, where only very limited amounts of labelled data are available for new domains. 
To simulate such a few-shot learning setting, we have developed a new benchmark dataset, called FewShotWOZ, based on the MultiWOZ BIBREF4 and Cambridge NLG datasets BIBREF5. FewShotWOZ consists of dialog utterances from 7 domains. For each domain, we provide less than 50 labeled utterances for fine-tuning. We believe that FewShotWOZ can better inspire research to address the challenge of learning data-hungry statistical models with very limited amounts of labelled data in real-world scenarios. To deal with the challenge of few-shot learning, we develop the SC-GPT model. SC-GPT is a multi-layer Transformer neural language model, trained in three steps: $({1})$ Pre-trained on plain text, similar to GPT-2 BIBREF6; $({2})$ Continuously pre-trained on large amounts of dialog-act labeled utterances corpora to acquire the ability of controllable generation; $({3})$ Fine-tuned for a target domain using very limited amounts of domain labels. Unlike GPT-2, SC-GPT generates semantically controlled responses that are conditioned on the given semantic form, similar to SC-LSTM but requiring much less domain labels to generalize to new domains. In summary, our key contributions are three-fold: A new benchmark FewShotWOZ is introduced to simulate the few-shot adaptation setting where only a handful of training data from each domain is available. We propose a new model SC-GPT. To our best knowledge, this work is the first study of exploiting state-of-the-art pre-trained language models for NLG in task-oriented dialog systems. On the MultiWOZ dataset, SC-GPT creates a new SOTA, outperforming previous models by 4 points in BLEU. On FewShotWOZ, SC-GPT outperforms several strong baselines such as SC-LSTM and HDSA BIBREF7, showing that SC-GPT adapts to new domain much more effectively, requiring much smaller amounts of in-domain labels. We release our code and dataset for reproducible research. <<</Introduction>>> <<<Background>>> A typical task-oriented spoken dialog system uses a pipeline architecture, as shown in Figure FIGREF2 (a), where each dialog turn is processed using a four-step procedure. $({1})$ Transcriptions of user’s input are first passed to the natural language understanding (NLU) module, where the user’s intention and other key information are extracted. $({2})$ This information is then formatted as the input to dialog state tracking (DST), which maintains the current state of the dialog. $({3})$ Outputs of DST are passed to the dialog policy module, which produces a dialog act based on the facts or entities retrieved from external resources (such as a database or a knowledge base). $({4})$ The dialog act emitted by the dialog policy module serves as the input to the NLG, through which a system response in natural language is generated. In this paper, we focus on the NLG component of task-oriented dialog systems, how to produce natural language responses conditioned on dialog acts. Specifically, dialog act $$ is defined as the combination of intent $$ and slot-value pairs $\lbrace (s_i, v_i)\rbrace ^P_{i=1}$: where $P$ is the number of pairs, which varies in different dialog acts. Intents are usually used to distinguish different types of system actions. Typical examples include inform, request, confirm, select Slot-value pairs indicate the category and content of the information to express in the utterance, respectively. The goal of NLG is to translate $$ into a natural language response $= [x_1, \cdots , x_T]$, where $T$ is the sequence length. 
In Figure FIGREF2 (b), we show an example of the dialog act: $\textit {\texttt {confirm}~(name=Hilton, area=center)}$, and the corresponding natural language response is “Let me confirm that you are searching for Hilton in the center area”. <<</Background>>> <<<Semantically Conditioned GPT>>> We tackle this generation problem using conditional neural language models. Given training data of $N$ samples $=\lbrace (_n, _n)\rbrace _{n=1}^{N}$, our goal is to build a statistical model parameterized by $$ to characterize $p_{}(| )$. To leverage the sequential structure of response, one may further decompose the joint probability of $$ using the chain rule, casting an auto-regressive generation process as follows: where $x_{<t}$ indicates all tokens before $t$. Learning $$ is performed via maximizing the log-likelihood (MLE) of the conditional probabilities in (DISPLAY_FORM13) over the entire training dataset: In this paper, we employ the Transformers BIBREF8 to parameterize the conditionals in (DISPLAY_FORM13). To enable strong generalization and controllable ability for the learned model, we propose the following three-stage procedure as the training recipe. <<<Massive Plain Language Pre-training.>>> Large models trained on massive training corpus usually generalize better to new domains. Inspired by this, we inherit the GPT-2 architecture BIBREF6 as the backbone language model. GPT-2 is an auto-regressive language model that leverages 12-24 layers of masked, multi-head self-attention Transformers. GPT-2 is pre-trained on extremely massive text data OpenWebText BIBREF6. It has demonstrated superior performance on characterizing human language data distribution and knowledge transfer. Given text prompts, GPT-2 can often generate realistic sentences. <<</Massive Plain Language Pre-training.>>> <<<Dialog-Act Controlled Pre-training.>>> To enable the guidance of dialog act in response generation, we propose to continuously pre-train the GPT-2 model on large amounts of annotated (dialog act, response) pairs. The pre-training dataset includes annotated training pairs from Schema-Guided Dialog corpus, MultiWOZ corpus, Frame corpus, and Facebook Multilingual Dialog Corpus. The total size of the pre-training corpus is around 400k examples. We firstly pre-process dialog act $$ into a sequence of control codes using the following format: Meanwhile, the output sequence $^{\prime }$ is pre-processed via appending $$ with a special start token [BOS] and an end token [EOS]. Finally, the sequentialized dialog act $^{\prime }$ is concatenated with its augmented response $^{\prime }$, and then fed into GPT-2. During training, the prediction loss is only computed for $^{\prime }$, and $^{\prime }$ provides the attended conditions. Since the dialog act represents the semantics of the generated sentences, we follow the naming convention of SC-LSTM, and term our model as Semantically Conditioned Generative Pre-training (SC-GPT). The overall architecture of SC-GPT is illustrated in Figure FIGREF12. <<</Dialog-Act Controlled Pre-training.>>> <<<Fine-tuning.>>> For a new domain, a dialog act usually contains novel intents or slot-value pairs, and annotated training samples are often limited. We fine-tune SC-GPT on limited amounts of domain-specific labels for adaptation. The fine-tuning follows the same procedure of dialog-act controlled pre-training, as described above, but uses only a few dozens of domain labels. It is worth noticing that the above recipe has several favorable properties: Flexibility. 
SC-GPT operates on a sequence of tokens without delexicalization, which means that SC-GPT does not assume a fixed one-hot or tree-structured dialog act representation vectors. Hence, it has great flexibility in extending to novel dialog acts. Controllability. In contrast to GPT-2 that generates natural sentences without high-level semantic guidance, SC-GPT can generate sentences with adequate intent and slot-value information and maintain its fluency. Generalizability. SC-GPT is able to generalize significantly better than SC-LSTM, due to the pre-training on massive plain text corpora and annotated dialog datasets. <<</Fine-tuning.>>> <<</Semantically Conditioned GPT>>> <<<Dataset: FewShotWOZ>>> <<<Revisiting NLG Benchmarks.>>> The three commonly used NLG datasets in developing and evaluating task-oriented dialog systems are E2E NLG BIBREF9 BAGEL BIBREF10 and RNNLG BIBREF5, as summarized in Table TABREF23. We observe two issues from their shared statistics: $({1})$ All the datasets contain a large number of labelled training samples for each domain, ranging from hundreds to tens of thousands. However, the cost of labeling is high in practice, labeling 50 utterances is 5 hours per domain. Creating such an extensively annotated dataset for each new domain is prohibitively expensive. $({2})$ The percentage of distinct delexicalised dialog acts between training and testing data is quite small. For example, the delexicalised dialog acts in testing is 100% covered by the training set for the E2E NLG dataset. It renders difficulties in evaluating the model's generalization ability for new domains. <<</Revisiting NLG Benchmarks.>>> <<<FewShotWOZ.>>> To build a setting for more pragmatic NLG scenarios, we introduce a new dataset FewShotWOZ to better reflect real application complexity, and encourage the community to develop algorithms that are capable of generalizing with only a few domain-specific labels for each (new) domain. The dataset statistics are shown in the last column of Table TABREF23. We see that FewShotWOZ is different from the other datasets in three aspects: $({1})$ More domains. FewShotWOZ contains seven domains in total, which is larger than any existing NLG datasets. $({2})$ Less training instances. Importantly, FewShotWOZ has a much smaller number of training instances per domain, aiming to evaluate the few-shot learning ability. $({3})$ Lower training/testing overlap. FewShotWOZ has only 8.82% overlap, significantly smaller than the other datasets, which amount to more than 90% overlap. The average number of intents per instance in $\mathtt {Attraction}$/ $\mathtt {Taxi}$/ $\mathtt {Train}$ domain is 2, 1.33, and 2.05, respectively. In contrast, there is only one intent for each example in the other datasets. The NLG task defined on FewShotWOZ requires the models to learn to generalize over new compositions of intents. The details of FewShotWOZ is shown in Table TABREF26. <<</FewShotWOZ.>>> <<<Collection Protocols.>>> We construct FewShotWOZ via re-organizing data samples from RNNLG and MultiWOZ datasets BIBREF4. For each domain in RNNLG, we first group utterances according to their delexicalised dialog acts, and keep only one utterance as the target sentence. To ensure diversity, we consider three domains from MultiWOZ: $\mathtt {Attraction}$, $\mathtt {Taxi}$, and $\mathtt {Train}$. Since MultiWOZ is a cross-domain dataset, the dialog act of an utterance may exist in multiple domains. We choose to keep utterances whose dialog act appears only in one domain. 
Similar delexicalising processing is applied to ensure that each dialog act has only one target utterance. Finally, to simulate the few-shot learning in practice, we randomly sample 50 training examples for each domain, except the $\mathtt {Taxi}$ domain, which has 40 examples. <<</Collection Protocols.>>> <<</Dataset: FewShotWOZ>>> <<<Related Work>>> <<<Pre-trained Models.>>> Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to adapt to various downstream tasks. The closest line of research to ours are GPT-2 BIBREF6, CTRL BIBREF15 and Grover BIBREF17. GPT-2 first investigated missive Transformer-based auto-regressive language models with large-scale text data for pre-training. After fine-tuning, GPT-2 achieves drastic improvements on several generation tasks. One drawback of GPT-2 is the lack of high-level semantic controlling ability in language generation. To alleviate this issue, CTRL BIBREF15 was introduced to train the model based on pre-defined codes such as text style, content description, and task-specific behavior, meanwhile Grover BIBREF17 was proposed to generate news articles conditioned on authors, dates Although conceptually similar to our SC-GPT, CTRL and Grover cannot be readily applied to NLG in task-oriented dialog systems, as the conditioning codes are quite different. Another controllable generation work for GPT-2 is PPLM BIBREF18, which provides a decoding scheme to guide the generation process using key-words or classifiers, without re-training the model. In this paper, we focus on pre-training an NLG model conditioned on finer-grained semantic dialog acts, which are more desirable for dialog systems. <<</Pre-trained Models.>>> <<<Dialog.>>> Various dialog systems have been developed BIBREF2, including task-oriented dialog systems such as Rasa, Microsoft Bot Framework, and Conversational Learner, and chit-chat systems such as XiaoIce BIBREF19, DialoGPT BIBREF20, Meena BIBREF21. In this paper, we focus on task-oriented systems, particularly the NLG module. With the blooming of deep learning, neural sequential models have shown powerful capability and flexibility in NLG. Extensive efforts have been made, including new architecture choices such as RNNs BIBREF22, attention RNNs BIBREF23, SC-LSTM BIBREF3 and its variants BIBREF24, BIBREF25, as well as learning objectives BIBREF26. However, they all require large amounts of annotated data to reach satisfactory performance. A more realistic scenario is to require much less labeling and improve the sample efficiency of models, This is especially important when deploying the models to new domains, where dialog acts need to be labelled from scratch. Our paper aims to formally set up such a research scenario by proposing a new dataset FewShotWOZ, and a new model SC-GPT. <<</Dialog.>>> <<</Related Work>>> <<<Experiments>>> In this section, we evaluate the proposed SC-GPT on the FewShotWOZ and MultiWOZ datasets to answer two research questions: $({1})$ Is SC-GPT an effective model for strong generalization and controllability in dialog response generation? $({2})$ Does FewShotWOZ meet the goal of effectively evaluating the generalization of NLG models in the few-shot learning setting? 
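Before turning to the experimental setup, a minimal sketch may help make the training-pair construction of the Semantically Conditioned GPT section concrete: the dialog act is linearized into a control-code string and concatenated with the [BOS]-wrapped response, and the loss is computed on the response tokens only. The helper names, the exact control-code format, and the label-masking convention below are our assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code) of building one SC-GPT style
# training sequence from a (dialog act, response) pair.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")

def linearize_dialog_act(intent, slot_value_pairs):
    """Turn a dialog act into a control-code string, e.g.
    'confirm ( name = Hilton , area = center )' (assumed format)."""
    slots = " , ".join(f"{s} = {v}" for s, v in slot_value_pairs)
    return f"{intent} ( {slots} )"

def build_training_sequence(intent, slot_value_pairs, response):
    """Concatenate the sequentialized dialog act A' with the augmented
    response x' = [BOS] response [EOS]; only x' contributes to the loss."""
    da_ids = tokenizer.encode(linearize_dialog_act(intent, slot_value_pairs))
    resp_ids = tokenizer.encode(f" [BOS] {response} [EOS]")
    input_ids = da_ids + resp_ids
    # -100 is the usual "ignore" label in Huggingface LM losses (assuming the
    # dialog-act prefix is masked out this way).
    labels = [-100] * len(da_ids) + resp_ids
    return input_ids, labels

ids, labels = build_training_sequence(
    "confirm", [("name", "Hilton"), ("area", "center")],
    "Let me confirm that you are searching for Hilton in the center area")
```

Fine-tuning on a new domain would reuse the same construction over the few dozen labelled examples available for that domain.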
<<<Experimental Setup>>> <<<Implementation details.>>> The model was built upon Huggingface Pytorch Transformer BIBREF27. We use GPT2-Medium with 345M parameters as the initial checkpoint, and byte pair encodings BIBREF28 for the tokenization. Linear rate scheduler with start rate as 5e-5 was used for both pre-training and fine-tuning. Adam BIBREF29 with weight decay was used to optimize the parameters. For pre-training, the model was trained with a mini-batch of 8 on an 8 Nvidia V100 machine until observing no significant progress on validation loss or up to 20 epochs, whichever is earlier. For fine-tuning on FewShotWOZ, models were trained on each domain separately with five epochs. <<</Implementation details.>>> <<<Automatic metrics.>>> Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. BLEU score evaluates how natural the generated utterance is compared with human readers. ERR measures the exact matching of the slot tokens in the candidate utterances. $\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$, $q$ is the number of missing and redundant slots in the given realisation. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output. <<</Automatic metrics.>>> <<<Human evaluation.>>> We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as a human does. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected in total of 5800 judges. <<</Human evaluation.>>> <<<Baselines.>>> We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance than SC-LSTM. <<</Baselines.>>> <<</Experimental Setup>>> <<<FewShotWOZ>>> Table TABREF33 reports the automatic evaluation performance of different methods on FewShotWOZ. SC-LSTM fails to learn the generation effectively in this few-shot learning setting. The generated utterances are poor in quality and suffer from inaccurate slot rendering. In addition, GPT-2 performs consistently better than SC-LSTM in all the domains. It reveals the feasibility of using a pre-trained language model for NLG, though only limited annotations are available for fine-tuning. Importantly, SC-GPT performs significantly better than GPT and SC-LSTM in terms of both BLEU and ERR. In all the domains, SC-GPT reduces the ERR to a significantly lower level, revealing its strong controllability power. 
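To make the ERR metric and the five-candidate selection step easier to follow, here is a small sketch; the exact-string matching of slot values is a simplifying assumption, not necessarily how the authors count missing and redundant slots.

```python
# Illustrative sketch of the slot error rate ERR = (p + q) / M and of picking
# the best of five generated candidates (exact-match counting is an assumption).
def slot_error_rate(slot_values, utterance):
    """M = number of slots in the dialog act, p = missing slot values,
    q = redundant (extra) occurrences of slot values in the utterance."""
    M = len(slot_values)
    if M == 0:
        return 0.0
    counts = [utterance.count(v) for v in slot_values]
    p = sum(1 for c in counts if c == 0)        # missing slots
    q = sum(c - 1 for c in counts if c > 1)     # redundant repetitions
    return (p + q) / M

def pick_best(slot_values, candidates):
    """For each dialog act, generate five utterances and keep the lowest-ERR one."""
    return min(candidates, key=lambda u: slot_error_rate(slot_values, u))

candidates = [
    "Let me confirm that you are searching for Hilton.",
    "Let me confirm that you are searching for Hilton in the center area.",
]
best = pick_best(["Hilton", "center"], candidates)   # second candidate wins
```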
This verifies the importance of pre-training on large annotated dialog data, as SC-GPT learns how to generate utterances specified by the dialog acts accurately. Table TABREF34 shows the human assessment on FewShotWOZ. The results exhibit the same trend with automatic evaluation. SC-GPT outperforms GPT-2 and SC-LSTM significantly in both metrics, SC-GPT can better control the generation to convey information in the dialog act while maintaining good fluency. Note that the gap between SC-GPT and human annotation is still large, indicating that the proposed FewShotWOZ exhibits an under-explored research area, and provides a large space to encourage future research for improvement. <<</FewShotWOZ>>> <<<MultiWOZ>>> The results on MultiWOZ are shown in Table TABREF42. Following BIBREF7, Entity F1 BIBREF30 is used to evaluate the entity coverage accuracy (including all slot values, days, numbers, and reference, ). Again, SC-GPT achieves the best performance on BLEU score. Note that GPT-2 performs similarly with SC-GPT on the full MultiWOZ dataset, this is because MultiWOZ contains 57k utterances, which is large enough for GPT-2 to achieve good performance. The results also confirm that with enough annotated data, conditional language model formulation performs significantly better than HDSA, a strong competitor that leverages graph/tree-structure information to encode dialog acts. To study how SC-GPT performs with different training data sizes. We further conduct experiments with varying percentages of training data on MultiWOZ, ranging from 0.1% (50 examples) to 50%. As shown in Table TABREF43, the observations are consistent with FewShotWOZ. SC-GPT performs consistently better than GPT-2, HDSA, and SC-LSTM for a wide range of dataset sizes, and the improvement is more substantial when the fewer numbers of in-domain labels are used for fine-tuning. Table TABREF44 shows the human assessment results on MultiWOZ. The results are consistent with the automatic evaluation. It is interesting to see that $({1})$ the gap between the new state-of-the-art method (SC-GPT ) and human performance on FewShotWOZ (as shown in Table TABREF34) is much larger than that on MultiWOZ; $({2})$ the human rating on the naturalness of SC-GPT is even higher than humans on MultiWOZ, while there is a visible gap on FewShotWOZ. These results demonstrate that FewShotWOZ presents a challenging few-shot learning setting, SG-GPT serves as a simple and strong baseline in this setting, and the combined provides a platform for researchers to develop NLG models that are able to generalize to new domains and generate semantically conditioned and fluent responses. <<</MultiWOZ>>> <<<Analysis>>> We perform detailed analysis to investigate SG-GPT's flexibility, controllability and generalizability. The test set is split into two subsets - seen and unseen. If a dialog act of an example appears in the training set, the example is marked as seen; otherwise, it is marked as unseen. Table TABREF48 compares different models on the seen and unseen subsets in the $\mathtt {restaurant}$ domain. SC-GPT yields higher BLEU and lower ERR, and the improvement is more significant on the unseen set. For example, SC-GPT reduces ERR to 4.96, an order of magnitude lower than SC-LSTM and only 1/3 of GPT-2. This demonstrates that SC-GPT generalizes well to novel dialog acts, and is able to precisely ground in them to compose fluent responses. 
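The seen/unseen analysis described above is straightforward to reproduce; the sketch below assumes each example carries its delexicalised dialog act as a string key, which is our reading of the setup rather than the authors' exact tooling.

```python
# Illustrative sketch: a test example is "seen" if its delexicalised dialog act
# also appears in the training set, and "unseen" otherwise.
def split_seen_unseen(train_examples, test_examples, key="dialog_act"):
    train_das = {ex[key] for ex in train_examples}
    seen, unseen = [], []
    for ex in test_examples:
        (seen if ex[key] in train_das else unseen).append(ex)
    return seen, unseen

train = [{"dialog_act": "inform ( area = SLOT )", "text": "..."}]
test = [
    {"dialog_act": "inform ( area = SLOT )", "text": "..."},   # seen
    {"dialog_act": "request ( phone )", "text": "..."},        # unseen
]
seen, unseen = split_seen_unseen(train, test)
```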
This is further confirmed by the quantitative comparison in Table TABREF45, where we compare the generated utterance examples of different models. While the baseline methods prone to over-generate or miss important slots, SC-GPT can successfully generate fluent natural language utterances that share precise semantic conditions with the ground-truth references. We further simulate the process when deploying SC-GPT for a new domain, using the examples provided in the RASA dialog toolkit . We first fine-tune SC-GPT using a few training examples (only 16 instances in this new domain), and then generate utterances based on novel dialog acts that are unseen in training data, shown in Table TABREF49. In practice, it is desirable for an NLG system to deal with an extending domain whose dialog acts change dynamically. We simulate the setting by editing the original input dialog acts, such as inserting or deleting a slot, or substituting a slot value. Since SC-LSTM is infeasible in the setting of an extending domain, we compare SC-GPT with GPT-2. Results show that SC-GPT produces better utterances than GPT-2. SC-GPT can generate reasonably good natural language responses with different combinations of editing operations, showing its high flexibility to generalize to new dialog acts with very limited training data, and produce controllable responses. <<</Analysis>>> <<</Experiments>>> <<<Conclusion and Future Work>>> In this paper, we have made two major contributions towards developing a more pragmatic NLG module for task-oriented dialog systems: $({1})$ A new benchmark FewShotWOZ is introduced to simulate the few-shot learning scenarios with scarce labelled data in real-world applications. $({2})$ A new model SC-GPT is proposed to endow the NLG module with strong semantically controlling and generalization ability. Empirical results on both FewShotWOZ and MultiWOZ show that SC-GPT achieves the best overall performance in both automatic and human evaluations. There are two interesting directions for future work. The first is to design mechanisms to generate more interpersonal responses which are proven to help improve user experiences BIBREF31, BIBREF19. The other is to generalize the generative pre-training idea to all four modules in the dialog system pipeline for end-to-end training. Since these four modules process information in order, one may organize their input/output as segments, and pre-train a segment-level auto-regressive model. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Abstract, Dataset: FewShotWOZ" ], "type": "disordered_section" }
1908.09951
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> An Emotional Analysis of False Information in Social Media and News Articles <<<Abstract>>> Fake news is risky since it has been created to manipulate the readers' opinions and beliefs. In this work, we compared the language of false news to the real one of real news from an emotional perspective, considering a set of false information types (propaganda, hoax, clickbait, and satire) from social media and online news articles sources. Our experiments showed that false information has different emotional patterns in each of its types, and emotions play a key role in deceiving the reader. Based on that, we proposed a LSTM neural network model that is emotionally-infused to detect false news. <<</Abstract>>> <<<Introduction>>> With the complicated political and economic situations in many countries, some agendas are publishing suspicious news to affect public opinions regarding specific issues BIBREF0. The spreading of this phenomenon is increasing recently with the large usage of social media and online news sources. Many anonymous accounts in social media platforms start to appear, as well as new online news agencies without presenting a clear identity of the owner. Twitter has recently detected a campaign organized by agencies from two different countries to affect the results of the last U.S. presidential elections of 2016. The initial disclosures by Twitter have included 3,841 accounts. A similar attempt was done by Facebook, as they detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections. False information is categorized into 8 types according to BIBREF1. Some of these types are intentional to deceive where others are not. In this work, we are interested in analyzing 4 main types, i.e. hoaxes, propagandas, clickbaits, and satires. These types can be classified into two main categories - misinformation and disinformation - where misinformation considers false information that is published without the intent to deceive (e.g. satire). Disinformation can be seen as a specific kind of false information with the aim to mislead the reader (e.g. hoax, propaganda, and clickbait). Propagandas are fabricated stories spread to harm the interest of a particular party. Hoaxes are similar to propagandas but the main aim of the writer is not to manipulate the readers' opinions but to convince them of the validity of a paranoia-fueled story BIBREF2. Clickbait is another type of disinformation that refers to the deliberate use of misleading headlines, thumbnails, or stories' snippets to redirect attention (for traffic attention). Satire is the only type of misinformation, where the writer's main purpose is not to mislead the reader, but rather to deliver the story in an ironic way (to entertain or to be sarcastic). The topic of fake news is gaining attention due to its risky consequences. A vast set of campaigns has been organized to tackle fake news. The owner of Wikipedia encyclopedia created the news site WikiTribune to encourage the evidence-based journalism. Another way of addressing this issue is by fact-checking websites. 
These websites like politifact.com, snopes.com and factchecking.org aim to debunk false news by manually assess the credibility of claims that have been circulated massively in online platforms. These campaigns were not limited to the English language where other languages such as Arabic have been targeted by some sites like fatabyyano.net. <<<Hypothesis>>> Trusted news is recounting its content in a naturalistic way without attempting to affect the opinion of the reader. On the other hand, false news is taking advantage of the presented issue sensitivity to affect the readers' emotions which sequentially may affect their opinions as well. A set of works has been done previously to investigate the language of false information. The authors in BIBREF3 have studied rumours in Twitter. They have investigated a corpus of true and false tweets rumours from different aspects. From an emotional point of view, they found that false rumours inspired fear, disgust, and surprise in their replies while the true ones inspired joy and anticipation. Some kinds of false information are similar to other language phenomena. For example, satire by its definition showed similarity with irony language. The work in BIBREF4 showed that affective features work well in the detection of irony. In addition, they confirmed that positive words are more relevant for identifying sarcasm and negative words for irony BIBREF5. The results of these works motivate us to investigate the impact of emotions on false news types. These are the research questions we aim to answer: RQ1 Can emotional features help detecting false information? RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources? RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones? RQ4 What are the top-N emotions that discriminate false information types in both textual sources? In this work, we investigate suspicious news in two different sources: Twitter and online news articles. Concerning the news articles source, we focus on the beginning part of them, since they are fairly long, and the emotional analysis could be biased by their length. We believe that the beginning part of false news articles can present a unique emotional pattern for each false information type since the writer in this part is normally trying to trigger some emotions in the reader. Throughout the emotional analysis, we go beyond the superficial analysis of words. We hope that our findings in this work will contribute to fake news detection. The key contributions of this article are: Model: We propose an approach that combines emotional information from documents in a deep neural network. We compare the obtained results with a set of baselines. The results show that our approach is promising. Analysis: We show a comprehensive analysis on two false information datasets collected from social media and online news articles, based on a large set of emotions. We compare the differences from an affective perspective in both sources, and obtain valuable insights on how emotions can contribute to detect false news. The rest of the paper is structured as follows; After a brief review of related work in Section SECREF2, Section SECREF3 introduces our emotionally-infused model. Then, we present the evaluation framework in Section SECREF4. Section SECREF5 reports the experiments and the results, followed by an analysis on the false information types from emotional perspective in Section SECREF6. 
Finally, the conclusions of this work are summarized in Section SECREF7. <<</Hypothesis>>> <<</Introduction>>> <<<Related Work>>> The work that has been done previously on the analysis of false information is rather small regarding the approaches that were proposed. In this section, we present some recent works on the language analysis and detection of false information. Recent attempts tried to analyze the language of false news to give a better understanding. A work done in BIBREF6 has studied the false information in Twitter from a linguistic perspective. The authors found that real tweets contain significantly fewer bias markers, hedges, subjective terms, and less harmful words. They also found that propaganda news targets morals more than satires and hoaxes but less than clickbaits. Furthermore, satirical news contains more loyalty and fewer betrayal morals compared to propaganda. In addition, they built a model that combined a set of features (graph-based, cues words, and syntax) and achieved a good performance comparing to other baselines (71% vs. 59% macro-F1). Another similar work BIBREF2 has been done to characterize the language of false information (propaganda, hoax, and satire) in online news articles. The authors have studied the language from different perspectives: the existence of weak and strong subjectivity, hedges, and the degree of dramatization using a lexicon from Wiktionary. As well, they employed in their study the LIWC dictionary to exploit the existence of personal pronouns, swear, sexual, etc. words. The results showed that false news types tend to use first and second personal pronouns more than truthful news. Moreover, the results showed that false news generally uses words to exaggerate (subjectives, superlatives, and modal adverbs), and specifically, the satire type uses more adverbs. Hoax stories tend to use fewer superlatives and comparatives, and propagandas use relatively more assertive verbs. Moving away from these previous false information types, the work in BIBREF3 has focused on analyzing rumours in Twitter (from factuality perspective: True or False). They analyzed about 126,000 rumours and found that falsehood widespread significantly further, faster, deeper, and more broadly than truth in many domains. In addition, they found that false rumours are more novel than truthful ones, which made people more likely to share them. From an emotional perspective, they found that false rumours triggered "fear", "disgust", and "surprise" in replies while truthful ones triggered "anticipation", "sadness", "joy", and "trust". Another work BIBREF7 has studied the problem of detecting hoaxes by analyzing features related to the content in Wikipedia. The work showed that some features like hoaxes articles' length as well as the ratio of wiki markups (images, references, links to other articles and to external URLs, etc.) are important to discriminate hoaxes from legitimate articles. Many approaches have been proposed on fake news detection. In general, they are divided into social media and news claims-based approaches. The authors in BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 have proposed supervised methods using recurrent neural networks or by extracting manual features like a set of regular expressions, content-based, network-based etc. As an example, the work by BIBREF13 assessed the credibility of tweets by analyzing trending topics. 
They used message-based, user-based, and propagation-based features, and they found that some features related to the user information like user's age, number of followers, statuse counts etc. have helped the most to discriminate truthful from deceitful tweets. Other news claims-based approaches BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 have been mainly focusing on inferring the credibility of the claims by retrieving evidences from Google or Bing search engines. These approaches have employed a different set of features starting from manual features (e.g. cosine similarity between the claims and the results, Alexa Rank of the evidence source, etc.) to a fully automatic approach using deep learning networks. A recent trend started to appear and is trying to approach the detection of fake news from a stance perspective. The aim is to predict how other articles orient to a specific fact BIBREF19, BIBREF20, BIBREF21. <<</Related Work>>> <<<Emotionally-infused Model>>> In this section we describe the Emotionally-Infused Network we propose (EIN). <<<Emotional Lexicons>>> Several emotional models well-grounded in psychology science have been proposed, such as the ones by Magda Arnold BIBREF22, Paul Ekman BIBREF23, Robert Plutchik BIBREF24, and Gerrod Parrot BIBREF25. On the basis of each of them, many emotional resources (lexicons) were built in the literature. In this work, we consider several emotional resources to increase the coverage of the emotional words in texts as well to have a wider range of emotions in the analysis. Concretely, we use EmoSenticNet, EmoLex, SentiSense, LIWC and Empath: EmoSenticNet BIBREF26 is a lexical resource that assigns WordNet-Affect emotion labels to SenticNet concepts. It has a total of 13,189 entries annotated using the six Ekman's basic emotions. EmoLex BIBREF27 is a word-emotion association lexicon that is labeled using the eight Plutchik's emotions. This lexicon contains 14,181 words. SentiSense BIBREF28 is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. SentiSense has 5,496 words labeled with emotions from a set of 14 emotional categories, which is an edited version of the merge between Arnold, Plutchik, and Parrott models. LIWC BIBREF29 is a linguistic dictionary that contains 4,500 words categorized to analyze psycholinguistic patterns in text. Linguistic Inquiry and Word Count (LIWC) has 4 emotional categories: "sadness", "anger", "positive emotion", and "negative emotion". Empath BIBREF30 is a tool that uses deep learning and word embeddings to build a semantically meaningful lexicon for concepts. Empath uses Parrott's model for the emotional representation, but we use only the primary emotions (6 emotions) in the Pattrott's hierarchy ("love", "joy", "surprise", "anger", "sadness", "fear"). In our study we consider the 17 emotions that we shown in Figure FIGREF14. <<</Emotional Lexicons>>> <<<Model>>> We choose an Long short-term memory (LSTM) BIBREF31 that takes the sequence of words as input and predicts the false information type. The input of our network is based on word embedding (content-based) and emotional features (see Figure FIGREF24). <<</Model>>> <<<Input Representation>>> Our network consists of two branches. In the content-based one, we use an embedding layer followed by a LSTM layer. Then, we add an attention layer BIBREF32 to make this branch focus on (highlighting) particular words over others . 
The attention mechanism assigns a weight to each word vector produced by the LSTM layer, with a focus on the words that matter for the classification decision. The input representation for this branch is as follows: the input sentence $S$ of length $n$ is represented as $[S_1, S_2, \ldots , S_n]$, where $S_i \in {\rm I\!R}^d$ is the d-dimensional word embedding vector of the $i$-th word in the input sentence. The output vectors of the words are passed to the LSTM layer, where the LSTM learns the hidden state $h_t$ by capturing the previous timesteps (past features). The produced hidden state $h_t$ at each time step is passed to the attention layer, which computes a "context" vector $c_t$ as the weighted mean of the state sequence $h$ by: $c_t = \sum _{j=1}^{T} \alpha _{tj} h_j$, where $T$ is the total number of timesteps in the input sequence and $\alpha _{tj}$ is a weight computed at each time step $j$ for each state $h_j$. This output vector is then concatenated with the output from the dense$_a$ layer of the emotional-based branch (see Figure FIGREF24) and passed to the dense$_b$ layer, which precedes a final Softmax function to predict the output classes; in this way the content-based branch is combined with the emotional-based branch. On the other hand, the input representation for the emotional-based branch is defined as follows: we have $N$ emotional lexicons $L_n$ where $n\in [1, 5]$, and each lexicon has $M_n$ emotions, depending on the emotion model that the lexicon uses (e.g. Plutchik, Arnold, etc.). The emotion vector of an input document using the $n$-th emotional lexicon is $L_nE_m$. In our implementation, the emotional vector of a lexicon $L_n$ is built using word frequencies normalized by the input sentence's length. Each input sentence is then represented by the concatenation of the emotion vectors of all lexicons: $v = \big [\, L_1E_m, L_2E_m, \cdots , L_NE_m \,\big ]$, where $v \in {\rm I\!R}^q$ and $q = \sum _{n=1}^{N} M_n$. <<</Input Representation>>> <<</Emotionally-infused Model>>> <<<Evaluation Framework>>> <<<Datasets>>> Annotated data is a crucial source of information to analyze false information. Previous work lacks available datasets of false information, since the majority of works focus on annotating datasets from a factuality perspective. However, to analyze the existence of emotions across different sources of news, we rely on two publicly available datasets and a list that contains suspicious Twitter accounts. <<<News Articles>>> Our dataset source of news articles is described in BIBREF2. This dataset was built from two different sources: for the trusted news (real news), they sampled news articles from the English Gigaword corpus. For the false news, they collected articles from seven different unreliable news sites. These news articles include satires, hoaxes, and propagandas but not clickbaits. Since we are also interested in analyzing clickbaits, we slice a sample from an available clickbait dataset BIBREF33 that was originally collected from two sources: Wikinews articles' headlines and other online sites that are known to publish clickbaits. The satire, hoax, and propaganda news articles are considerably long (some of them reach the length of 5,000 words). This length could affect the quality of the analysis as we mentioned before. We focus on analyzing the initial part of the article. Our intuition is that it is where emotion-bearing words will be more frequent.
Therefore, we shorten long news articles into a maximum length of N words (N=300). We choose the value of N based on the length of the shortest articles. Moreover, we process the dataset by removing very short articles, redundant articles or articles that do not have a textual content. <<</News Articles>>> <<<Twitter>>> For this dataset, we rely on a list of several Twitter accounts for each type of false information from BIBREF6. This list was created based on public resources that annotated suspicious Twitter accounts. The authors in BIBREF6 have built a dataset by collecting tweets from these accounts and they made it available. For the real news, we merge this list with another 32 Twitter accounts from BIBREF34. In this work we could not use the previous dataset and we decide to collect tweets again. For each of these accounts, we collected the last M tweets posted (M=1000). By investigating these accounts manually, we found that many tweets just contain links without textual news. Therefore, to ensure of the quality of the crawled data, we chose a high value for M (also to have enough data). After the collecting process, we processed these tweets by removing duplicated, very short tweets, and tweets without textual content. Table TABREF35 shows a summary for both datasets. <<</Twitter>>> <<</Datasets>>> <<<Baselines>>> Emotions have been used in many natural language processing tasks and they showed their efficiency BIBREF35. We aim at investigating their efficiency to detect false information. In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only and compare it to two baselines. Our aim is to investigate if the emotional features independently can detect false news. The two baselines of this model are Majority Class baseline (MC) and the Random selection baseline (RAN). For the EIN model, we compare it to different baselines: a) The first one is bag-of-words with a support vector machine classifier (BOW-SVM). We test different classifiers, and we choose SVM since it gives the highest result in the 10-fold Cross Validation (CV); b) We use another baseline that is based on word embeddings where for each input document we extract an average word embedding vector by taking the mean of the embeddings for the document's words. Similarly, we test different classifiers and the Logistic Regression classifier shows the best performance (WE-LR); c) The last baseline is the same as our neural architecture but without the emotional features branch: an LSTM layer followed by attention and dense layers. <<</Baselines>>> <<</Evaluation Framework>>> <<<Experiments and Results>>> <<<Emotion-based Model>>> In our experiments, we use $20\%$ of each of the datasets for testing and we apply 10-fold cross-validation on the remain part for selecting the best classifier as well for tuning it. We tested many classifiers and we finally choose Random Forest for both datasets since it obtained the best results. Table TABREF39 presents the classification results on both datasets. The results in both datasets show that emotional features clearly detect false news, compared to the baselines (RQ1). The emotional features perform better in the news articles dataset compared with these of tweets. We are interested in investigating also how good are the emotional features in detecting each class comparing to the RAN baseline. We choose the RAN baseline since it shows better results with regard to macro-F1 score. 
For doing so, we investigated the True Positive (TP) classification ratio for each class in each dataset. The clickbait class shows the highest TPs comparing to the other classes. From this we can infer that clickbaits exploit emotions much more than the other classes to deceive the reader. It is worth to mention that for the hoax class the proposed approach is better than the random baselines with a small ratio ($4\%$ difference). This could be justified by the fact that hoaxes, by definition, try to convince the reader of the credibility of a false story. Hence, the writer tries to deliver the story in a normal way without allowing the reader to fall under suspicion. The number of instances related to the false information classes in the news articles dataset is the same. Therefore, there is not a majority class that the classifier can be biased to. This is not the case in the Twitter dataset. For the Twitter dataset, the dataset is not balanced. Therefore, where the results are biased by the majority class (propaganda). But in general, all the classes' TP ratios are larger than the corresponding ones obtained with RAN baseline. From these results, we can conclude that suspicious news exploits emotions with the aim to mislead the reader. Following, we present the results obtained by the proposed emotionally-infused model. <<</Emotion-based Model>>> <<<Emotionally-Infused Model>>> In the neural model, to reduce the computational costs, instead of the cross-validation process we take another $20\%$ from the training part as a validation set (other than the $20\%$ that is prepared for testing). For the pretrained word embeddings, we use Google News Word2Vec 300-Embeddings in the neural network as well as in the W2V-LR baseline. For the classical machine learning classifiers for the baselines, we use the Scikit-Learn python library, and for the deep learning network, we use Keras library with Tensorflow as backend. To tune our deep learning network (hyper-parameters), we use the Hyperopt library. And to reduce the effect of overfitting, we use early stopping technique. In Table TABREF44 we summarize the parameters with respect to each dataset. We have to mention that we use Dropout after the dense layer in the emotional features branch (Dropc) as well as after the attention layer in the other one (Dropd) before the concatenation process. Since it is a multiclass classification process, we use categorical cross-entropy loss function. A summary of the models' parameters is presented in Table TABREF44. Table TABREF47 summarizes the performance of the proposed model in comparison to those obtained by the baselines. We report Macro- precision, recall, and F1, including also the metric of accuracy; for comparing the models' results we consider the macro of metrics since it shows an averaged result over all the classes. The baselines that we propose clearly show high results, where the LSTM baseline has the best performance in news articles dataset. In Twitter there is a different scenario, the BOW-SVM baseline shows a higher performance with respect to LSTM. We are interested in investigating the reason behind that. Therefore, we checked the coverage ratio of the used embeddings in the Twitter dataset. We have to mention that we excluded stop words during representing the input documents using the pre-trained Google News word embeddings. In the news articles dataset, we found that the coverage ratio of the embeddings is around $94\%$ while in Twitter it is around $70\%$. 
Therefore, we tuned the word embeddings during the training process to improve the document's representation since we have a larger dataset from Twitter. This process contributed with $1.9\%$ on the final macro-F1 results in Twitter (the result without tuning is $53.51\%$). Even though, the results obtained with the LSTM baseline is still lower than the one obtained with BOW-SVM. This experiment gives us some intuition that the weaker performance on Twitter may be due to the embeddings. Therefore, we tried different embeddings but none of them improved the result. The second baseline (W2V-LR) proved the same issue regarding the embeddings. The W2V-LR macro-F1 result in the news articles dataset is competitive, where it is much lower in Twitter. The usage of LSTM is two folds: in addition to being a good baseline, it shows also how much the emotional features contribute in the emotionally-infused network. EIN results outperform the baselines with a large margin (around 2% in Twitter and 7% in news articles), especially in the news articles dataset. The margin between EIN and the best baseline is lower in the Twitter dataset. The results also show that combining emotional features clearly boosts the performance. We can figure out the improvement by comparing the results of EIN to LSTM. EIN shows superior results in news articles dataset with regard to the LSTM (79.43%). A similar case appears in the Twitter dataset but with a lower margin (59.70%). The results of EIN in Twitter dataset show that emotional features help the weak coverage of word embeddings to improve the performance as well as to overcome the BOW-SVM baseline. We observed before that clickbait TP's ratio of the news articles dataset is the highest one, and this result points out that the clickbait class is less difficult to detect specifically from an emotional perspective. Therefore, in order to assess how our model separates false information types, we employ dimensionality reduction using t-distributed Stochastic Neighbor Embedding (T-SNE) technique BIBREF36 to project the document's representation from a high dimensional space to a 2D plane. Thus, we project the embeddings in EIN by extracting them from the outputs of Denseb layer (see Figure FIGREF48). We extract the embeddings twice, once from a random epoch (epoch 10) at the beginning of the training phase and the other at the last epoch. Our aim from the early epoch projection is to validate what we have noticed: the clickbait class is less difficult to detect with regard to the other classes. As we can notice in the 10-epoch plot, the clickbait class needs few epochs to be separated from the other types, and this supports what we found previously in the manual investigation of the classes' TP ratios. Despite this clear separation, there is still an overlapping with some real-news records. This results points out that emotions in clickbaits play a key role in deceiving the reader. Also, the figure shows that the disinformation classes still need more training epochs for better separation. Real-news records are totally overlapped with the false information classes as well as the false information classes with each other. On the other hand, for the last epoch, clearly, the classes are separated from each other and the more important, from the real news. But generally, there still a small overlapping between satires and hoaxes as well few records from the propaganda class. 
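To give a concrete picture of the two-branch network described above and of how the Dense_b document embeddings were projected for Figure FIGREF48, the following Keras sketch mirrors that description (embedding + LSTM + attention context vector on one branch, a dense layer over the 17 lexicon-based emotion features on the other, concatenation, Dense_b, Softmax) and then runs t-SNE on the Dense_b outputs. The layer sizes, the simplified attention, the toy input data, and the use of scikit-learn's TSNE are assumptions for illustration, not the tuned configuration reported in Table TABREF44.

```python
# Illustrative sketch of the emotionally-infused network (EIN) and of the
# t-SNE projection of its dense_b embeddings; sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.manifold import TSNE

MAX_LEN, VOCAB, EMB_DIM, Q_EMOTIONS, N_CLASSES = 300, 50000, 300, 17, 5

# Content-based branch: embedding -> LSTM -> attention-weighted context vector.
words_in = layers.Input(shape=(MAX_LEN,), name="words")
h = layers.Embedding(VOCAB, EMB_DIM)(words_in)
h = layers.LSTM(128, return_sequences=True)(h)
scores = layers.Dense(1, activation="tanh")(h)           # simplified alpha_tj scores
alphas = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, alphas])

# Emotion-based branch: lexicon-frequency features -> dense_a (with dropout).
emo_in = layers.Input(shape=(Q_EMOTIONS,), name="emotions")
dense_a = layers.Dropout(0.3)(layers.Dense(64, activation="relu")(emo_in))

# Concatenate the two branches, then dense_b and the final Softmax.
merged = layers.Concatenate()([context, dense_a])
dense_b = layers.Dense(64, activation="relu", name="dense_b")(merged)
out = layers.Dense(N_CLASSES, activation="softmax")(dense_b)
ein = Model([words_in, emo_in], out)
ein.compile(optimizer="adam", loss="categorical_crossentropy")

# Project dense_b document embeddings to 2D, as done for Figure FIGREF48.
embedder = Model(ein.inputs, ein.get_layer("dense_b").output)
X_words = np.random.randint(0, VOCAB, size=(32, MAX_LEN))      # toy stand-in data
X_emo = np.random.rand(32, Q_EMOTIONS).astype("float32")
points_2d = TSNE(n_components=2, perplexity=10).fit_transform(
    embedder.predict([X_words, X_emo]))
```

Running this extraction once at an early epoch and again at the last epoch reproduces the kind of before/after comparison discussed above.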
<<</Emotionally-Infused Model>>> <<<EIN as Clickbaits Detector>>> From the previous results in Section SECREF37 as well as from what we notice in Figure FIGREF48, EIN obtains a clear separability of the clickbait class. These observations motivate us to investigate EIN as clickbait detector. Concretely, we test EIN on the source of our clickbait instances BIBREF33 in the news articles dataset. As we mentioned previously, this dataset originally was built using two different text sources. For clickbaits, the authors have manually identified a set of online sites that publish many clickbait articles. Whereas for the negative class, they collected headlines from a corpus of Wikinews articles collected in other research work. They took 7,500 samples from each class for the final version of the dataset. The authors also proposed a clickbaits detector model (Stop_Clickbait) that employed a combination of features: sentence structure (sentence length, average length of words, the ratio of the number of stop words to the number of thematic words and the longest separation between the syntactically dependent words), word patterns (presence of cardinal number at the beginning of the sentence, presence of unusual punctuation patterns), clickbait language (presence of hyperbolic words, common clickbait phrases, internet slangs and determiners), and N-grams features (word, Part-Of-Speech, and syntactic n-grams). Using this set of features group, the authors tested different classifiers where SVM showed the state-of-the-art results. They considered Accuracy, Precision, Recall and F1 to compare their approach to a baseline (an online web browser extension for clickbaits detection called Downworthy). In this experiment, we consider the third baseline (LSTM) to observe the improvement of the emotional features in the EIN model. Different from the previous experiments, this is a binary classification task. Therefore, we use binary cross-entropy as loss function and we change the Softmax layer to a Sigmoid function. The new parameters for both LSTM and EIN models are mentioned in Table TABREF44. In Table TABREF51 we present the results of the Stop_Clickbait approach, LSTM baseline, and the EIN model. The results show that our baseline outperforms the proposed clickbait detector with a good margin. Furthermore, the results of the EIN are superior to the LSTM and the Stop_Clickbait detector. Considering emotions in the EIN deep learning approach improved the detection of false information. This is due to the fact that in clickbaits emotions are employed to deceive the reader. <<</EIN as Clickbaits Detector>>> <<</Experiments and Results>>> <<<Discussion>>> The results show that the detection of suspicious news in Twitter is harder than detecting them in news articles. Overall, the results of EIN showed that emotional features improve the performance of our model, especially in the case of the news articles dataset. We manually inspected the Twitter dataset and observed that the language of the tweets has differences compared to the news articles one. We found that news in Twitter has many abbreviations (amp, wrt, JFK...etc.), bad words abbreviations (WTF, LMFO...etc.), informal language presentation, and typos. This reduces the coverage ratio of word embeddings. We also noticed that suspicious news in Twitter are more related to sexual issues. To validate our observations, we extracted the mean value of sexual words using a list of sexual terms BIBREF37. 
The mean value is the average number of times a sexual/bad word appears in a tweet normalized by the length of the tweet. The mean value in Twitter is 0.003 while in news articles is 0.0024. Similarly, suspicious news in Twitter presented more insulting words than in news articles where the mean value in Twitter is 0.0027 and 0.0017 in news articles. Following, we focus on analyzing false information from an emotional perspective. We are aiming to answer the rest of the questions, RQ2, RQ3, and RQ4. RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources? Intuitively, the emotions contribution in the classification process is not the same, where some words could manifest the existence of specific kind of emotions rather than others. To investigate this point, we use Information Gain (IG) in order to identify the importance of emotions in discriminating between real and all the other types of false news (multiclass task) in both Twitter and news articles datasets (see Figure FIGREF54). Before going through the ranking of features importance, we notice that the emotions ranking shapes are very similar in both Twitter and news articles. This states that despite the fact that the language is different, both sources have similar overall emotions distribution. In other words, false news employs a similar emotional pattern in both text sources. Since the news language in Twitter is not presented clearly as in news articles, this observation can help to build a cross-source system that is trained on suspicious news from news articles to detect the corresponding ones in Twitter. Figure FIGREF54 shows also that the emotion "joy" is the most important emotion in both datasets. It also mentions that "despair" and "hate" are almost not used in the classification process. The ranking of the features in both sources is different, where in the news articles dataset the top important emotions are "joy", "anticipation", "fear", and "disgust" respectively. On the other hand, the top ones in Twitter are "joy", "sadness", "fear", and "disgust". . RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones? We measure statically significant differences using the t-test on emotions across real news and false news (binary task) in the both datasets in Figure FIGREF55. These findings provide a deeper understanding of the EIN performance. The results show that "joy", "neg_emo", "ambiguous", "anticipation", "calmness", "disgust", "trust" and "surprise" have significant statistical differences between real and suspicious news in both datasets. Some other emotions such as "despair" and "anger" have no statistical difference in both datasets. It turns out that the results we obtain are generally consistent with the IG results in research question RQ2. We notice in the IG analysis that some emotions have a higher importance in one of the news sources: "sadness", "anger", and "fear" have a higher importance in Twitter than in news articles, and the opposite for "hope". We observe the same findings using the t-test. . RQ4 What are the top-N emotions that discriminate false information types in both textual sources? False information types are different in the way they present the news to the reader. This raises a question: what are the top employed emotions in each type of false information? In Table TABREF57, we present the first three emotions that contribute mostly to the classification process to each type. 
This can indicate to us what are the emotion types that are used mostly in each type of false information. Table TABREF57 shows that clickbaits express "surprise" and "negative emotion" at the most. This validates the definition of clickbaits as "attention redirection" by exploiting the reader and convincing him/her that there is an unexpected thing with negative emotion. The result of seeing "fear" in the top features in Twitter is interesting; one of the recent studies is presenting the hypothesis that says: curiosity is the best remedy for fear BIBREF38 based on psychological interpretations. Taking into account the definition of clickbaits as "attention redirection", looking at our results, we can proof this hypothesis. Furthermore, despite the language differences in both datasets, we obtain almost the same results, which emphasize our results. For hoaxes, it is not simple to interpret a specific pattern of emotions in the results. We might justify it by the fact that hoaxes are written to convince the reader of the validity of a story. Therefore, the writer is trying to present the story in a normal way (truthful) similar to a real story. Therefore, the top emotions are not unique to the hoax type. But what we find from the top hoaxes emotions in both datasets is that they are generally different except the emotion "like". Despite the natural narrative way of presenting the story, the analysis shows that the writer still uses "like" to grab reader's attention smoothly. Propaganda type has clearer emotional interpretation considering its definition. We find that propaganda expresses "joy", "fear" and at the same time "calmness" in the news articles. Both "joy" and "fear" are contrary from an emotional polar perspective, where "joy" shows the extreme of the positive emotions and "fear" the extreme negative, and at the same time, "calmness" is present. The emotional shifting between the two extremes is a clear attempt of opinion manipulation from an emotional perspective. We obtain a similar emotion set from Twitter, but instead of "joy" we get "hope". Lastly, satire is defined as a type of parody presented in a typical format of mainstream journalism, but in a similar way to irony and sarcasm phenomena BIBREF39. The results of the analysis show that "disgust" and "positive emotion" are present in both datasets, but we get "negative emotion" in the news articles and "sadness" in Twitter (both are placed in the negative side of emotions). We are interested in investigating the cause of the emotion "disgust" which appeared in the results from both datasets. We conduct a manual analysis on the text of the satire type in both datasets in order to shed some light on the possible causes. We notice that the satire language in the news often employs the emotion "disgust" to give a sense of humor. Figure FIGREF58 shows some examples from the news articles dataset highlighting the words that triggered the emotion "disgust". <<</Discussion>>> <<<Conclusions and Future Work>>> In this article we have presented an emotionally-infused deep learning network that uses emotional features to identify false information in Twitter and news articles sources. We performed several experiments to investigate the effectiveness of the emotional features in identifying false information. We validated the performance of the model by comparing it to a LSTM network and other baselines. The results on the two datasets showed that clickbaits have a simpler manipulation language where emotions help detecting them. 
This demonstrates that emotions play a key role in deceiving the reader. Based on this result, we investigated our model performance on a clickbaits dataset and we compared it to the state-of-the-art performance. Our model showed superior results near to 96% F1 value. Overall results confirmed that emotional features have boosted EIN model performance achieving better results on 3 different datasets (RQ1). These results emphasized the importance of emotional features in the detection of false information. In Twitter, false news content is deliberately sexual oriented and it uses many insulting words. Our analysis showed that emotions can help detecting false information also in Twitter. In the analysis section, we answered a set of questions regarding the emotions distribution in false news. We found that emotions have similar importance distribution in Twitter and news articles regardless of the differences in the used languages (RQ2). The analysis showed that most of the used emotions have statistical significant difference between real and false news (RQ3). Emotions plays a different role in each type of false information in line with its definition (RQ4). We found that clickbaits try to attract the attention of the reader by mainly employing the "surprise" emotion. Propagandas are manipulating the feelings of the readers by using extreme positive and negative emotions, with triggering a sense of "calmness" to confuse the readers and enforcing a feeling of confidence. Satire news instead use the "disgust" emotion to give a sense of humor. To sum up, we can say that the initial part of false news contains more emotions than the rest of document. Our approach exploit this fact for their detection. To the best of our knowledge, this is the first work that analyzes the impact of emotions in the detection of false information considering both social media and news articles. As a future work, the results of our approach as a clickbaits detector motivate us to develop for a clickbaits detector as a web browser extension. Also, we will study how the emotions flow inside the articles of each kind of false information, which is worthy to be investigated as the results of this work confirmed. <<</Conclusions and Future Work>>> <<</Title>>>
{ "references": [ "Conclusions and Future Work, Abstract" ], "type": "disordered_section" }
1908.09951
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> An Emotional Analysis of False Information in Social Media and News Articles <<<Abstract>>> Fake news is risky since it has been created to manipulate the readers' opinions and beliefs. In this work, we compared the language of false news to the real one of real news from an emotional perspective, considering a set of false information types (propaganda, hoax, clickbait, and satire) from social media and online news articles sources. Our experiments showed that false information has different emotional patterns in each of its types, and emotions play a key role in deceiving the reader. Based on that, we proposed a LSTM neural network model that is emotionally-infused to detect false news. <<</Abstract>>> <<<Introduction>>> With the complicated political and economic situations in many countries, some agendas are publishing suspicious news to affect public opinions regarding specific issues BIBREF0. The spreading of this phenomenon is increasing recently with the large usage of social media and online news sources. Many anonymous accounts in social media platforms start to appear, as well as new online news agencies without presenting a clear identity of the owner. Twitter has recently detected a campaign organized by agencies from two different countries to affect the results of the last U.S. presidential elections of 2016. The initial disclosures by Twitter have included 3,841 accounts. A similar attempt was done by Facebook, as they detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections. False information is categorized into 8 types according to BIBREF1. Some of these types are intentional to deceive where others are not. In this work, we are interested in analyzing 4 main types, i.e. hoaxes, propagandas, clickbaits, and satires. These types can be classified into two main categories - misinformation and disinformation - where misinformation considers false information that is published without the intent to deceive (e.g. satire). Disinformation can be seen as a specific kind of false information with the aim to mislead the reader (e.g. hoax, propaganda, and clickbait). Propagandas are fabricated stories spread to harm the interest of a particular party. Hoaxes are similar to propagandas but the main aim of the writer is not to manipulate the readers' opinions but to convince them of the validity of a paranoia-fueled story BIBREF2. Clickbait is another type of disinformation that refers to the deliberate use of misleading headlines, thumbnails, or stories' snippets to redirect attention (for traffic attention). Satire is the only type of misinformation, where the writer's main purpose is not to mislead the reader, but rather to deliver the story in an ironic way (to entertain or to be sarcastic). The topic of fake news is gaining attention due to its risky consequences. A vast set of campaigns has been organized to tackle fake news. The owner of Wikipedia encyclopedia created the news site WikiTribune to encourage the evidence-based journalism. Another way of addressing this issue is by fact-checking websites. 
These websites like politifact.com, snopes.com and factchecking.org aim to debunk false news by manually assess the credibility of claims that have been circulated massively in online platforms. These campaigns were not limited to the English language where other languages such as Arabic have been targeted by some sites like fatabyyano.net. <<<Hypothesis>>> Trusted news is recounting its content in a naturalistic way without attempting to affect the opinion of the reader. On the other hand, false news is taking advantage of the presented issue sensitivity to affect the readers' emotions which sequentially may affect their opinions as well. A set of works has been done previously to investigate the language of false information. The authors in BIBREF3 have studied rumours in Twitter. They have investigated a corpus of true and false tweets rumours from different aspects. From an emotional point of view, they found that false rumours inspired fear, disgust, and surprise in their replies while the true ones inspired joy and anticipation. Some kinds of false information are similar to other language phenomena. For example, satire by its definition showed similarity with irony language. The work in BIBREF4 showed that affective features work well in the detection of irony. In addition, they confirmed that positive words are more relevant for identifying sarcasm and negative words for irony BIBREF5. The results of these works motivate us to investigate the impact of emotions on false news types. These are the research questions we aim to answer: RQ1 Can emotional features help detecting false information? RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources? RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones? RQ4 What are the top-N emotions that discriminate false information types in both textual sources? In this work, we investigate suspicious news in two different sources: Twitter and online news articles. Concerning the news articles source, we focus on the beginning part of them, since they are fairly long, and the emotional analysis could be biased by their length. We believe that the beginning part of false news articles can present a unique emotional pattern for each false information type since the writer in this part is normally trying to trigger some emotions in the reader. Throughout the emotional analysis, we go beyond the superficial analysis of words. We hope that our findings in this work will contribute to fake news detection. The key contributions of this article are: Model: We propose an approach that combines emotional information from documents in a deep neural network. We compare the obtained results with a set of baselines. The results show that our approach is promising. Analysis: We show a comprehensive analysis on two false information datasets collected from social media and online news articles, based on a large set of emotions. We compare the differences from an affective perspective in both sources, and obtain valuable insights on how emotions can contribute to detect false news. The rest of the paper is structured as follows; After a brief review of related work in Section SECREF2, Section SECREF3 introduces our emotionally-infused model. Then, we present the evaluation framework in Section SECREF4. Section SECREF5 reports the experiments and the results, followed by an analysis on the false information types from emotional perspective in Section SECREF6. 
Finally, the conclusions of this work are summarized in Section SECREF7. <<</Hypothesis>>> <<</Introduction>>> <<<Related Work>>> The work that has been done previously on the analysis of false information is rather small regarding the approaches that were proposed. In this section, we present some recent works on the language analysis and detection of false information. Recent attempts tried to analyze the language of false news to give a better understanding. A work done in BIBREF6 has studied the false information in Twitter from a linguistic perspective. The authors found that real tweets contain significantly fewer bias markers, hedges, subjective terms, and less harmful words. They also found that propaganda news targets morals more than satires and hoaxes but less than clickbaits. Furthermore, satirical news contains more loyalty and fewer betrayal morals compared to propaganda. In addition, they built a model that combined a set of features (graph-based, cues words, and syntax) and achieved a good performance comparing to other baselines (71% vs. 59% macro-F1). Another similar work BIBREF2 has been done to characterize the language of false information (propaganda, hoax, and satire) in online news articles. The authors have studied the language from different perspectives: the existence of weak and strong subjectivity, hedges, and the degree of dramatization using a lexicon from Wiktionary. As well, they employed in their study the LIWC dictionary to exploit the existence of personal pronouns, swear, sexual, etc. words. The results showed that false news types tend to use first and second personal pronouns more than truthful news. Moreover, the results showed that false news generally uses words to exaggerate (subjectives, superlatives, and modal adverbs), and specifically, the satire type uses more adverbs. Hoax stories tend to use fewer superlatives and comparatives, and propagandas use relatively more assertive verbs. Moving away from these previous false information types, the work in BIBREF3 has focused on analyzing rumours in Twitter (from factuality perspective: True or False). They analyzed about 126,000 rumours and found that falsehood widespread significantly further, faster, deeper, and more broadly than truth in many domains. In addition, they found that false rumours are more novel than truthful ones, which made people more likely to share them. From an emotional perspective, they found that false rumours triggered "fear", "disgust", and "surprise" in replies while truthful ones triggered "anticipation", "sadness", "joy", and "trust". Another work BIBREF7 has studied the problem of detecting hoaxes by analyzing features related to the content in Wikipedia. The work showed that some features like hoaxes articles' length as well as the ratio of wiki markups (images, references, links to other articles and to external URLs, etc.) are important to discriminate hoaxes from legitimate articles. Many approaches have been proposed on fake news detection. In general, they are divided into social media and news claims-based approaches. The authors in BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 have proposed supervised methods using recurrent neural networks or by extracting manual features like a set of regular expressions, content-based, network-based etc. As an example, the work by BIBREF13 assessed the credibility of tweets by analyzing trending topics. 
They used message-based, user-based, and propagation-based features, and found that features related to user information, such as the user's age, number of followers, and status counts, helped the most to discriminate truthful from deceitful tweets. Other news claim-based approaches BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 have mainly focused on inferring the credibility of claims by retrieving evidence from the Google or Bing search engines. These approaches have employed different sets of features, ranging from manual features (e.g. cosine similarity between the claims and the retrieved results, Alexa Rank of the evidence source, etc.) to fully automatic approaches using deep learning networks. A recent trend approaches the detection of fake news from a stance perspective, where the aim is to predict how other articles orient towards a specific fact BIBREF19, BIBREF20, BIBREF21. <<</Related Work>>> <<<Emotionally-infused Model>>> In this section we describe the Emotionally-Infused Network (EIN) we propose. <<<Emotional Lexicons>>> Several emotional models well grounded in psychology have been proposed, such as the ones by Magda Arnold BIBREF22, Paul Ekman BIBREF23, Robert Plutchik BIBREF24, and Gerrod Parrott BIBREF25. On the basis of each of them, many emotional resources (lexicons) have been built in the literature. In this work, we consider several emotional resources to increase the coverage of emotional words in texts as well as to have a wider range of emotions in the analysis. Concretely, we use EmoSenticNet, EmoLex, SentiSense, LIWC and Empath. EmoSenticNet BIBREF26 is a lexical resource that assigns WordNet-Affect emotion labels to SenticNet concepts; it has a total of 13,189 entries annotated with Ekman's six basic emotions. EmoLex BIBREF27 is a word-emotion association lexicon labeled with Plutchik's eight emotions; it contains 14,181 words. SentiSense BIBREF28 is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database; it has 5,496 words labeled with emotions from a set of 14 emotional categories, an edited merge of the Arnold, Plutchik, and Parrott models. LIWC BIBREF29 (Linguistic Inquiry and Word Count) is a linguistic dictionary that contains 4,500 words categorized to analyze psycholinguistic patterns in text; it has 4 emotional categories: "sadness", "anger", "positive emotion", and "negative emotion". Empath BIBREF30 is a tool that uses deep learning and word embeddings to build semantically meaningful lexicons for concepts; Empath uses Parrott's model for the emotional representation, but we use only the primary emotions (6 emotions) in Parrott's hierarchy ("love", "joy", "surprise", "anger", "sadness", "fear"). In our study we consider the 17 emotions shown in Figure FIGREF14. <<</Emotional Lexicons>>> <<<Model>>> We choose a Long Short-Term Memory (LSTM) network BIBREF31 that takes the sequence of words as input and predicts the false information type. The input of our network is based on word embeddings (content-based) and emotional features (see Figure FIGREF24). <<</Model>>> <<<Input Representation>>> Our network consists of two branches. In the content-based branch, we use an embedding layer followed by an LSTM layer. Then, we add an attention layer BIBREF32 to make this branch focus on (highlight) particular words over others.
The attention mechanism assigns a weight to each word vector produced by the LSTM layer, with a focus on the classification class. The input representation for this branch is as follows: the input sentence $S$ of length $n$ is represented as $[S_1, S_2, \dots, S_n]$, where $S_i \in {\rm I\!R}^d$ is the d-dimensional word embedding vector of the $i$-th word in the input sentence. The word vectors are passed to the LSTM layer, where the LSTM learns the hidden state $h_t$ by capturing the previous timesteps (past features). The hidden state $h_t$ produced at each time step is passed to the attention layer, which computes a "context" vector $c_t$ as the weighted mean of the state sequence $h$: $c_t = \sum_{j=1}^{T} \alpha_{tj} h_j$, where $T$ is the total number of timesteps in the input sequence and $\alpha_{tj}$ is the weight computed at time step $j$ for the state $h_j$. This output vector is then concatenated with the output of the dense$_a$ layer (see Figure FIGREF24) from the emotional branch and passed to the dense$_b$ layer, which precedes a final Softmax function that predicts the output classes. The input representation for the emotional branch is defined as follows: we have $N$ emotional lexicons $L_n$, $n \in [1, 5]$, and each lexicon covers $M_n$ emotions, depending on the emotion model it uses (e.g. Plutchik, Arnold, etc.). The emotion vector $E^{(n)}$ of an input document under the $n$-th lexicon is built from word frequencies normalized by the input sentence's length. Each input sentence is then represented by the concatenation $v = [E^{(1)}, E^{(2)}, \dots, E^{(N)}]$, where $v \in {\rm I\!R}^q$ and $q = \sum_{n=1}^{N} M_n$. <<</Input Representation>>> <<</Emotionally-infused Model>>> <<<Evaluation Framework>>> <<<Datasets>>> Annotated data is a crucial source of information for analyzing false information. However, available datasets of false information are scarce, since the majority of previous works focus on annotating datasets from a factuality perspective. To analyze the presence of emotions across different sources of news, we rely on two publicly available datasets and a list of suspicious Twitter accounts. <<<News Articles>>> Our source of news articles is the dataset described in BIBREF2. It was built from two different sources: for the trusted (real) news, the authors sampled news articles from the English Gigaword corpus; for the false news, they collected articles from seven different unreliable news sites. These articles include satire, hoaxes, and propaganda, but not clickbaits. Since we are also interested in analyzing clickbaits, we take a sample from an available clickbait dataset BIBREF33 that was originally collected from two sources: Wikinews article headlines and other online sites known to publish clickbaits. The satire, hoax, and propaganda news articles are considerably long (some of them reach 5,000 words), and this length could affect the quality of the analysis, as mentioned before. We therefore focus on analyzing the initial part of each article, following the intuition that this is where emotion-bearing words are most frequent.
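To make the emotional-branch input described above more concrete, the following is a minimal sketch (not the authors' code): for every lexicon and every emotion it covers, count the matching tokens and normalize by the document length, then concatenate everything into a single vector of dimension $q$. The tiny lexicons and example tokens below are hypothetical placeholders standing in for EmoSenticNet, EmoLex, SentiSense, LIWC, and Empath.

```python
# Sketch of the lexicon-based emotion features: per-emotion word frequencies,
# normalised by document length, concatenated over all lexicons.
# The lexicon contents here are hypothetical placeholders, not the real resources.
from collections import Counter

lexicons = {
    "EmoLex":     {"fear": {"panic", "threat"}, "joy": {"happy", "delight"}},
    "SentiSense": {"calmness": {"quiet", "serene"}, "disgust": {"filthy", "gross"}},
}

def emotion_vector(tokens, lexicons):
    """Concatenate, over all lexicons, the per-emotion word frequencies
    normalised by the document length (the vector v described above)."""
    counts = Counter(tokens)
    length = max(len(tokens), 1)
    features = []
    for lexicon in lexicons.values():
        for emotion, words in sorted(lexicon.items()):
            hits = sum(counts[w] for w in words)
            features.append(hits / length)
    return features

tokens = "the filthy hoax spread panic and threat".split()
print(emotion_vector(tokens, lexicons))
```

Applied to every document, such a function would produce the fixed-length vector that feeds the dense$_a$ layer of the emotional branch.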
We therefore shorten long news articles to a maximum length of N words (N=300), choosing the value of N based on the length of the shortest articles. Moreover, we clean the dataset by removing very short articles, redundant articles, and articles without textual content. <<</News Articles>>> <<<Twitter>>> For this dataset, we rely on a list of Twitter accounts for each type of false information from BIBREF6. This list was created from public resources that annotate suspicious Twitter accounts. The authors of BIBREF6 built a dataset by collecting tweets from these accounts and made it available; for the real news, we merge this list with another 32 Twitter accounts from BIBREF34. Since we could not use the previously released dataset, we decided to collect tweets again. For each of these accounts, we collected the last M tweets posted (M=1000). By inspecting these accounts manually, we found that many tweets contain only links without textual news; therefore, to ensure the quality of the crawled data, we chose a high value for M (and also to have enough data). After the collection process, we cleaned these tweets by removing duplicates, very short tweets, and tweets without textual content. Table TABREF35 shows a summary of both datasets. <<</Twitter>>> <<</Datasets>>> <<<Baselines>>> Emotions have been used in many natural language processing tasks and have shown their effectiveness BIBREF35; here we aim to investigate their effectiveness in detecting false information. In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only, in order to investigate whether emotional features on their own can detect false news, and we compare it to two baselines: the Majority Class baseline (MC) and the Random selection baseline (RAN). We compare the EIN model against different baselines: a) the first is bag-of-words with a support vector machine classifier (BOW-SVM); we tested different classifiers and chose SVM since it gave the highest result in 10-fold Cross Validation (CV); b) the second is based on word embeddings, where each input document is represented by the mean of the embeddings of its words; similarly, we tested different classifiers and Logistic Regression showed the best performance (W2V-LR); c) the last baseline is the same as our neural architecture but without the emotional features branch: an LSTM layer followed by attention and dense layers. <<</Baselines>>> <<</Evaluation Framework>>> <<<Experiments and Results>>> <<<Emotion-based Model>>> In our experiments, we use $20\%$ of each dataset for testing and apply 10-fold cross-validation on the remaining part to select and tune the best classifier. We tested many classifiers and finally chose Random Forest for both datasets, since it obtained the best results; a minimal sketch of this selection step is given below. Table TABREF39 presents the classification results on both datasets. The results on both datasets show that emotional features clearly help to detect false news when compared to the baselines (RQ1). The emotional features perform better on the news articles dataset than on the tweets. We are also interested in how well the emotional features detect each individual class compared to the RAN baseline, which we choose because it shows better results in terms of macro-F1 score.
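As a rough illustration of the selection step just described, and not the exact experimental code, the sketch below holds out 20% of the data for testing and compares candidate classifiers by 10-fold cross-validation on the remaining part. The feature matrix is assumed to contain one emotion vector per document; sizes and candidate models are placeholders.

```python
# Sketch of the emotion-only model selection: 20% held out for testing, candidate
# classifiers compared by 10-fold CV on the rest. X and y are placeholders here;
# in practice X would hold one lexicon-based emotion vector per document.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC

X = np.random.rand(200, 38)               # placeholder emotion features
y = np.random.randint(0, 5, size=200)     # placeholder labels (5 classes)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

candidates = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "linear_svc": LinearSVC(),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X_train, y_train, cv=10, scoring="f1_macro")
    print(name, scores.mean())
# The best-scoring classifier (Random Forest in the paper) would then be refit on
# the full training part and evaluated on the held-out 20%.
```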
To analyze the per-class behaviour, we examined the True Positive (TP) classification ratio of each class in each dataset. The clickbait class shows the highest TP ratio among all classes, from which we can infer that clickbaits exploit emotions much more than the other classes to deceive the reader. It is worth mentioning that for the hoax class the proposed approach outperforms the random baseline only by a small margin ($4\%$ difference). This can be explained by the fact that hoaxes, by definition, try to convince the reader of the credibility of a false story; hence, the writer delivers the story in a normal way so that the reader does not become suspicious. In the news articles dataset, the false information classes have the same number of instances, so there is no majority class towards which the classifier could be biased. This is not the case for Twitter: that dataset is not balanced, and the results are therefore biased towards the majority class (propaganda). In general, however, the TP ratios of all classes are larger than the corresponding ones obtained with the RAN baseline. From these results, we can conclude that suspicious news exploits emotions with the aim of misleading the reader. In the following, we present the results obtained by the proposed emotionally-infused model. <<</Emotion-based Model>>> <<<Emotionally-Infused Model>>> For the neural model, to reduce the computational cost, instead of cross-validation we take another $20\%$ of the training part as a validation set (in addition to the $20\%$ reserved for testing). As pretrained word embeddings, we use the 300-dimensional Google News Word2Vec embeddings, both in the neural network and in the W2V-LR baseline. For the classical machine learning baselines we use the Scikit-Learn Python library, and for the deep learning network we use the Keras library with TensorFlow as backend. To tune the hyper-parameters of the deep learning network we use the Hyperopt library, and to reduce overfitting we use early stopping. We apply Dropout after the dense layer in the emotional features branch (Dropc) as well as after the attention layer in the content branch (Dropd), before the concatenation. Since this is a multiclass classification task, we use the categorical cross-entropy loss function. A summary of the models' parameters for each dataset is presented in Table TABREF44. Table TABREF47 summarizes the performance of the proposed model in comparison to the baselines. We report macro-averaged precision, recall, and F1, as well as accuracy; to compare the models we rely on the macro-averaged metrics, since they average the results over all classes. The proposed baselines clearly achieve high results, with the LSTM baseline performing best on the news articles dataset. On Twitter the scenario is different: the BOW-SVM baseline outperforms the LSTM. To investigate the reason, we checked the coverage ratio of the embeddings used to represent the Twitter dataset (note that we exclude stop words when representing the input documents with the pretrained Google News word embeddings). We found that the coverage ratio of the embeddings is around $94\%$ in the news articles dataset, while in Twitter it is only around $70\%$; a sketch of the averaged-embedding representation behind the W2V-LR baseline is given below.
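The W2V-LR baseline and the coverage issue just discussed can be sketched as follows. This is an illustrative reconstruction: a toy embedding dictionary stands in for the pretrained 300-dimensional Google News word2vec vectors, and out-of-vocabulary tokens are simply skipped, which is exactly where low coverage on Twitter hurts the representation.

```python
# Sketch of the W2V-LR baseline: each document is the mean of its word embeddings
# (OOV words skipped), classified with logistic regression. `embeddings` is a toy
# placeholder; the paper uses the pretrained 300-d Google News word2vec vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

DIM = 300
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=DIM) for w in ["hoax", "panic", "election", "story"]}

def doc_vector(tokens, embeddings, dim=DIM):
    vecs = [embeddings[t] for t in tokens if t in embeddings]   # skip OOV tokens
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

docs = [["hoax", "panic", "wtf"], ["election", "story"],
        ["story", "panic"], ["hoax", "election"]]
labels = [1, 0, 0, 1]
X = np.stack([doc_vector(d, embeddings) for d in docs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```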
Given this low coverage, we fine-tuned the word embeddings during training to improve the document representation on Twitter, which is feasible since the Twitter dataset is larger. This fine-tuning contributed $1.9\%$ to the final macro-F1 result on Twitter (the result without tuning is $53.51\%$). Even so, the result obtained with the LSTM baseline is still lower than the one obtained with BOW-SVM. This experiment suggests that the weaker performance on Twitter may be due to the embeddings; we therefore tried different embeddings, but none of them improved the result. The second baseline (W2V-LR) points to the same issue: its macro-F1 result is competitive on the news articles dataset but much lower on Twitter. The purpose of the LSTM baseline is twofold: in addition to being a strong baseline, it also shows how much the emotional features contribute to the emotionally-infused network. EIN outperforms the baselines by a clear margin (around 2% on Twitter and 7% on news articles), especially on the news articles dataset; the margin between EIN and the best baseline is smaller on Twitter. The results also show that adding emotional features clearly boosts performance, as can be seen by comparing EIN to the LSTM baseline: EIN is superior on the news articles dataset (79.43%), and a similar pattern appears on Twitter, with a smaller margin (59.70%). The EIN results on Twitter show that emotional features compensate for the weak coverage of the word embeddings, improving performance and overcoming the BOW-SVM baseline. We observed before that the clickbait TP ratio in the news articles dataset is the highest, which indicates that the clickbait class is the easiest to detect from an emotional perspective. Therefore, to assess how our model separates the false information types, we apply dimensionality reduction with the t-distributed Stochastic Neighbor Embedding (T-SNE) technique BIBREF36 to project the document representations from the high-dimensional space onto a 2D plane. Concretely, we project the EIN embeddings extracted from the outputs of the Denseb layer (see Figure FIGREF48). We extract the embeddings twice: once at an early epoch (epoch 10) of the training phase and once at the last epoch. The aim of the early-epoch projection is to validate what we noticed before, namely that the clickbait class is easier to detect than the other classes. As the 10-epoch plot shows, the clickbait class needs only a few epochs to be separated from the other types, which supports what we found in the manual inspection of the classes' TP ratios. Despite this clear separation, there is still some overlap with real-news records. This result indicates that emotions in clickbaits play a key role in deceiving the reader. The figure also shows that the disinformation classes need more training epochs for a better separation: at this stage, real-news records overlap completely with the false information classes, and the false information classes overlap with each other. At the last epoch, in contrast, the classes are clearly separated from each other and, more importantly, from the real news, although a small overlap remains between satire and hoax records, as well as with a few propaganda records.
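The two-branch architecture and training choices described above (LSTM plus attention on the content side, a dense layer over the lexicon features on the emotional side, dropout before the concatenation, categorical cross-entropy, early stopping) can be summarized in a minimal Keras sketch. All layer sizes are assumed values rather than the tuned hyper-parameters of Table TABREF44, and the simple additive attention pooling below merely stands in for the attention layer of BIBREF32.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, callbacks

MAX_LEN, VOCAB, EMB_DIM = 300, 20000, 300   # assumed sizes, not the tuned values
N_EMO, N_CLASSES = 38, 5                    # hypothetical feature and class counts

# Content branch: word ids -> embeddings -> LSTM hidden states -> attention pooling.
words_in = layers.Input(shape=(MAX_LEN,), name="words")
x = layers.Embedding(VOCAB, EMB_DIM)(words_in)
h = layers.LSTM(128, return_sequences=True)(x)            # h_t for every timestep
scores = layers.Dense(1, activation="tanh")(h)            # unnormalised attention scores
alpha = layers.Softmax(axis=1)(scores)                    # weights over the time axis
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, alpha])
context = layers.Dropout(0.3)(context)                    # Drop_d (after attention)

# Emotional branch: lexicon feature vector -> dense_a -> Drop_c.
emo_in = layers.Input(shape=(N_EMO,), name="emotions")
emo = layers.Dense(64, activation="relu")(emo_in)         # dense_a
emo = layers.Dropout(0.3)(emo)

# Concatenation, dense_b, and softmax over the false-information classes.
merged = layers.Concatenate()([context, emo])
merged = layers.Dense(64, activation="relu")(merged)      # dense_b
out = layers.Dense(N_CLASSES, activation="softmax")(merged)

model = Model([words_in, emo_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
early = callbacks.EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)
# model.fit([word_ids, emo_feats], one_hot_labels, validation_split=0.2,
#           epochs=30, callbacks=[early])
```

On such a model, extracting the activations of dense$_b$ and projecting them with a T-SNE implementation (e.g. scikit-learn's TSNE) would reproduce the kind of visualization discussed for Figure FIGREF48.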
<<</Emotionally-Infused Model>>> <<<EIN as Clickbaits Detector>>> Given the results in Section SECREF37 and what we observe in Figure FIGREF48, EIN achieves a clear separability of the clickbait class. These observations motivate us to investigate EIN as a clickbait detector. Concretely, we test EIN on the source of our clickbait instances BIBREF33 in the news articles dataset. As mentioned previously, this dataset was originally built from two different text sources: for the clickbaits, the authors manually identified a set of online sites that publish many clickbait articles, whereas for the negative class they collected headlines from a corpus of Wikinews articles gathered in other research work. They took 7,500 samples from each class for the final version of the dataset. The authors also proposed a clickbait detector (Stop_Clickbait) that employed a combination of features: sentence structure (sentence length, average word length, the ratio of stop words to thematic words, and the longest separation between syntactically dependent words), word patterns (presence of a cardinal number at the beginning of the sentence, presence of unusual punctuation patterns), clickbait language (presence of hyperbolic words, common clickbait phrases, internet slang, and determiners), and N-gram features (word, Part-Of-Speech, and syntactic n-grams). Using these feature groups, the authors tested different classifiers, with SVM giving the state-of-the-art results. They reported Accuracy, Precision, Recall, and F1 and compared their approach to a baseline (Downworthy, an online web browser extension for clickbait detection). In this experiment, we consider the third baseline (LSTM) in order to observe the improvement brought by the emotional features in the EIN model. Unlike the previous experiments, this is a binary classification task; therefore, we use binary cross-entropy as loss function and replace the Softmax layer with a Sigmoid function. The new parameters for both the LSTM and EIN models are given in Table TABREF44. Table TABREF51 presents the results of the Stop_Clickbait approach, the LSTM baseline, and the EIN model. The results show that our baseline outperforms the proposed clickbait detector by a good margin, and that EIN in turn is superior to both the LSTM and the Stop_Clickbait detector. Taking emotions into account in the EIN deep learning approach thus improves detection, since clickbaits employ emotions to deceive the reader. <<</EIN as Clickbaits Detector>>> <<</Experiments and Results>>> <<<Discussion>>> The results show that detecting suspicious news is harder on Twitter than in news articles. Overall, the EIN results showed that emotional features improve the performance of our model, especially on the news articles dataset. We manually inspected the Twitter dataset and observed that the language of the tweets differs from that of the news articles: tweets contain many abbreviations (amp, wrt, JFK, etc.), abbreviated swear words (WTF, LMFO, etc.), informal phrasing, and typos, which reduces the coverage ratio of the word embeddings. We also noticed that suspicious news on Twitter is more often related to sexual issues. To validate these observations, we computed the mean value of sexual words using a list of sexual terms BIBREF37.
The mean value is the average number of times a sexual/offensive word appears in a tweet, normalized by the length of the tweet. This mean value is 0.003 in Twitter and 0.0024 in news articles. Similarly, suspicious news in Twitter contains more insulting words than in news articles, with a mean value of 0.0027 in Twitter against 0.0017 in news articles. In the following, we analyze false information from an emotional perspective, aiming to answer the remaining questions RQ2, RQ3, and RQ4. RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources? Intuitively, not all emotions contribute equally to the classification process, since some words may signal specific kinds of emotions rather than others. To investigate this point, we use Information Gain (IG) to measure the importance of each emotion in discriminating between real news and all the types of false news (multiclass task) in both the Twitter and the news articles datasets (see Figure FIGREF54). Before going through the feature-importance ranking, we note that the shapes of the emotion rankings are very similar in Twitter and in news articles. This indicates that, despite the differences in language, both sources have a similar overall emotion distribution; in other words, false news employs a similar emotional pattern in both text sources. Since the news language in Twitter is less clearly presented than in news articles, this observation could help to build a cross-source system trained on suspicious news from news articles to detect the corresponding ones in Twitter. Figure FIGREF54 also shows that "joy" is the most important emotion in both datasets, and that "despair" and "hate" contribute almost nothing to the classification process. The ranking of the features differs between the two sources: in the news articles dataset the most important emotions are "joy", "anticipation", "fear", and "disgust", respectively, whereas in Twitter they are "joy", "sadness", "fear", and "disgust". RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones? We measure statistically significant differences using the t-test on the emotions of real versus false news (binary task) in both datasets (Figure FIGREF55); these findings provide a deeper understanding of the EIN performance. The results show that "joy", "neg_emo", "ambiguous", "anticipation", "calmness", "disgust", "trust", and "surprise" show statistically significant differences between real and suspicious news in both datasets, while other emotions such as "despair" and "anger" show no statistical difference in either dataset. These results are generally consistent with the IG results of RQ2: in the IG analysis some emotions have a higher importance in one of the news sources ("sadness", "anger", and "fear" are more important in Twitter than in news articles, and the opposite holds for "hope"), and we observe the same pattern with the t-test. RQ4 What are the top-N emotions that discriminate false information types in both textual sources? The types of false information differ in the way they present the news to the reader, which raises the question of which emotions are most employed in each type. In Table TABREF57, we present the three emotions that contribute most to the classification of each type.
This can indicate to us what are the emotion types that are used mostly in each type of false information. Table TABREF57 shows that clickbaits express "surprise" and "negative emotion" at the most. This validates the definition of clickbaits as "attention redirection" by exploiting the reader and convincing him/her that there is an unexpected thing with negative emotion. The result of seeing "fear" in the top features in Twitter is interesting; one of the recent studies is presenting the hypothesis that says: curiosity is the best remedy for fear BIBREF38 based on psychological interpretations. Taking into account the definition of clickbaits as "attention redirection", looking at our results, we can proof this hypothesis. Furthermore, despite the language differences in both datasets, we obtain almost the same results, which emphasize our results. For hoaxes, it is not simple to interpret a specific pattern of emotions in the results. We might justify it by the fact that hoaxes are written to convince the reader of the validity of a story. Therefore, the writer is trying to present the story in a normal way (truthful) similar to a real story. Therefore, the top emotions are not unique to the hoax type. But what we find from the top hoaxes emotions in both datasets is that they are generally different except the emotion "like". Despite the natural narrative way of presenting the story, the analysis shows that the writer still uses "like" to grab reader's attention smoothly. Propaganda type has clearer emotional interpretation considering its definition. We find that propaganda expresses "joy", "fear" and at the same time "calmness" in the news articles. Both "joy" and "fear" are contrary from an emotional polar perspective, where "joy" shows the extreme of the positive emotions and "fear" the extreme negative, and at the same time, "calmness" is present. The emotional shifting between the two extremes is a clear attempt of opinion manipulation from an emotional perspective. We obtain a similar emotion set from Twitter, but instead of "joy" we get "hope". Lastly, satire is defined as a type of parody presented in a typical format of mainstream journalism, but in a similar way to irony and sarcasm phenomena BIBREF39. The results of the analysis show that "disgust" and "positive emotion" are present in both datasets, but we get "negative emotion" in the news articles and "sadness" in Twitter (both are placed in the negative side of emotions). We are interested in investigating the cause of the emotion "disgust" which appeared in the results from both datasets. We conduct a manual analysis on the text of the satire type in both datasets in order to shed some light on the possible causes. We notice that the satire language in the news often employs the emotion "disgust" to give a sense of humor. Figure FIGREF58 shows some examples from the news articles dataset highlighting the words that triggered the emotion "disgust". <<</Discussion>>> <<<Conclusions and Future Work>>> In this article we have presented an emotionally-infused deep learning network that uses emotional features to identify false information in Twitter and news articles sources. We performed several experiments to investigate the effectiveness of the emotional features in identifying false information. We validated the performance of the model by comparing it to a LSTM network and other baselines. The results on the two datasets showed that clickbaits have a simpler manipulation language where emotions help detecting them. 
This demonstrates that emotions play a key role in deceiving the reader. Based on this result, we investigated our model performance on a clickbaits dataset and we compared it to the state-of-the-art performance. Our model showed superior results near to 96% F1 value. Overall results confirmed that emotional features have boosted EIN model performance achieving better results on 3 different datasets (RQ1). These results emphasized the importance of emotional features in the detection of false information. In Twitter, false news content is deliberately sexual oriented and it uses many insulting words. Our analysis showed that emotions can help detecting false information also in Twitter. In the analysis section, we answered a set of questions regarding the emotions distribution in false news. We found that emotions have similar importance distribution in Twitter and news articles regardless of the differences in the used languages (RQ2). The analysis showed that most of the used emotions have statistical significant difference between real and false news (RQ3). Emotions plays a different role in each type of false information in line with its definition (RQ4). We found that clickbaits try to attract the attention of the reader by mainly employing the "surprise" emotion. Propagandas are manipulating the feelings of the readers by using extreme positive and negative emotions, with triggering a sense of "calmness" to confuse the readers and enforcing a feeling of confidence. Satire news instead use the "disgust" emotion to give a sense of humor. To sum up, we can say that the initial part of false news contains more emotions than the rest of document. Our approach exploit this fact for their detection. To the best of our knowledge, this is the first work that analyzes the impact of emotions in the detection of false information considering both social media and news articles. As a future work, the results of our approach as a clickbaits detector motivate us to develop for a clickbaits detector as a web browser extension. Also, we will study how the emotions flow inside the articles of each kind of false information, which is worthy to be investigated as the results of this work confirmed. <<</Conclusions and Future Work>>> <<</Title>>>
{ "references": [ "Abstract, Experiments and Results" ], "type": "disordered_section" }
1908.09951
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> An Emotional Analysis of False Information in Social Media and News Articles <<<Abstract>>> Fake news is risky since it has been created to manipulate the readers' opinions and beliefs. In this work, we compared the language of false news to the real one of real news from an emotional perspective, considering a set of false information types (propaganda, hoax, clickbait, and satire) from social media and online news articles sources. Our experiments showed that false information has different emotional patterns in each of its types, and emotions play a key role in deceiving the reader. Based on that, we proposed a LSTM neural network model that is emotionally-infused to detect false news. <<</Abstract>>> <<<Introduction>>> With the complicated political and economic situations in many countries, some agendas are publishing suspicious news to affect public opinions regarding specific issues BIBREF0. The spreading of this phenomenon is increasing recently with the large usage of social media and online news sources. Many anonymous accounts in social media platforms start to appear, as well as new online news agencies without presenting a clear identity of the owner. Twitter has recently detected a campaign organized by agencies from two different countries to affect the results of the last U.S. presidential elections of 2016. The initial disclosures by Twitter have included 3,841 accounts. A similar attempt was done by Facebook, as they detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections. False information is categorized into 8 types according to BIBREF1. Some of these types are intentional to deceive where others are not. In this work, we are interested in analyzing 4 main types, i.e. hoaxes, propagandas, clickbaits, and satires. These types can be classified into two main categories - misinformation and disinformation - where misinformation considers false information that is published without the intent to deceive (e.g. satire). Disinformation can be seen as a specific kind of false information with the aim to mislead the reader (e.g. hoax, propaganda, and clickbait). Propagandas are fabricated stories spread to harm the interest of a particular party. Hoaxes are similar to propagandas but the main aim of the writer is not to manipulate the readers' opinions but to convince them of the validity of a paranoia-fueled story BIBREF2. Clickbait is another type of disinformation that refers to the deliberate use of misleading headlines, thumbnails, or stories' snippets to redirect attention (for traffic attention). Satire is the only type of misinformation, where the writer's main purpose is not to mislead the reader, but rather to deliver the story in an ironic way (to entertain or to be sarcastic). The topic of fake news is gaining attention due to its risky consequences. A vast set of campaigns has been organized to tackle fake news. The owner of Wikipedia encyclopedia created the news site WikiTribune to encourage the evidence-based journalism. Another way of addressing this issue is by fact-checking websites. 
These websites like politifact.com, snopes.com and factchecking.org aim to debunk false news by manually assess the credibility of claims that have been circulated massively in online platforms. These campaigns were not limited to the English language where other languages such as Arabic have been targeted by some sites like fatabyyano.net. <<<Hypothesis>>> Trusted news is recounting its content in a naturalistic way without attempting to affect the opinion of the reader. On the other hand, false news is taking advantage of the presented issue sensitivity to affect the readers' emotions which sequentially may affect their opinions as well. A set of works has been done previously to investigate the language of false information. The authors in BIBREF3 have studied rumours in Twitter. They have investigated a corpus of true and false tweets rumours from different aspects. From an emotional point of view, they found that false rumours inspired fear, disgust, and surprise in their replies while the true ones inspired joy and anticipation. Some kinds of false information are similar to other language phenomena. For example, satire by its definition showed similarity with irony language. The work in BIBREF4 showed that affective features work well in the detection of irony. In addition, they confirmed that positive words are more relevant for identifying sarcasm and negative words for irony BIBREF5. The results of these works motivate us to investigate the impact of emotions on false news types. These are the research questions we aim to answer: RQ1 Can emotional features help detecting false information? RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources? RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones? RQ4 What are the top-N emotions that discriminate false information types in both textual sources? In this work, we investigate suspicious news in two different sources: Twitter and online news articles. Concerning the news articles source, we focus on the beginning part of them, since they are fairly long, and the emotional analysis could be biased by their length. We believe that the beginning part of false news articles can present a unique emotional pattern for each false information type since the writer in this part is normally trying to trigger some emotions in the reader. Throughout the emotional analysis, we go beyond the superficial analysis of words. We hope that our findings in this work will contribute to fake news detection. The key contributions of this article are: Model: We propose an approach that combines emotional information from documents in a deep neural network. We compare the obtained results with a set of baselines. The results show that our approach is promising. Analysis: We show a comprehensive analysis on two false information datasets collected from social media and online news articles, based on a large set of emotions. We compare the differences from an affective perspective in both sources, and obtain valuable insights on how emotions can contribute to detect false news. The rest of the paper is structured as follows; After a brief review of related work in Section SECREF2, Section SECREF3 introduces our emotionally-infused model. Then, we present the evaluation framework in Section SECREF4. Section SECREF5 reports the experiments and the results, followed by an analysis on the false information types from emotional perspective in Section SECREF6. 
Finally, the conclusions of this work are summarized in Section SECREF7. <<</Hypothesis>>> <<</Introduction>>> <<<Related Work>>> The work that has been done previously on the analysis of false information is rather small regarding the approaches that were proposed. In this section, we present some recent works on the language analysis and detection of false information. Recent attempts tried to analyze the language of false news to give a better understanding. A work done in BIBREF6 has studied the false information in Twitter from a linguistic perspective. The authors found that real tweets contain significantly fewer bias markers, hedges, subjective terms, and less harmful words. They also found that propaganda news targets morals more than satires and hoaxes but less than clickbaits. Furthermore, satirical news contains more loyalty and fewer betrayal morals compared to propaganda. In addition, they built a model that combined a set of features (graph-based, cues words, and syntax) and achieved a good performance comparing to other baselines (71% vs. 59% macro-F1). Another similar work BIBREF2 has been done to characterize the language of false information (propaganda, hoax, and satire) in online news articles. The authors have studied the language from different perspectives: the existence of weak and strong subjectivity, hedges, and the degree of dramatization using a lexicon from Wiktionary. As well, they employed in their study the LIWC dictionary to exploit the existence of personal pronouns, swear, sexual, etc. words. The results showed that false news types tend to use first and second personal pronouns more than truthful news. Moreover, the results showed that false news generally uses words to exaggerate (subjectives, superlatives, and modal adverbs), and specifically, the satire type uses more adverbs. Hoax stories tend to use fewer superlatives and comparatives, and propagandas use relatively more assertive verbs. Moving away from these previous false information types, the work in BIBREF3 has focused on analyzing rumours in Twitter (from factuality perspective: True or False). They analyzed about 126,000 rumours and found that falsehood widespread significantly further, faster, deeper, and more broadly than truth in many domains. In addition, they found that false rumours are more novel than truthful ones, which made people more likely to share them. From an emotional perspective, they found that false rumours triggered "fear", "disgust", and "surprise" in replies while truthful ones triggered "anticipation", "sadness", "joy", and "trust". Another work BIBREF7 has studied the problem of detecting hoaxes by analyzing features related to the content in Wikipedia. The work showed that some features like hoaxes articles' length as well as the ratio of wiki markups (images, references, links to other articles and to external URLs, etc.) are important to discriminate hoaxes from legitimate articles. Many approaches have been proposed on fake news detection. In general, they are divided into social media and news claims-based approaches. The authors in BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 have proposed supervised methods using recurrent neural networks or by extracting manual features like a set of regular expressions, content-based, network-based etc. As an example, the work by BIBREF13 assessed the credibility of tweets by analyzing trending topics. 
They used message-based, user-based, and propagation-based features, and they found that some features related to the user information like user's age, number of followers, statuse counts etc. have helped the most to discriminate truthful from deceitful tweets. Other news claims-based approaches BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 have been mainly focusing on inferring the credibility of the claims by retrieving evidences from Google or Bing search engines. These approaches have employed a different set of features starting from manual features (e.g. cosine similarity between the claims and the results, Alexa Rank of the evidence source, etc.) to a fully automatic approach using deep learning networks. A recent trend started to appear and is trying to approach the detection of fake news from a stance perspective. The aim is to predict how other articles orient to a specific fact BIBREF19, BIBREF20, BIBREF21. <<</Related Work>>> <<<Emotionally-infused Model>>> In this section we describe the Emotionally-Infused Network we propose (EIN). <<<Emotional Lexicons>>> Several emotional models well-grounded in psychology science have been proposed, such as the ones by Magda Arnold BIBREF22, Paul Ekman BIBREF23, Robert Plutchik BIBREF24, and Gerrod Parrot BIBREF25. On the basis of each of them, many emotional resources (lexicons) were built in the literature. In this work, we consider several emotional resources to increase the coverage of the emotional words in texts as well to have a wider range of emotions in the analysis. Concretely, we use EmoSenticNet, EmoLex, SentiSense, LIWC and Empath: EmoSenticNet BIBREF26 is a lexical resource that assigns WordNet-Affect emotion labels to SenticNet concepts. It has a total of 13,189 entries annotated using the six Ekman's basic emotions. EmoLex BIBREF27 is a word-emotion association lexicon that is labeled using the eight Plutchik's emotions. This lexicon contains 14,181 words. SentiSense BIBREF28 is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. SentiSense has 5,496 words labeled with emotions from a set of 14 emotional categories, which is an edited version of the merge between Arnold, Plutchik, and Parrott models. LIWC BIBREF29 is a linguistic dictionary that contains 4,500 words categorized to analyze psycholinguistic patterns in text. Linguistic Inquiry and Word Count (LIWC) has 4 emotional categories: "sadness", "anger", "positive emotion", and "negative emotion". Empath BIBREF30 is a tool that uses deep learning and word embeddings to build a semantically meaningful lexicon for concepts. Empath uses Parrott's model for the emotional representation, but we use only the primary emotions (6 emotions) in the Pattrott's hierarchy ("love", "joy", "surprise", "anger", "sadness", "fear"). In our study we consider the 17 emotions that we shown in Figure FIGREF14. <<</Emotional Lexicons>>> <<<Model>>> We choose an Long short-term memory (LSTM) BIBREF31 that takes the sequence of words as input and predicts the false information type. The input of our network is based on word embedding (content-based) and emotional features (see Figure FIGREF24). <<</Model>>> <<<Input Representation>>> Our network consists of two branches. In the content-based one, we use an embedding layer followed by a LSTM layer. Then, we add an attention layer BIBREF32 to make this branch focus on (highlighting) particular words over others . 
The attention mechanism assigns a weight to each word vector result from the LSTM layer with a focus on the classification class. The input representation for this branch is represented as follows: the input sentence $S$ of length $n$ is represented as $[S\textsubscript {1}, S\textsubscript {2} .. S\textsubscript {n}]$ where $S\textsubscript {n} \in {\rm I\!R}^d$; ${\rm I\!R}^d$ is a d-dimensional word embedding vector of the $i$-th word in the input sentence. The output vectors of the words are passed to the LSTM layer, where the LSTM learns the hidden state $h\textsubscript {t}$ by capturing the previous timesteps (past features). The produced hidden state $h\textsubscript {t}$ at each time step is passed to the attention layer which computes a "context" vector $c\textsubscript {t}$ as the weighted mean of the state sequence $h$ by: Where $T$ is the total number of timesteps in the input sequence and $\alpha \textsubscript {tj}$ is a weight computed at each time step $j$ for each state hj. This output vector is then concatenated with the output from the densea (see Figure FIGREF24) layer and passed to the denseb layer, which precedes a final Softmax function to predict the output classes. Since the content-based branch is concatenated with the other emotional-based branch. On the other hand, the input representation for the emotional-based branch is defined as follows: we have $N$ emotional lexicons $L\textsubscript {n}$ where $n\in [1, 5]$, each lexicon has $M$ number of emotions depending on the emotion model that the lexicon uses (e.g. Plutchik, Arnold, etc.). The emotion vector $E\textsubscript {m}$ of an input document using the $n$-th emotional lexicon is $L\textsubscript {n}E\textsubscript {m}$. In our implementation, the emotional vector $E\textsubscript {m}$ of a Lexicon $L\textsubscript {n}$ is built using word frequency and normalized by the input sentence's length. Each input sentence is represented using: Where $v \in {\rm I\!R}^q$ and $q$ is: <<</Input Representation>>> <<</Emotionally-infused Model>>> <<<Evaluation Framework>>> <<<Datasets>>> Annotated data is a crucial source of information to analyze false information. Current status of previous works lacks available datasets of false information, where the majority of the works focus on annotating datasets from a factuality perspective. However, to analyze the existence of emotions across different sources of news, we rely on two publicly available datasets and a list contains suspicious Twitter accounts. <<<News Articles>>> Our dataset source of news articles is described in BIBREF2. This dataset was built from two different sources, for the trusted news (real news) they sampled news articles from the English Gigaword corpus. For the false news, they collected articles from seven different unreliable news sites. These news articles include satires, hoaxes, and propagandas but not clickbaits. Since we are interested also in analyzing clickbaits, we slice a sample from an available clickbait dataset BIBREF33 that was originally collected from two sources: Wikinews articles' headlines and other online sites that are known to publish clickbaits. The satire, hoax, and propaganda news articles are considerably long (some of them reach the length of 5,000 words). This length could affect the quality of the analysis as we mentioned before. We focus on analyzing the initial part of the article. Our intuition is that it is where emotion-bearing words will be more frequent. 
Therefore, we shorten long news articles to a maximum length of N words (N=300). We choose the value of N based on the length of the shortest articles. Moreover, we process the dataset by removing very short articles, redundant articles, and articles that do not have textual content. <<</News Articles>>> <<<Twitter>>> For this dataset, we rely on a list of several Twitter accounts for each type of false information from BIBREF6. This list was created based on public resources that annotated suspicious Twitter accounts. The authors of BIBREF6 built a dataset by collecting tweets from these accounts and made it available. For the real news, we merge this list with another 32 Twitter accounts from BIBREF34. In this work we could not use the previous dataset, so we decided to collect tweets again. For each of these accounts, we collected the last M tweets posted (M=1000). By investigating these accounts manually, we found that many tweets just contain links without textual news. Therefore, to ensure the quality of the crawled data, we chose a high value for M (also to have enough data). After the collecting process, we processed these tweets by removing duplicated tweets, very short tweets, and tweets without textual content. Table TABREF35 shows a summary of both datasets. <<</Twitter>>> <<</Datasets>>> <<<Baselines>>> Emotions have been used in many natural language processing tasks and have shown their efficiency BIBREF35. We aim at investigating their efficiency for detecting false information. In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only and compared it to two baselines. Our aim is to investigate whether the emotional features alone can detect false news. The two baselines of this model are the Majority Class baseline (MC) and the Random selection baseline (RAN). We compare the EIN model to different baselines: a) The first one is bag-of-words with a support vector machine classifier (BOW-SVM). We tested different classifiers and chose SVM since it gives the highest result in 10-fold Cross Validation (CV); b) The second baseline is based on word embeddings: for each input document we extract an average word embedding vector by taking the mean of the embeddings of the document's words. Similarly, we tested different classifiers and the Logistic Regression classifier showed the best performance (W2V-LR); c) The last baseline is the same as our neural architecture but without the emotional features branch: an LSTM layer followed by attention and dense layers. <<</Baselines>>> <<</Evaluation Framework>>> <<<Experiments and Results>>> <<<Emotion-based Model>>> In our experiments, we use $20\%$ of each of the datasets for testing and we apply 10-fold cross-validation on the remaining part for selecting the best classifier as well as for tuning it. We tested many classifiers and finally chose Random Forest for both datasets since it obtained the best results. Table TABREF39 presents the classification results on both datasets. The results on both datasets show that emotional features clearly detect false news, compared to the baselines (RQ1). The emotional features perform better on the news articles dataset than on the tweets one. We are also interested in investigating how good the emotional features are at detecting each class compared to the RAN baseline. We choose the RAN baseline since it shows better results with regard to the macro-F1 score.
To do so, we investigated the True Positive (TP) classification ratio for each class in each dataset. The clickbait class shows the highest TPs compared to the other classes. From this we can infer that clickbaits exploit emotions much more than the other classes to deceive the reader. It is worth mentioning that for the hoax class the proposed approach is better than the random baseline only by a small margin ($4\%$ difference). This could be justified by the fact that hoaxes, by definition, try to convince the reader of the credibility of a false story. Hence, the writer tries to deliver the story in a normal way without raising the reader's suspicion. The number of instances of the false information classes in the news articles dataset is the same; therefore, there is no majority class the classifier can be biased towards. This is not the case for the Twitter dataset, which is not balanced, so its results are biased by the majority class (propaganda). But in general, all the classes' TP ratios are larger than the corresponding ones obtained with the RAN baseline. From these results, we can conclude that suspicious news exploits emotions with the aim of misleading the reader. In the following, we present the results obtained by the proposed emotionally-infused model. <<</Emotion-based Model>>> <<<Emotionally-Infused Model>>> In the neural model, to reduce the computational cost, instead of the cross-validation process we take another $20\%$ from the training part as a validation set (separate from the $20\%$ prepared for testing). For the pretrained word embeddings, we use the Google News Word2Vec 300-dimensional embeddings in the neural network as well as in the W2V-LR baseline. For the classical machine learning classifiers of the baselines, we use the Scikit-Learn Python library, and for the deep learning network we use the Keras library with TensorFlow as backend. To tune the hyper-parameters of our deep learning network, we use the Hyperopt library, and to reduce the effect of overfitting, we use the early stopping technique. In Table TABREF44 we summarize the parameters with respect to each dataset. We have to mention that we use Dropout after the dense layer in the emotional features branch (Dropc) as well as after the attention layer in the other branch (Dropd), before the concatenation process. Since it is a multiclass classification task, we use the categorical cross-entropy loss function. A summary of the models' parameters is presented in Table TABREF44. Table TABREF47 summarizes the performance of the proposed model in comparison to the baselines. We report macro-precision, macro-recall, and macro-F1, as well as accuracy; to compare the models we consider the macro-averaged metrics since they average the results over all classes. The baselines that we propose clearly show high results, with the LSTM baseline having the best performance on the news articles dataset. In Twitter the scenario is different: the BOW-SVM baseline shows a higher performance than the LSTM. We are interested in investigating the reason behind that. Therefore, we checked the coverage ratio of the used embeddings in the Twitter dataset. We have to mention that we excluded stop words when representing the input documents using the pre-trained Google News word embeddings. In the news articles dataset, we found that the coverage ratio of the embeddings is around $94\%$, while in Twitter it is around $70\%$.
Therefore, we tuned the word embeddings during the training process to improve the documents' representation, since we have a larger dataset from Twitter. This process contributed $1.9\%$ to the final macro-F1 result in Twitter (the result without tuning is $53.51\%$). Even so, the results obtained with the LSTM baseline are still lower than those obtained with BOW-SVM. This experiment gives us some intuition that the weaker performance on Twitter may be due to the embeddings. Therefore, we tried different embeddings, but none of them improved the result. The second baseline (W2V-LR) pointed to the same issue regarding the embeddings: the W2V-LR macro-F1 result on the news articles dataset is competitive, while it is much lower on Twitter. The usage of the LSTM baseline is twofold: in addition to being a good baseline, it also shows how much the emotional features contribute to the emotionally-infused network. EIN outperforms the baselines by a large margin (around 2% in Twitter and 7% in news articles), especially on the news articles dataset. The margin between EIN and the best baseline is lower in the Twitter dataset. The results also show that combining emotional features clearly boosts the performance. We can see the improvement by comparing the results of EIN to those of the LSTM. EIN shows superior results on the news articles dataset with regard to the LSTM (79.43%). A similar case appears in the Twitter dataset but with a lower margin (59.70%). The results of EIN on the Twitter dataset show that emotional features compensate for the weak coverage of the word embeddings, improving the performance and overcoming the BOW-SVM baseline. We observed before that the clickbait TP ratio in the news articles dataset is the highest one, which points out that the clickbait class is less difficult to detect, specifically from an emotional perspective. Therefore, in order to assess how our model separates false information types, we employ dimensionality reduction using the t-distributed Stochastic Neighbor Embedding (t-SNE) technique BIBREF36 to project the documents' representations from a high-dimensional space onto a 2D plane. Thus, we project the embeddings in EIN by extracting them from the outputs of the Denseb layer (see Figure FIGREF48). We extract the embeddings twice, once at an early epoch (epoch 10) of the training phase and once at the last epoch. Our aim with the early-epoch projection is to validate what we have noticed: the clickbait class is less difficult to detect compared to the other classes. As we can notice in the 10-epoch plot, the clickbait class needs only a few epochs to be separated from the other types, which supports what we found previously in the manual investigation of the classes' TP ratios. Despite this clear separation, there is still some overlap with real-news records. This result points out that emotions in clickbaits play a key role in deceiving the reader. Also, the figure shows that the disinformation classes still need more training epochs for better separation: real-news records fully overlap with the false information classes, and the false information classes overlap with each other. On the other hand, at the last epoch the classes are clearly separated from each other and, more importantly, from the real news. In general, though, there is still a small overlap between satires and hoaxes, as well as with a few records from the propaganda class.
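As an illustration only (not the authors' code), the projection step described above can be sketched as follows; the model handle, the layer name "dense_b" and the held-out arrays are assumed names standing for the trained EIN, its Denseb layer and the test inputs and labels.

```python
# Minimal sketch of the t-SNE projection of intermediate document embeddings.
# All argument names are assumptions; they are not taken from the paper's code.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from tensorflow.keras.models import Model

def plot_embedding_projection(ein_model, X_test, y_test, layer_name="dense_b"):
    """Extract one dense layer's activations and scatter-plot their 2D t-SNE map."""
    # Sub-model that stops at the chosen intermediate layer.
    embedder = Model(inputs=ein_model.input,
                     outputs=ein_model.get_layer(layer_name).output)
    doc_embeddings = embedder.predict(X_test)  # X_test holds the inputs of both branches
    # Non-linear projection of the document embeddings onto a 2D plane.
    coords = TSNE(n_components=2, random_state=0).fit_transform(doc_embeddings)
    # One colour per class (real news, clickbait, hoax, propaganda, satire).
    for label in np.unique(y_test):
        mask = np.asarray(y_test) == label
        plt.scatter(coords[mask, 0], coords[mask, 1], s=5, label=str(label))
    plt.legend()
    plt.show()
```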
<<</Emotionally-Infused Model>>> <<<EIN as Clickbaits Detector>>> From the previous results in Section SECREF37, as well as from what we notice in Figure FIGREF48, EIN obtains a clear separability of the clickbait class. These observations motivate us to investigate EIN as a clickbait detector. Concretely, we test EIN on the source of our clickbait instances BIBREF33 in the news articles dataset. As we mentioned previously, this dataset was originally built using two different text sources. For clickbaits, the authors manually identified a set of online sites that publish many clickbait articles. For the negative class, they collected headlines from a corpus of Wikinews articles collected in other research work. They took 7,500 samples from each class for the final version of the dataset. The authors also proposed a clickbait detector model (Stop_Clickbait) that employed a combination of features: sentence structure (sentence length, average length of words, the ratio of the number of stop words to the number of thematic words, and the longest separation between syntactically dependent words), word patterns (presence of a cardinal number at the beginning of the sentence, presence of unusual punctuation patterns), clickbait language (presence of hyperbolic words, common clickbait phrases, internet slang and determiners), and N-gram features (word, Part-Of-Speech, and syntactic n-grams). Using this set of feature groups, the authors tested different classifiers, with SVM showing the state-of-the-art results. They considered Accuracy, Precision, Recall and F1 to compare their approach to a baseline (an online web browser extension for clickbait detection called Downworthy). In this experiment, we consider the third baseline (LSTM) to observe the improvement brought by the emotional features in the EIN model. Different from the previous experiments, this is a binary classification task. Therefore, we use binary cross-entropy as the loss function and we change the Softmax layer to a Sigmoid function. The new parameters for both the LSTM and EIN models are given in Table TABREF44. In Table TABREF51 we present the results of the Stop_Clickbait approach, the LSTM baseline, and the EIN model. The results show that our baseline outperforms the proposed clickbait detector by a good margin. Furthermore, the results of EIN are superior to both the LSTM and the Stop_Clickbait detector. Considering emotions in the EIN deep learning approach improved the detection of false information. This is due to the fact that in clickbaits emotions are employed to deceive the reader. <<</EIN as Clickbaits Detector>>> <<</Experiments and Results>>> <<<Discussion>>> The results show that detecting suspicious news in Twitter is harder than in news articles. Overall, the results of EIN showed that emotional features improve the performance of our model, especially in the case of the news articles dataset. We manually inspected the Twitter dataset and observed that the language of the tweets differs from that of the news articles. We found that news in Twitter has many abbreviations (amp, wrt, JFK...etc.), abbreviations of swear words (WTF, LMFO...etc.), informal language, and typos. This reduces the coverage ratio of the word embeddings. We also noticed that suspicious news in Twitter is more related to sexual issues. To validate our observations, we extracted the mean value of sexual words using a list of sexual terms BIBREF37.
The mean value is the average number of times a sexual/bad word appears in a tweet, normalized by the length of the tweet. The mean value in Twitter is 0.003 while in news articles it is 0.0024. Similarly, suspicious news in Twitter presented more insulting words than in news articles, with a mean value of 0.0027 in Twitter and 0.0017 in news articles. In the following, we focus on analyzing false information from an emotional perspective, aiming to answer the rest of the questions: RQ2, RQ3, and RQ4. RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources? Intuitively, the emotions' contribution to the classification process is not the same, since some words could manifest specific kinds of emotions rather than others. To investigate this point, we use Information Gain (IG) in order to identify the importance of emotions in discriminating between real news and all the other types of false news (multiclass task) in both the Twitter and news articles datasets (see Figure FIGREF54). Before going through the ranking of feature importance, we notice that the emotion ranking shapes are very similar in both Twitter and news articles. This indicates that, despite the fact that the language is different, both sources have a similar overall emotion distribution. In other words, false news employs a similar emotional pattern in both text sources. Since the news language in Twitter is not presented as clearly as in news articles, this observation can help to build a cross-source system that is trained on suspicious news from news articles to detect the corresponding ones in Twitter. Figure FIGREF54 also shows that the emotion "joy" is the most important emotion in both datasets. It also shows that "despair" and "hate" are almost not used in the classification process. The ranking of the features in the two sources is different: in the news articles dataset the top important emotions are "joy", "anticipation", "fear", and "disgust", respectively, while the top ones in Twitter are "joy", "sadness", "fear", and "disgust". RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones? We measure statistically significant differences using the t-test on emotions across real news and false news (binary task) in both datasets in Figure FIGREF55. These findings provide a deeper understanding of the EIN performance. The results show that "joy", "neg_emo", "ambiguous", "anticipation", "calmness", "disgust", "trust" and "surprise" have statistically significant differences between real and suspicious news in both datasets. Some other emotions, such as "despair" and "anger", have no significant difference in either dataset. It turns out that the results we obtain are generally consistent with the IG results in research question RQ2. We notice in the IG analysis that some emotions have a higher importance in one of the news sources: "sadness", "anger", and "fear" have a higher importance in Twitter than in news articles, and the opposite holds for "hope". We observe the same findings using the t-test. RQ4 What are the top-N emotions that discriminate false information types in both textual sources? False information types are different in the way they present the news to the reader. This raises a question: what are the top employed emotions in each type of false information? In Table TABREF57, we present the first three emotions that contribute the most to the classification of each type.
This can indicate which emotion types are used the most in each type of false information. Table TABREF57 shows that clickbaits mostly express "surprise" and "negative emotion". This is in line with the definition of clickbaits as "attention redirection": they exploit the reader by convincing him/her that there is something unexpected, framed with negative emotion. The presence of "fear" among the top features in Twitter is interesting; a recent study presents the hypothesis, based on psychological interpretations, that curiosity is the best remedy for fear BIBREF38. Taking into account the definition of clickbaits as "attention redirection", our results support this hypothesis. Furthermore, despite the language differences between the two datasets, we obtain almost the same results, which reinforces our findings. For hoaxes, it is not simple to interpret a specific pattern of emotions in the results. We might justify this by the fact that hoaxes are written to convince the reader of the validity of a story. The writer therefore tries to present the story in a normal (truthful) way, similar to a real story, so the top emotions are not unique to the hoax type. What we do find from the top hoax emotions in both datasets is that they are generally different, except for the emotion "like". Despite the natural narrative way of presenting the story, the analysis shows that the writer still uses "like" to grab the reader's attention smoothly. The propaganda type has a clearer emotional interpretation considering its definition. We find that propaganda expresses "joy", "fear" and at the same time "calmness" in the news articles. "Joy" and "fear" are opposites from an emotional polarity perspective, "joy" showing the extreme of the positive emotions and "fear" the extreme of the negative ones, and at the same time "calmness" is present. The emotional shifting between the two extremes is a clear attempt at opinion manipulation from an emotional perspective. We obtain a similar emotion set from Twitter, but instead of "joy" we get "hope". Lastly, satire is defined as a type of parody presented in the typical format of mainstream journalism, in a way similar to the irony and sarcasm phenomena BIBREF39. The results of the analysis show that "disgust" and "positive emotion" are present in both datasets, but we get "negative emotion" in the news articles and "sadness" in Twitter (both on the negative side of emotions). We are interested in investigating the cause of the emotion "disgust", which appeared in the results from both datasets. We conduct a manual analysis of the satire texts in both datasets in order to shed some light on the possible causes. We notice that the satire language in the news often employs the emotion "disgust" to give a sense of humor. Figure FIGREF58 shows some examples from the news articles dataset highlighting the words that triggered the emotion "disgust". <<</Discussion>>> <<<Conclusions and Future Work>>> In this article we have presented an emotionally-infused deep learning network that uses emotional features to identify false information in Twitter and news articles sources. We performed several experiments to investigate the effectiveness of the emotional features in identifying false information. We validated the performance of the model by comparing it to an LSTM network and other baselines. The results on the two datasets showed that clickbaits have a simpler manipulation language, where emotions help in detecting them.
This demonstrates that emotions play a key role in deceiving the reader. Based on this result, we investigated our model's performance on a clickbait dataset and compared it to the state-of-the-art performance. Our model showed superior results, with an F1 value near 96%. The overall results confirmed that emotional features boost the EIN model's performance, achieving better results on 3 different datasets (RQ1). These results emphasized the importance of emotional features in the detection of false information. In Twitter, false news content is deliberately sexually oriented and uses many insulting words. Our analysis showed that emotions can help to detect false information in Twitter as well. In the analysis section, we answered a set of questions regarding the emotion distribution in false news. We found that emotions have a similar importance distribution in Twitter and news articles regardless of the differences in the language used (RQ2). The analysis showed that most of the considered emotions have a statistically significant difference between real and false news (RQ3). Emotions play a different role in each type of false information, in line with its definition (RQ4). We found that clickbaits try to attract the attention of the reader mainly by employing the "surprise" emotion. Propaganda manipulates the feelings of the readers by using extreme positive and negative emotions, while triggering a sense of "calmness" to confuse the readers and enforce a feeling of confidence. Satire news instead uses the "disgust" emotion to give a sense of humor. To sum up, we can say that the initial part of false news contains more emotions than the rest of the document, and our approach exploits this fact for their detection. To the best of our knowledge, this is the first work that analyzes the impact of emotions on the detection of false information considering both social media and news articles. As future work, the results of our approach as a clickbait detector motivate us to develop a clickbait detector as a web browser extension. Also, we will study how the emotions flow inside the articles of each kind of false information, which is worth investigating, as the results of this work confirmed. <<</Conclusions and Future Work>>> <<</Title>>>
{ "references": [ "Introduction, Experiments and Results" ], "type": "disordered_section" }
1911.11698
In the given paper, there are two sections whose positions are swapped, leading to an ill-organised paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Doc2Vec on the PubMed corpus: study of a new approach to generate related articles <<<Abstract>>> PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the "similar articles" section, allowing the end-user to find scientific articles linked to the consulted document in terms of context. The aim of this study is to analyze whether it is possible to replace the statistical model PubMed Related Articles (pmra) with a document embedding method. The Doc2Vec algorithm was used to train models allowing us to vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra. The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm. While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly regarding the PV-DBOW architecture. In contrast, the human evaluation, without any clear agreement between evaluators, calls for future studies to better understand this difference between PV-DBOW and the pmra algorithm. <<</Abstract>>> <<<Abstract>>> Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in terms of context. The aim of this study is to analyze whether it is possible to replace the statistical model PubMed Related Articles (pmra) with a document embedding method. Methods The Doc2Vec algorithm was used to train models allowing us to vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra. Results The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm.
Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly regarding the PV-DBOW architecture. In contrast, the human evaluation, without any clear agreement between evaluators, calls for future studies to better understand this difference between PV-DBOW and the pmra algorithm. <<</Abstract>>> <<<Background>>> <<<PubMed>>> PubMed is the largest database of bio-medical articles worldwide, with more than 29,000,000 freely available abstracts. Each article is identified by a unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a related articles search service, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents titles of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of other PMIDs, each associated with the similarity score computed by the pmra (pubmed related article) model BIBREF1. <<</PubMed>>> <<<The pmra model>>> In the pmra model, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find the document C relevant when reading the document D is calculated. For this purpose, the authors introduced the concept of eliteness. Briefly, a topic $S_{i}$ is presented as an elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This allows bringing closer documents that share a maximum of elite topics. In the article presenting the pmra model, the authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similar score is computed using the MeSH terms associated with both documents D and C. Such an indexing is highly time-consuming and has to be manually performed. <<</The pmra model>>> <<<Documents embedding>>> Nowadays, embedding models allow representing a text as a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input to deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two document vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common to all texts from a given corpus C on which it was trained.
Then, each document $D_{x}$ from C will be assigned a randomly initialised vector of fixed length, which will be concatenated with the vectors of the words composing $D_{x}$ during training (word and document vectors share the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function that back-propagates errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to the PV-DM, the main difference being the goal of the final classifier. Instead of concatenating the document vector with word vectors, the goal here is to output the words of this window just by using the mathematical representation of the document. <<</Documents embedding>>> <<<Related Work>>> Doc2Vec has been used for many cases of similar document retrieval. In 2016, Lee et al. used D2V to cluster positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentence similarity scoring BIBREF5. Recently, studies started to use document embeddings on the PubMed corpus. In 2017, Gargiulo et al. used a combination of word vectors coming from the abstract to bring closer similar documents from PubMed BIBREF6. The same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their accuracy measurement task consisted in retrieving documents having a small cosine distance to the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the algorithm sent2vec BIBREF8, BIBREF9. However, their evaluation task was based on public sentence similarity datasets, whereas the goal here is to embed entire abstracts as vectors and use them to search for similar articles, in comparison with the pmra model. In 2008, the related articles feature of PubMed was compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study so far has been designed to compare document embeddings and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to determine what impacts this proximity the most. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities. <<</Related Work>>> <<</Background>>> <<<Methods>>> <<<Material>>> During this study, the optimisation of the model’s parameters and one of the evaluation tasks require MeSH terms associated with the abstracts from PubMed. Briefly, the MeSH is a medical terminology used to index documents on PubMed in order to perform keyword-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads of October 5th, 2018 BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted.
The titles and abstracts were lowered, tokenized and concatenated to compose the PubMed documents corpus. <<</Material>>> <<<Optimisation>>> Among all the parameters available to tune the D2V algorithm released by Gensim, six were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training architecture used (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector. A list of possible values was defined for each of these six parameters. All possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% was then sent to each trained model, which was queried for the top-ten closest articles. For each model, a final accuracy score was calculated as the average percentage of common MeSH terms between each document $D_{i}$ from the 15,000 extracted texts and its returned top-ten closest documents. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM. <<</Optimisation>>> <<<Training>>> The final models were trained on a server powered by four XEON E7 (144 threads) and 1 TB of RAM. Among the total corpus (16,048,372 documents), 1% (160,482) was extracted as a test set (named TeS) and was discarded from the training. The final models were trained on 15,887,890 documents representing the training set called TrS. <<</Training>>> <<<Evaluation>>> The goal being to assess whether D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed, as seen in figure FIGREF9. These tasks were designed to cover all levels of similarity, from the most general (the context) to character-level similarity. Indeed, a reliable algorithm to find related documents should be able to bring closer texts sharing a similar context, some important ideas (word stems), or an amount of non-stemmed vocabulary (e.g. verb tenses are taken into account), and should not be based on raw character similarity (two documents sharing the same proportion of the letter “A” or having a similar length should not be brought together if they do not exhibit higher-level similarity). <<<String length>>> To assess whether a similar length could lead to convergence of two documents, the size of the query document $D_{x}$ has been compared with that of the top-close document $C_{x}$ for 10,000 documents randomly selected from the TeS after some pre-processing steps (stopwords and spaces were removed from both documents). <<</String length>>> <<<Words co-occurrences>>> A matrix of word co-occurrences was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$.
All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were listed, 500 couples were randomly selected, and the number of times each of them co-occurred was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their word content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm. <<</Words co-occurrences>>> <<<Stems co-occurrences>>> The evaluation task explained above was also applied to 10,000 stemmed texts (using Gensim’s PorterStemmer to keep only the words' roots). The influence of conjugation forms or other suffixes can thus be assessed. <<</Stems co-occurrences>>> <<<MeSH similarity>>> It is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V. <<</MeSH similarity>>> <<<Manual evaluation>>> Among all documents contained in the TeS, 10 articles $D_{x}$ have been randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, regarding the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$, and the relevance between $D_{x_i}$ and each of the top-ten documents was blindly assessed on a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators have been asked to rank publications according to their perceived proximity to the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation. <<</Manual evaluation>>> <<</Evaluation>>> <<</Methods>>> <<<Results>>> <<</Results>>> <<<Discussion>>> In this study, the ability of D2V to infer similarity between biomedical abstracts has been compared with that of the pmra, the algorithm currently used in PubMed. Regarding the string length task, even if the slopes of the trend lines are very close to zero, a slight negative correlation is observed between the difference in number of characters and the scores calculated by PV-DBOW and pmra. This result should be put into perspective: it was expected that two abstracts differing in their number of characters are more likely to differ in terms of context. The longer text can treat more subjects with different words (explaining D2V’s results) or be associated with more MeSH labels (explaining pmra's). The word and stem content analyses did not show any particular correlation between common words/stems and the scores computed by the two D2V models or pmra. The inverse could have been expected, given the way pmra links documents (using common terms between documents). The score brought to the pmra model by the MeSH terms should be quite important for the final scoring formula.
However, among all possible couples of words between two documents, only 500 were randomly selected, due to computational limits. A random sampling effect could have led to these results. D2V takes into account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge or analysis of the documents is needed. The pmra is based (in addition to words) on the manual MeSH indexing of the documents, even if this aspect was not discussed in Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels to documents from PubMed. The result displayed in figure FIGREF23 could have been expected for the pmra algorithm, this model using the MeSH terms in the statistical formula used to link documents, as well as elite or eliteness terms. It was thus expected that two documents sharing many indexing labels would be seen as close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters used to train the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely from its abstract and title. Regarding the manual evaluation, the D2V PV-DBOW model was rated far lower than the pmra model. Its results were judged as not accurate more than three times as often as those of PubMed's model. Regarding the ranking of the results, the average position of the pmra is centred around 7, while D2V's is around 14. However, the real significance of these results should be put into perspective: the agreement between the four annotators is only moderate and no general consensus can be extracted. This study also has some limitations. First, the MeSH indexing of documents on PubMed can occur on full-text data, while both the optimisation of the hyper-parameters and one evaluation task are based on the abstracts' indexing. However, this bias should have a limited impact on the results. The indexing being based on the main topics of the documents, these subjects should also be cited in the abstract. Regarding this manual indexing, a bias is introduced by the indexers. It is well known in the information retrieval community that intra- and inter-indexer biases exist. As the parameter optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms which are selected according to the full text of the articles. In other words, this optimisation assumed an abstract is enough to semantically represent the whole text. But this is not completely true. If it were, MeSH terms would not have been selected on full texts in the first place. Also, the principle that a PubMed related articles feature has to return articles which have a lot of MeSH terms in common has been followed throughout this work. To go further, as mentioned in the paper presenting D2V, the concatenation of vectors from both PV-DM and PV-DBOW for a single document could lead to a better accuracy. A third model could be designed by merging the two presented here. Another open question in the text embedding community is the part-of-speech tagging of the text before sending it to the model (during both training and use). This additional information could lead to a better understanding of the text, particularly due to the disambiguation of homonyms.
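As a rough illustration of that last suggestion (an assumption on our part, not part of the study's code), the two Gensim architectures could be combined by concatenating their inferred vectors; the corpus variable and the hyper-parameter values below are placeholders, the real values coming from the grid search described in the Methods section.

```python
# Minimal sketch: a "third model" obtained by concatenating PV-DM and PV-DBOW
# vectors of the same document. `corpus` is assumed to be a list of lowered,
# tokenised abstracts; the hyper-parameter values are placeholders only.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tagged = [TaggedDocument(words=tokens, tags=[i]) for i, tokens in enumerate(corpus)]

# One model per architecture (dm=1 -> PV-DM, dm=0 -> PV-DBOW).
pv_dm = Doc2Vec(tagged, dm=1, vector_size=300, window=5, epochs=20, workers=4)
pv_dbow = Doc2Vec(tagged, dm=0, vector_size=300, window=5, epochs=20, workers=4)

def embed(tokens):
    """Concatenate the two inferred vectors into a single 600-d representation."""
    return np.concatenate([pv_dm.infer_vector(tokens),
                           pv_dbow.infer_vector(tokens)])

def cosine(u, v):
    """Cosine similarity, used to rank candidate related articles."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```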
<<</Discussion>>> <<<Conclusion>>> This study showed that Doc2Vec PV-DBOW, an unsupervised text embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge of the documents, such as text indexing, and is not impacted by raw word content or document structure. This algorithm was able to link documents sharing MeSH labels in a way similar to the pmra. A manual evaluation returned very low scores for the D2V PV-DBOW model, but with only moderate agreement between evaluators. More investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Methods" ], "type": "disordered_section" }
1911.11698
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Doc2Vec on the PubMed corpus: study of a new approach to generate related articles <<<Abstract>>> PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the "similar articles" section, allowing the end-user to find scientific articles linked to the consulted document in term of context. The aim of this study is to analyze whether it is possible to replace the statistic model PubMed Related Articles (pmra) with a document embedding method. Doc2Vec algorithm was used to train models allowing to vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. Parameters combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluations tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra. The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, words and stems contents of linked documents are highly similar between pmra and Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrary, the manual evaluation shows much better results for the pmra algorithm. While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need a prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly regarding the PV-DBOW architecture. In contrary, the human evaluation, without any clear agreement between evaluators, implies future studies to better understand this difference between PV-DBOW and pmra algorithm. <<</Abstract>>> <<<Abstract>>> Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in term of context. The aim of this study is to analyze whether it is possible to replace the statistic model PubMed Related Articles (pmra) with a document embedding method. Methods Doc2Vec algorithm was used to train models allowing to vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. Parameters combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluations tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra. Results The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, words and stems contents of linked documents are highly similar between pmra and Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrary, the manual evaluation shows much better results for the pmra algorithm. 
Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need a prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly regarding the PV-DBOW architecture. In contrary, the human evaluation, without any clear agreement between evaluators, implies future studies to better understand this difference between PV-DBOW and pmra algorithm. <<</Abstract>>> <<<Background>>> <<<PubMed>>> PubMed is the largest database of bio-medical articles worldwide with more than 29,000,000 freely available abstracts. Each article is identified by an unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a service of related articles search, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents title of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of others PMIDs, each associated with the similarity score computed by the pmra (pubmed related article) model BIBREF1. <<</PubMed>>> <<<The pmra model>>> To do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find relevant the document C when reading the document D will be calculated. For this purpose, the authors brought the concept of eliteness. Briefly, a topic $S_{i}$ is presented as elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This work allows to bring closer documents sharing a maximum of elite topics. In the article presenting the pmra model, authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similar score is computed thanks to the associated MeSH terms with both documents D and C. Such an indexing is highly time-consuming and has to be manually performed. <<</The pmra model>>> <<<Documents embedding>>> Nowadays, embedding models allow to represent a text into a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input of deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two documents vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common for all texts from a given corpus C on which it was trained. 
Then, each document $D_{x}$ from C will be assigned to a randomly initialised vector of fixed length, which will be concatenated with vectors of words composing $D_{x}$ during the training time (words and documents vectors are sharing the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to the PV-DM, the main difference being the goal of the final classifier. Instead of concatenating vector from the document with word vectors, the goal here is to output words from this window just by using the mathematical representation of the document. <<</Documents embedding>>> <<<Related Work>>> Doc2Vec has been used for many cases of similar document retrieval. In 2016, Lee et al. used D2V to clusterize positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentences similarity scoring BIBREF5. Recently, studies started to use documents embedding on the PubMed corpus. In 2017, Gargiulo et al. used a combination of words vectors coming from the abstract to bring closer similar documents from Pubmed BIBREF6. Same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their designed accuracy measurement task was consisting in retrieving documents having a small cosine distance with the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the algorithm sent2vec BIBREF8, BIBREF9. However, their evaluation task was based on public sentences similarity datasets, when the goal here is to embed entire abstracts as vectors and to use them to search for similar articles versus the pmra model. In 2008, the related articles feature of PubMed has been compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study was designed so far to compare documents embedding and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to search what impacts the most this proximity. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities. <<</Related Work>>> <<</Background>>> <<<Methods>>> <<<Material>>> During this study, the optimisation of the model’s parameters and one of the evaluation tasks require associated MeSH terms with the abstracts from PubMed. Briefly, the MeSH is a medical terminology, used to index documents on PubMed to perform keywords-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads on October 2018, 5th BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted. 
The titles and abstracts were lowered, tokenized and concatenated to compose the PubMed documents corpus. <<</Material>>> <<<Optimisation>>> Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector. A list of possible values was defined for each of these six parameters. The full amount of possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% were then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score represented by the average of common MeSH terms percentage between each document $D_{i}$ from the 15,000 extracted texts and their returning top-ten closest documents was calculated. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM. <<</Optimisation>>> <<<Training>>> The final models were trained on a server powered by four XEON E7 (144 threads) and 1To of RAM. Among the total corpus (16,048,372 documents), 1% (160,482) was extracted as a test set (named TeS) and was discarded from the training. The final models were trained on 15,887,890 documents representing the training set called TrS. <<</Training>>> <<<Evaluation>>> The goal here being to assess if D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed as seen on figure FIGREF9. These tasks were designed to cover every similarities, from the most general (the context) to the character-level similarity. Indeed, a reliable algorithm to find related documents should be able to bring closer texts sharing either a similar context, some important ideas (stems of words), an amount of non-stemmed vocabulary (e.g. verbs tenses are taken in account) and should not be based on raw character-similarity (two documents sharing the same proportion of letter “A” or having a similar length should not be brought together if they do not exhibit upper levels similarity). <<<String length>>> To assess whether a similar length could lead to convergence of two documents, the size of the query document $D_{x}$ has been compared with the top-close document $C_{x}$ for 10,000 document randomly selected from the TeS after some pre-processing steps (stopwords and spaces were removed from both documents). <<</String length>>> <<<Words co-occurrences>>> A matrix of words co-occurrence was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. 
All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 pairs were randomly selected, and the number of times each of them co-occurred was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between $D_{x}$ and $C_{x}$ regarding their word content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm. <<</Words co-occurrences>>> <<<Stems co-occurrences>>> The evaluation task explained above was also applied on 10,000 stemmed texts (using Gensim’s PorterStemmer to keep only word roots), so that the influence of conjugation forms or other suffixes could be assessed. <<</Stems co-occurrences>>> <<<MeSH similarity>>> It is possible to compare the ability of both pmra and D2V to bring closer articles that were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term found associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term (a sketch of this scoring rule is given below). Then, the mean of these five scores was calculated for both pmra and D2V. <<</MeSH similarity>>> <<<Manual evaluation>>> Among all documents contained in the TeS, 10 articles $D_{x}$ were randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, according to the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$, and the relevance between $D_{x_i}$ and each of the top-ten documents was blindly assessed on a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators were asked to rank publications according to their perceived proximity to the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation. <<</Manual evaluation>>> <<</Evaluation>>> <<</Methods>>> <<<Results>>> <<</Results>>> <<<Discussion>>> In this study, the ability of D2V to infer similarity between biomedical abstracts has been compared with that of the pmra, the algorithm actually used in PubMed. Regarding the string length task, even if the trend lines’ slopes are very close to zero, a slight negative correlation is observed between the difference in number of characters and the scores calculated by PV-DBOW and pmra. This result should be put into perspective: two abstracts that differ in their number of characters are expected to be more likely to differ in terms of context. The longer text can treat more subjects with different words (explaining D2V’s results) or be associated with more MeSH labels (explaining pmra’s). The word and stem content analyses did not show any particular correlation between common words/stems and the scores computed by either D2V model or pmra. The opposite could have been expected, given the way pmra links documents (using common terms between documents). The score brought to the pmra model by the MeSH terms should be quite important in the final scoring formula.
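For reference, the MeSH-based score used in the MeSH similarity task above amounts to the following computation. This is a sketch: the data layout (descriptors mapped to a major-topic flag and a qualifier set) is an assumption, and since the text does not specify on which document the major-topic flag is checked, the sketch checks it on the query.

```python
def mesh_similarity(query_mesh, candidate_mesh):
    """Score one candidate article against the query article.

    Both arguments are assumed to map a MeSH descriptor to
    {"major": bool, "qualifiers": set_of_strings}.
    """
    score = 0
    for descriptor, info in query_mesh.items():
        if descriptor not in candidate_mesh:
            continue
        score += 1                                   # shared descriptor
        if info["major"]:
            score += 3                               # major-topic bonus (checked on the query; assumption)
        score += len(info["qualifiers"] & candidate_mesh[descriptor]["qualifiers"])
    return score

def mean_top5_score(query_mesh, top5_mesh):
    """Mean score over the five closest articles, computed for both pmra and D2V."""
    return sum(mesh_similarity(query_mesh, c) for c in top5_mesh) / 5
```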
However, among all possible pairs of words between two documents, only 500 were randomly selected, due to computational limits. A random sampling effect could have led to these results. D2V takes into account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge of or analysis on the documents is needed. The pmra is based (in addition to words) on the manual MeSH indexing of the document, even if this aspect was not discussed in Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels to documents from PubMed. The result displayed in figure FIGREF23 could have been expected for the pmra algorithm, since this model uses the MeSH terms, as well as elite or eliteness terms, in the statistical formula used to link documents. It was thus expected that two documents sharing many indexing labels would be seen as close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters for training the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely from its abstract and title. Regarding the manual evaluation, the D2V PV-DBOW model was rated far lower than the pmra model. Its results were judged not accurate more than three times as often as those of PubMed's model. Regarding the ranking of the results, the average position for the pmra is centred around 7, while D2V's is around 14. However, the real significance of these results should be put into perspective. Indeed, the agreement between the four annotators is only moderate and no general consensus can be extracted. This study also has some limitations. First, the MeSH indexing of documents on PubMed can be based on full-text data, while both the optimisation of the hyper-parameters and one of the evaluation tasks are based on the abstracts' indexing. However, this bias should have a limited impact on the results. The indexing being based on the main topics of the documents, these subjects should also be cited in the abstract. Regarding this manual indexing, a bias is introduced by the indexers. It is well known in the information retrieval community that intra- and inter-indexer biases exist. As the parameter optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms that are selected according to the full text of the articles. In other words, this optimisation assumed an abstract is enough to semantically represent the whole text. But this is not completely true; if it were, MeSH terms would not have been selected on full texts in the first place. Also, the principle that a PubMed related-articles feature has to return articles sharing many MeSH terms has been followed throughout this work. To go further, as mentioned in the paper presenting D2V, the concatenation of vectors from both PV-DM and PV-DBOW for a single document could lead to better accuracy; a third model could be designed by merging the two presented here. Another point of debate in the text embedding community concerns part-of-speech tagging of the text before sending it to the model (during both training and inference). This supplementary information could lead to a better understanding of the text, particularly through the disambiguation of homonyms.
<<</Discussion>>> <<<Conclusion>>> This study showed that Doc2Vec PV-DBOW, an unsupervised text embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge about the documents, such as text indexing, and is not impacted by raw word content or document structure. This algorithm was able to link documents sharing MeSH labels in a similar way to the pmra. A manual evaluation returned very low scores for the D2V PV-DBOW model, but with only moderate agreement between evaluators. More investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Background" ], "type": "disordered_section" }
2002.02492
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Consistency of a Recurrent Language Model With Respect to Incomplete Decoding <<<Abstract>>> Despite strong performance on a variety of tasks, neural sequence models trained with maximum likelihood have been shown to exhibit issues such as length bias and degenerate repetition. We study the related issue of receiving infinite-length sequences from a recurrent language model when using common decoding algorithms. To analyze this issue, we first define inconsistency of a decoding algorithm, meaning that the algorithm can yield an infinite-length sequence that has zero probability under the model. We prove that commonly used incomplete decoding algorithms - greedy search, beam search, top-k sampling, and nucleus sampling - are inconsistent, despite the fact that recurrent language models are trained to produce sequences of finite length. Based on these insights, we propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model. Empirical results show that inconsistency occurs in practice, and that the proposed methods prevent inconsistency. <<</Abstract>>> <<<Introduction>>> Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success, MLE-trained neural sequence models have been shown to exhibit issues such as length bias BIBREF3, BIBREF4 and degenerate repetition BIBREF5. These issues are suspected to be related to the maximum likelihood objective's local normalization, which results in a discrepancy between the learned model's distribution and the distribution induced by the decoding algorithm used to generate sequences BIBREF6, BIBREF7. This has prompted the development of alternative decoding methods BIBREF8, BIBREF5 and training objectives BIBREF9, BIBREF10. In this paper, we formalize and study this discrepancy between the model and the decoding algorithm. We begin by formally defining recurrent neural language models, a family that encompasses neural models used in practice, such as recurrent neural networks BIBREF11, BIBREF12, BIBREF13, and transformers BIBREF14. Next, we formally define a decoding algorithm – a function that induces a distribution over sequences given a recurrent language model and a context distribution – which is used to obtain probable sequences from a model. In this paper, we show that the distribution induced by a decoding algorithm can contradict this intended use; instead, the decoding algorithm may return improbable, infinite-length sequences. Our main finding is that a sequence which receives zero probability under a recurrent language model's distribution can receive nonzero probability under the distribution induced by a decoding algorithm. This occurs when the recurrent language model always ranks the sequence termination token outside of the set of tokens considered at each decoding step, yielding an infinite-length, zero probability sequence. 
This holds whenever the decoding algorithm is incomplete, in the sense that the algorithm excludes tokens from consideration at each step of decoding, which is the case for common methods such as greedy search, beam search, top-$k$ sampling BIBREF15, and nucleus sampling BIBREF5. We formalize our main finding using the notion of consistency BIBREF16 – whether a distribution assigns probability mass only to finite sequences – and prove that a consistent recurrent language model paired with an incomplete decoding algorithm can induce an inconsistent sequence distribution. Based on the insight that inconsistency occurs due to the behavior of the termination token under incomplete decoding, we develop two methods for addressing inconsistency. First, we propose consistent sampling methods which guarantee that the termination token is not excluded from selection during decoding. Second, we introduce a self-terminating recurrent language model which ensures that the termination token is eventually ranked above all others, guaranteeing consistency under incomplete decoding. To empirically measure inconsistency, we decode sequences from trained recurrent language models and measure the proportion of sequences with lengths far exceeding the maximum training sequence length. Our experiments on the Wikitext2 dataset BIBREF17 suggest that inconsistency occurs in practice when using incomplete decoding methods, while the proposed consistent sampling methods and self-terminating model parameterization prevent inconsistency and maintain language modeling quality. The theoretical analysis reveals defects of existing decoding algorithms, providing a way to develop future models, inference procedures, and learning algorithms. We present methods related to sampling and model parameterization, but there are more directions which we leave to the future; we close with directions related to sequence-level learning. <<</Introduction>>> <<<Background>>> We begin our discussion by establishing background definitions. First, we define a sequence which is the main object of our investigation. Definition 2.1 (Sequence) A sequence $Y$ is an ordered collection of items from a predefined finite vocabulary $V$. A sequence of finite length always ends with a special token $\left<\text{eos}\right>\in V$ that only appears at the end of a sequence. Each model we consider generates a sequence conditioned on context information, such as a prefix in sentence completion. To consider this, we define a context distribution. Definition 2.2 (Context distribution) A context distribution $p(C)$ is a probability distribution defined over a set $\mathcal {C}$. An element $C\in \mathcal {C}$ is called a context. <<<Recurrent Language Models>>> A recurrent language model is an autoregressive model of a sequence distribution, where each conditional probability is parameterized with a neural network. Importantly, we assume that all tokens in a sequence are dependent on each other under a recurrent language model. This allows us to avoid cases in which the model degenerates to a Markovian language model, such as an $n$-gram model with a finite $n$. Definition 2.3 (Recurrent language model) A recurrent language model $p_\theta $ is a neural network that computes the following conditional probability at each time step where $h_t = f_{\theta }(y_t, h_{t-1})$ and $h_0 = g_{\theta }(C)$, and $u,c,\theta $ are parameters. 
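Since the display equations of Definition 2.3 are elided in this extract, the following numpy sketch shows the parameterization the surrounding text implies: a recurrent state update $h_t = f_{\theta}(y_t, h_{t-1})$ followed by a softmax over the logits $u_v^\top h_t + c_v$ (the softmax form is suggested by the reference to it just below, but treat the exact form here as an assumption).

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

def lm_step(y_t_onehot, h_prev, params):
    """One step of a tanh recurrent language model (sketch of Definition 2.3).

    params holds W_x (d x |V|), W_h (d x d), b (d,), U (|V| x d), c (|V|,),
    playing the roles of f_theta and of the output parameters u, c.
    """
    h_t = np.tanh(params["W_x"] @ y_t_onehot + params["W_h"] @ h_prev + params["b"])
    logits = params["U"] @ h_t + params["c"]   # u_v^T h_t + c_v for every token v
    p_next = softmax(logits)                   # distribution over the next token
    return p_next, h_t
```

Because the softmax assigns strictly positive probability to every token at every step, every finite sequence receives nonzero probability under such a model, which is the property the analysis below relies on.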
A recurrent language model thereby computes the probability of a sequence $Y=(y_1, \ldots , y_T)$ by where $y_{<t}=(y_1,\ldots ,y_{t-1})$. This distribution satisfies Practical variants of the recurrent language model differ by the choice of transition function $f_{\theta }$ BIBREF11, BIBREF13, BIBREF12, BIBREF14. The use of softmax BIBREF18 implies that every unique token in the vocabulary is considered at every location of a sequence. Remark 2.1 Under the conditional distribution of a recurrent language model, every token $v\in V$ is assigned a positive probability. This implies that $0 < p_\theta (v\,|\,y_{<t}, C) < 1.$ In addition, it follows that any finite sequence is probable by a recurrent language model under any context, i.e., $p_{\theta }(Y\,|\,C) > 0$ for any sequence $Y$ of finite length. <<</Recurrent Language Models>>> <<<Decoding Algorithms>>> Because it is intractable to decode the most probable sequence, it is necessary in practice to use an approximate decoding algorithm. Definition 2.4 (Decoding algorithm) A decoding algorithm $\mathcal {F}(p_{\theta }, C)$ is a function that generates a sequence $\tilde{Y}$ given a recurrent language model $p_{\theta }$ and context $C$. Let $q_{\mathcal {F}}$ denote the distribution induced by the decoding algorithm $\mathcal {F}$. We consider two families of decoding algorithms. In our analysis we only consider decoding algorithms that decode in a single pass, forward in time, without modifying previously selected tokens. <<<Stochastic decoding.>>> The first family consists of stochastic algorithms. Among them, ancestral sampling is asymptotically unbiased and can be used for finding the most probable sequence, although it requires a substantial number of samples to achieve a low-variance estimate. Definition 2.5 (Ancestral sampling) Ancestral sampling $\mathcal {F}_{\text{anc}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from $p_{\theta }(y_t\,|\,\tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$: In order to avoid the high variance, two approximate stochastic decoding algorithms have recently been proposed and tested with recurrent language models. Top-$k$ sampling considers only a subset of the $k$ most probable tokens from the vocabulary at a time, while nucleus sampling considers only the minimal subset of most probable tokens whose total probability is higher than a predefined threshold. Definition 2.6 (Top-$k$ sampling BIBREF15) Top-$k$ sampling $\mathcal {F}_{\text{top-k}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution: Definition 2.7 (Nucleus sampling BIBREF5) Nucleus sampling $\mathcal {F}_{\text{nuc-}\mu }$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution. Let $v_1,\ldots ,v_{|V|}$ denote tokens in $V$ such that $p_{\theta }(v_i\,|\,y_{<t},C) \ge p_{\theta }(v_j\,|\,y_{<t},C)$ for all $i < j$, and define where $V_{\mu } = \left\lbrace v_1, \cdots , v_{k_\mu } \right\rbrace $ with <<</Stochastic decoding.>>> <<<Deterministic decoding.>>> The other family consists of deterministic decoding algorithms, where a token is selected deterministically according to a rule at each decoding step. The most naive algorithm, called greedy decoding, simply takes the most probable token at each step. 
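Before turning to the deterministic decoders, here is a brief numpy sketch of the two stochastic proposal distributions just defined (Definitions 2.6 and 2.7). Their display equations are elided in this extract, so the renormalisation over the retained subset shown here is the standard construction and is assumed.

```python
import numpy as np

def top_k_proposal(p, k):
    """Zero out all but the k most probable tokens, then renormalise."""
    q = np.zeros_like(p)
    top = np.argsort(p)[::-1][:k]
    q[top] = p[top]
    return q / q.sum()

def nucleus_proposal(p, mu):
    """Keep the smallest set of most probable tokens whose total mass reaches mu."""
    order = np.argsort(p)[::-1]
    csum = np.cumsum(p[order])
    k_mu = int(np.searchsorted(csum, mu)) + 1   # size of V_mu
    q = np.zeros_like(p)
    q[order[:k_mu]] = p[order[:k_mu]]
    return q / q.sum()
```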
Definition 2.8 (Greedy decoding) Greedy decoding $\mathcal {F}_{\text{greedy}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively selecting the most likely token from $p_{\theta }(y_t | \tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$: In contrast to greedy decoding, beam search operates on the level of partial sequences or prefixes. Definition 2.9 (Prefix) A prefix $\rho _t$ is an ordered collection of items from $V$. The score of a prefix is where $\rho _t[\tau ]$ is a token at time $\tau $ from $\rho _t$. Starting from a set of empty prefixes, at each iteration a new prefix set is formed by expanding each prefix, then choosing the highest scoring expanded prefixes. Definition 2.10 (Beam search) Beam search with width $k$, $\mathcal {F}_{\text{beam}-k}$, generates a sequence from a recurrent language model $p_{\theta }$ by maintaining a size-$k$ prefix set $\mathrm {P}_t^{\text{top}}$. Starting with $P_0^{top}=\varnothing $, at each iteration $t\in \lbrace 1,2,\ldots \rbrace $ beam search forms a new prefix set $\mathrm {P}_t^{\text{top}}$ by expanding the current set, $\mathrm {P}_t = \bigcup _{\rho \in \mathrm {P}_{t-1}^{\text{top}}} \lbrace \rho \circ v\, |\, v\in V\rbrace $ (where $\rho \circ v$ is concatenation), then choosing the $k$ highest scoring elements, Any $\rho \in \mathrm {P}_t^{\text{top}}$ ending with $\left<\text{eos}\right>$ is restricted from being expanded further, and is added to a set $S$. Beam search ends when $S$ contains $k$ sequences, and returns the highest scoring sequence in $S$. <<</Deterministic decoding.>>> <<<Incompleteness.>>> Other than ancestral sampling, the decoding algorithms above are incomplete in that they only consider a strict subset of the full vocabulary $V$ at each time step, aside from the trivial case of $k=|V|$. Definition 2.11 (Incomplete Decoding) A decoding algorithm $\mathcal {F}$ is incomplete when for each context $C$ and prefix $y_{<t}$, there is a strict subset $V^{\prime }_t\subsetneq V$ such that <<</Incompleteness.>>> <<</Decoding Algorithms>>> <<</Background>>> <<<Consistency of a Decoding Algorithm>>> <<<Definition of consistency.>>> A recurrent language model $p_{\theta }$ may assign a positive probability to an infinitely long sequence, in which case we call the model inconsistent. This notion of consistency was raised and analyzed earlier, for instance by BIBREF19 and BIBREF16, in terms of whether the distribution induced by $p_{\theta }$ is concentrated on finite sequences. We extend their definition to account for the context $C$. Definition 3.1 (Consistency of a recurrent language model) A recurrent language model is consistent under a context distribution $p(C)$ if $p_{\theta }(|Y|=\infty ) = 0$. Otherwise, the recurrent language model is said to be inconsistent. Any sequence decoded from a consistent model for a given probable context is guaranteed to terminate. Lemma 3.1 If a recurrent language model $p_{\theta }$ is consistent, $p_{\theta }(|Y|=\infty \,|\,C)=0$ for any probable context $C$. Next, we establish a practical condition under which a recurrent language model is consistent. Lemma 3.2 A recurrent language model $p_{\theta }$ is consistent if $\Vert h_t\Vert _p$ is uniformly bounded for some $p\ge 1$. [Proof sketch] If $\Vert h_t\Vert _p$ is bounded, then each $u_v^\top h_t$ is bounded, hence $p_{\theta }(\left<\text{eos}\right>| y_{<t}, C)>\xi >0$ for a constant $\xi $.
Thus $p_{\theta }(|Y|=\infty ) \le \lim _{t\rightarrow \infty } (1 - \xi )^t = 0$, meaning that $p_{\theta }$ is consistent. Although this condition is practical because layer normalization or bounded activation functions BIBREF11, BIBREF12, BIBREF14 result in bounded $h_t$, we show that even if a recurrent language model is consistent, a decoding algorithm may produce an infinite-length sequence. We formalize this discrepancy using the consistency of a decoding algorithm. Definition 3.2 (Consistency of a decoding algorithm) A decoding algorithm $\mathcal {F}$ is consistent with respect to a consistent recurrent language model $p_{\theta }$ under a context distribution $p(C)$ if the decoding algorithm $\mathcal {F}$ preserves the consistency of the model $p_{\theta }$, that is, $q_{\mathcal {F}}(|Y|=\infty )=0$. When a consistent recurrent language model $p_{\theta }$ and a decoding algorithm $\mathcal {F}$ induce a consistent distribution $q_{\mathcal {F}}$, we say that $p_{\theta }$ paired with $\mathcal {F}$ is consistent. For instance, any consistent recurrent language model paired with ancestral sampling is consistent, because the induced distribution $q_{\mathcal {F}_{\text{anc}}}$ is the same as the distribution of the original model. We also have an analogue of Lemma UNKREF21. Lemma 3.3 A consistent decoding algorithm with respect to a consistent recurrent language model decodes only probable sequences. That is, if $q_{\mathcal {F}}(Y\,|\,C)>0$, then $p_{\theta }(Y\,|\,C)>0$ for any probable context $C$. <<</Definition of consistency.>>> <<<Inconsistency of incomplete decoding.>>> Any incomplete decoding algorithm (Definition UNKREF18) can be inconsistent regardless of the context distribution, because there is a recurrent language model that places $\left<\text{eos}\right>$ outside of $V^{\prime }_t$ at every step of decoding. To show this, we construct a consistent recurrent language model whose distribution induced by an incomplete decoding algorithm is inconsistent. Theorem 3.4 (Inconsistency of an incomplete decoding algorithm) There exists a consistent recurrent language model $p_{\theta }$ from which an incomplete decoding algorithm $\mathcal {F}$, that considers only up to $(|V|-1)$-most likely tokens according to $p_{\theta }(y_t\,|\,y_{<t},C)$ at each step $t$, finds a sequence $\tilde{Y}$ whose probability under $p_{\theta }$ is 0 for any context distribution. We prove this theorem by constructing a $\tanh $ recurrent network. We define the recurrent function $f_{\theta }$ as where $e(y_{t}) \in \mathbb {R}^{|V|}$ is a one-hot representation of $y_t$, $W_h \in \mathbb {R}^{d \times d}$ where every entry is positive, and $I$ is an identity matrix of size $|V| \times |V|$. $h_0 = g_{\theta }(C)$ is constructed to consist of positive values only. Because each element of $|h_t|$ is bounded by 1, the constructed recurrent language model $p_{\theta }$ is consistent by Lemma UNKREF23. For $v \ne \left<\text{eos}\right>$, we set $u_v$ (see Definition UNKREF4) to be where all elements of $\bar{u}_v$ are positive and $e(v)$ is a one-hot representation of $v$. $c_v$ is set to zero. Next, let where all elements of $\bar{u}_{\left<\text{eos}\right>}$ are negative. This defines a valid recurrent language model (Definition UNKREF4), since the conditional distribution at each time $t$ is influenced by all the previous tokens. More specifically, the logit of a token $v$ depends on $\sum _{t^{\prime }=1}^t {1}(y_{t^{\prime }} = v)$, where 1 is an indicator function. 
This recurrent language model always outputs positive logits for non-$\left<\text{eos}\right>$ tokens, and outputs negative logits for the $\left<\text{eos}\right>$ token. This implies $p(\left<\text{eos}\right>|\,y_{<t}, C) < p(v\,|\,y_{<t}, C)$ for all $v \in V \backslash \left\lbrace \left<\text{eos}\right>\right\rbrace $. This means that $\left<\text{eos}\right>$ is always ranked last at each time step, so an incomplete decoding algorithm that considers at most $(|V|-1)$ most probable tokens at each time step from $p_{\theta }(y_t\,|\,y_{<t}, C)$ cannot decode $\left<\text{eos}\right>$ and thus always decodes an infinitely long sequence. The log-probability of this infinitely long sequence $\hat{Y}$ is For any $v\in V$, where $b_v = \sum _{v^{\prime }\ne v} \exp (-\Vert u_{v^{\prime }}\Vert _1)$. The last inequality holds because $x/(x+b_v)$ is increasing in $x>0$. Therefore, the log-probability $\log p_{\theta }(\hat{Y}\,|\,C)$ diverges as $|\hat{Y}| \rightarrow \infty $, and thus $p_{\theta }(\hat{Y}\,|\,C) = 0$, which implies the decoding algorithm $\mathcal {F}$ is inconsistent by Lemma UNKREF25. Greedy decoding, beam search, top-$k$ sampling, and nucleus sampling are all inconsistent according to this theorem; there are consistent models $p_{\theta }$ that induce inconsistent distributions when paired with these decoding algorithms. <<</Inconsistency of incomplete decoding.>>> <<</Consistency of a Decoding Algorithm>>> <<<Fixing the inconsistency>>> In this section, we consider two ways to prevent inconsistency arising from incomplete decoding algorithms. First, we introduce consistent versions of top-$k$ and nucleus sampling. Second, we introduce the self-terminating recurrent language model, which is consistent when paired with any of the decoding algorithms considered in this paper. <<<Consistent Sampling Algorithms>>> The proof of Theorem UNKREF27 suggests that inconsistency of incomplete decoding algorithms arises from the fact that $\left<\text{eos}\right>$ may be excluded indefinitely from the set of top-ranked tokens. We propose a simple modification to top-$k$ and nucleus sampling that forces $\left<\text{eos}\right>$ to be included at each step of decoding. First, we give a condition for when a particular model $p_{\theta }$ paired with a decoding algorithm $\mathcal {F}$ is consistent. Theorem 4.1 Let $p_{\theta }$ be a consistent recurrent language model. If a decoding algorithm $\mathcal {F}$ satisfies $q_{\mathcal {F}}(\left<\text{eos}\right>|\,y_{<t}, C) \ge p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ for every prefix $y_{<t}$ and context $C$, then the decoding algorithm $\mathcal {F}$ is consistent with respect to the model $p_{\theta }$. Let $P^{\prime }_{t-1}$ denote a set of all prefixes $y_{<t}$ of length $t-1$. For $t\ge 1$, Taking the limit $t\rightarrow \infty $ and expectation over $C$ on both sides, we have from which the decoding algorithm is consistent. We define consistent variants of top-$k$ and nucleus sampling which satisfy this condition. Definition 4.1 (Consistent top-$k$ sampling) Consistent top-$k$ sampling is top-$k$ sampling with the following modified proposal distribution: where $V^{\prime } = \left\lbrace \left<\text{eos}\right>\right\rbrace \cup \underset{v^{\prime }}{\arg \text{top-k}}\ p_{\theta }(v^{\prime }\,|\,y_{<t}, C)$. 
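A sketch of Definition 4.1 follows (its display equation is elided here, so the renormalised form is assumed): forcing $\left<\text{eos}\right>$ into the candidate set is a one-line change to ordinary top-$k$ filtering, and it guarantees $q(\left<\text{eos}\right>\,|\,y_{<t},C) \ge p_{\theta }(\left<\text{eos}\right>\,|\,y_{<t},C)$ as required by Theorem 4.1, since the retained probability mass is at most one.

```python
import numpy as np

def consistent_top_k_proposal(p, k, eos_id):
    """Top-k proposal with <eos> always kept in the candidate set (sketch of Definition 4.1)."""
    keep = set(np.argsort(p)[::-1][:k].tolist())
    keep.add(eos_id)                      # V' = {<eos>} U argtop-k p(v' | y_<t, C)
    idx = np.fromiter(keep, dtype=int)
    q = np.zeros_like(p)
    q[idx] = p[idx]
    return q / q.sum()                    # q(<eos>) = p(<eos>) / mass(V') >= p(<eos>)
```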
Definition 4.2 (Consistent nucleus sampling) Consistent nucleus sampling is nucleus sampling with the following modified proposal distribution: The induced probability of $\left<\text{eos}\right>$ under these two algorithms is always equal to or larger than the model's probability. By Theorem UNKREF29, these algorithms are consistent with respect to any consistent recurrent language model. <<</Consistent Sampling Algorithms>>> <<<A Self-Terminating Recurrent Language Model>>> Although these consistent sampling algorithms can be used with any recurrent language model, their stochastic nature may not be suitable for finding a single, highly probable sequence. To avoid this limitation, we propose the self-terminating recurrent language model (STRLM). Definition 4.3 (Self-terminating recurrent language model) A self-terminating recurrent language model computes the following conditional probability at each time step: where with $\sigma : \mathbb {R} \rightarrow [0,1-\epsilon ]$ and $\epsilon \in (0,1)$. $h_t$ is computed as in the original recurrent language model. The underlying idea is that the probability of $\left<\text{eos}\right>$ increases monotonically. The model is consistent when paired with greedy decoding. Theorem 4.2 Greedy decoding is consistent with respect to any self-terminating recurrent language model. Let $p_{t}^{\left<\text{eos}\right>}$ denote $p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ and $a_{t}^{\left<\text{eos}\right>}$ denote $u_{\left<\text{eos}\right>}^\top h_t + c_{\left<\text{eos}\right>}$. By Definition UNKREF33 we have Take $B=-\log 2 / \log (1-\epsilon )$. We then have $p_{t}^{\left<\text{eos}\right>}>1/2$ for all $t > B$, which implies that $\left<\text{eos}\right>$ is always the most probable token after time step $B$. Hence, the sequence length is less than $B$ with probability 1. Beam search is also consistent with respect to any self-terminating recurrent language model according to a similar argument; see Appendix for the proof. <<</A Self-Terminating Recurrent Language Model>>> <<</Fixing the inconsistency>>> <<<Empirical Validation>>> The theoretical results rely on the existence of a model that results in inconsistency; it remains to be shown that inconsistency with respect to incomplete decoding occurs with recurrent language models encountered in practice. Moreover, while the proposed consistent sampling methods and self-terminating recurrent language model carry theoretical guarantees in terms of consistency, we must check whether they retain language modeling quality. To do so, we perform two experiments using a sequence completion task. In each experiment, we use the beginning of a sequence as context, then decode continuations from a trained recurrent language model and measure the proportion of non-terminated sequences in order to approximately measure inconsistency. The first experiment (§SECREF45) shows that inconsistency occurs in practice, and the second experiment (§SECREF47) shows the effectiveness of the proposed approaches. <<<Sequence completion.>>> We evaluate recurrent language models on a sequence completion task, which has previously been used to evaluate the effectiveness of sequence models, e.g. BIBREF20, BIBREF21, BIBREF2, BIBREF5, BIBREF10. Sequence completion is a general setting for studying the behavior of language models, encompassing machine translation BIBREF0, story generation BIBREF15, and dialogue modeling BIBREF1. 
The task consists of decoding a continuation $\hat{Y}\sim \mathcal {F}(p_{\theta }, C)$ given a length-$k$ prefix $C=(c_1,\ldots ,c_k)$, resulting in a completion $(c_1,\ldots ,c_k,\hat{y}_1\ldots ,\hat{y}_T)$. <<</Sequence completion.>>> <<<Dataset.>>> We use the Wikitext2 dataset BIBREF17 consisting of paragraphs from Wikipedia, since it has frequently been used to evaluate language models BIBREF22, BIBREF23, BIBREF24. We split each paragraph into sentences using Spacy, resulting in roughly 100k sequences (78,274 train, 8,464 valid, 9,708 test). We split each sequence, using the first $k$ tokens as a context and the remaining tokens as a continuation. To ensure that each sequence contains a prefix, we prepend padding tokens to make it length $k$. Special $\left<\text{bos}\right>$ and $\left<\text{eos}\right>$ tokens are then inserted at the beginning and end of every sequence. Our experiments use $k=10$. We model sequences at the word level with a vocabulary size of 33,182. The average training sequence length is 24 tokens, with a maximum of 137. <<</Dataset.>>> <<<Context distribution.>>> We define empirical context distributions with prefixes from the train, valid, and test sets, where $\mathcal {D}=\lbrace (C^{(n)},Y^{(n)})\rbrace _{n=1}^{N}$ is a dataset split. <<</Context distribution.>>> <<<Evaluation metrics.>>> We use finite sequences to approximately measure the consistency of a model paired with a decoding algorithm, since decoding an infinite-length sequence is impossible. We use the proportion of decoded continuations that are longer than a predefined limit, where $\hat{Y}^{(n)}\sim \mathcal {F}(p_{\theta }, C^{(n)})$ for each context $C^{(n)}$ in $\mathcal {D}$. We call $r_L$ the non-termination ratio of the decoding algorithm $\mathcal {F}$ for an underlying model and context distribution. A value of $r_L$ greater than zero means that some sequences did not terminate within $L$ steps. When $L$ is infinity, this implies that the model paired with the decoding algorithm is inconsistent. In practice, we use a finite $L$ that is substantially larger than the maximum training sequence length, and we interpret a non-zero $r_L$ as evidence that the model paired with the decoding algorithm is inconsistent. We use $L=1500$, which is more than 10 times the maximum training sequence length. In each experiment, we report the mean and standard deviation of metrics across 10 independent initializations. Unless specified otherwise, we report metrics using the test context distribution, since the train, valid, and randomly generated context distributions had similar results. <<</Evaluation metrics.>>> <<<Training.>>> We train recurrent language models for sequence completion with maximum likelihood, using the following loss on each sequence $Y=(c_1,\ldots ,c_k,y_1,\ldots ,y_T)$: This amounts to running the full training sequence through a recurrent model and zeroing the loss for the first $k$ tokens, so that the first $k$ steps correspond to learning a $g_{\theta }$ that encodes the context. Each model is trained on a single Nvidia P40 GPU for up to 100 epochs, stopping early when validation perplexity does not decrease for 10 consecutive epochs. <<</Training.>>> <<<Models.>>> We consider recurrent neural networks with hyperbolic tangent activations ($\tanh $-RNN) BIBREF11 and LSTM units (LSTM-RNN) BIBREF13. We perform an initial hyper-parameter sweep and select the best set of hyper-parameters for each of $\tanh $-RNN and LSTM-RNN based on the validation perplexities. 
With this best set of hyperparameters, we train each of these models with 10 different initializations. The choice of $\tanh $ and LSTM RNNs implies that all of the recurrent language models that we train are consistent according to Lemma UNKREF23. Our LSTM models achieve similar test perplexity ($91.86 \pm 0.4$) to those reported in previous work BIBREF24; see Appendix for further details. Additionally, we train self-terminating $\tanh $-RNN and LSTM-RNN variants (Definition UNKREF33) at various values of $\epsilon $, which controls a lower bound on the termination probability at each step. We use $\sigma (x)=(1-\epsilon )\text{sigmoid}(x)$. We use the hyper-parameters selected in the preceding grid search. <<</Models.>>> <<<Inconsistency of Recurrent Language Models>>> In this experiment, we demonstrate evidence of inconsistency with incomplete decoding methods (Theorem UNKREF27). Table TABREF43 shows non-termination ratios for the recurrent language models using the incomplete decoding algorithms considered in this work, along with ancestral sampling. Decoding with ancestral sampling always resulted in sequences that terminated within $L$ steps, since the induced distribution is the same as that of the consistent model. On the other hand, the non-zero non-termination ratios for the incomplete decoding algorithms suggest inconsistency with respect to each algorithm, providing evidence for Theorem UNKREF27. In particular, greedy search, beam search, and nucleus sampling yielded non-terminating sequences with both the $\tanh $ and LSTM RNNs. Using greedy decoding, roughly 6% of all contexts resulted in a non-terminating continuation with the $\tanh $-RNN, and roughly 1% with the LSTM-RNN. Nucleus sampling also produced non-terminating sequences with the $\tanh $-RNN (2.49%, nuc-0.2) and LSTM-RNN (0.76%, nuc-0.2), with the amount of non-termination decreasing as $\mu $ increased (see Definition UNKREF11), likely due to $\left<\text{eos}\right>$ having a higher chance of being included in $V_{\mu }$. Top-$k$ sampling resulted in non-terminating sequences with the $\tanh $-RNN, but not with the LSTM, implying that $\left<\text{eos}\right>$ was ranked within the top $k$ positions on at least one timestep during each decoding. Beam search produced non-terminating sequences with both the $\tanh $-RNN (beam-2,4) and LSTM-RNN (beam-2) models. This means that $\left<\text{eos}\right>$ was outside of the top tokens (determined by the beam width) considered at each step, since in our experiments we terminated the beam search when a single beam prefix contained $\left<\text{eos}\right>$. With the LSTM-RNN, a larger beam width (beam-4) prevented non-termination. <<</Inconsistency of Recurrent Language Models>>> <<<Consistency of the Proposed Methods>>> In this experiment, we evaluate the consistent variants of top-$k$ and nucleus sampling (§SECREF28) as well as the self-terminating recurrent language model (§SECREF32) in terms of consistency and language modeling quality. <<<Consistent sampling.>>> Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). 
On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\left<\text{eos}\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate. <<</Consistent sampling.>>> <<<Self-terminating RNN.>>> As seen in Table TABREF50, the self-terminating recurrent language models with $\epsilon \in \lbrace 10^{-2},10^{-3}\rbrace $ are consistent with respect to greedy decoding, at the expense of perplexity compared to the vanilla model. The value of $\epsilon $ from Definition UNKREF33, which controls a lower-bound on termination probability at each step, influences both $r_L$ and perplexity. When $\epsilon $ is too large ($\epsilon =10^{-2}$), perplexity degrades. When $\epsilon $ is too small ($\epsilon =10^{-4}$), the lower-bound grows slowly, so $\left<\text{eos}\right>$ is not guaranteed to be top-ranked within $L$ steps, and the metrics resemble the baseline's. An $\epsilon $ of $10^{-3}$ balanced consistency and language modeling quality, with a zero non-termination ratio and perplexity within 3 points of the baseline. For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition. <<</Self-terminating RNN.>>> <<</Consistency of the Proposed Methods>>> <<</Empirical Validation>>> <<<Future Directions>>> The methods we proposed in this paper have focused on how to resolve inconsistency from the viewpoint of decoding algorithms or model parameterization. Another approach is to address the issue of inconsistency in the learning phase. One interesting direction is to investigate whether maximum likelihood learning is a cause of inconsistency. Given a training set $\left\lbrace (C^{(n)}, Y^{(n)}) \right\rbrace _{n=1}^N$ drawn from a data distribution, maximum likelihood learning solves: where $\Omega (\theta )$ is a regularizer and $\lambda $ is a regularization weight. Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding. 
<<</Future Directions>>> <<<Conclusion>>> We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Background" ], "type": "disordered_section" }
2002.02492
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Consistency of a Recurrent Language Model With Respect to Incomplete Decoding <<<Abstract>>> Despite strong performance on a variety of tasks, neural sequence models trained with maximum likelihood have been shown to exhibit issues such as length bias and degenerate repetition. We study the related issue of receiving infinite-length sequences from a recurrent language model when using common decoding algorithms. To analyze this issue, we first define inconsistency of a decoding algorithm, meaning that the algorithm can yield an infinite-length sequence that has zero probability under the model. We prove that commonly used incomplete decoding algorithms - greedy search, beam search, top-k sampling, and nucleus sampling - are inconsistent, despite the fact that recurrent language models are trained to produce sequences of finite length. Based on these insights, we propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model. Empirical results show that inconsistency occurs in practice, and that the proposed methods prevent inconsistency. <<</Abstract>>> <<<Introduction>>> Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success, MLE-trained neural sequence models have been shown to exhibit issues such as length bias BIBREF3, BIBREF4 and degenerate repetition BIBREF5. These issues are suspected to be related to the maximum likelihood objective's local normalization, which results in a discrepancy between the learned model's distribution and the distribution induced by the decoding algorithm used to generate sequences BIBREF6, BIBREF7. This has prompted the development of alternative decoding methods BIBREF8, BIBREF5 and training objectives BIBREF9, BIBREF10. In this paper, we formalize and study this discrepancy between the model and the decoding algorithm. We begin by formally defining recurrent neural language models, a family that encompasses neural models used in practice, such as recurrent neural networks BIBREF11, BIBREF12, BIBREF13, and transformers BIBREF14. Next, we formally define a decoding algorithm – a function that induces a distribution over sequences given a recurrent language model and a context distribution – which is used to obtain probable sequences from a model. In this paper, we show that the distribution induced by a decoding algorithm can contradict this intended use; instead, the decoding algorithm may return improbable, infinite-length sequences. Our main finding is that a sequence which receives zero probability under a recurrent language model's distribution can receive nonzero probability under the distribution induced by a decoding algorithm. This occurs when the recurrent language model always ranks the sequence termination token outside of the set of tokens considered at each decoding step, yielding an infinite-length, zero probability sequence. 
This holds whenever the decoding algorithm is incomplete, in the sense that the algorithm excludes tokens from consideration at each step of decoding, which is the case for common methods such as greedy search, beam search, top-$k$ sampling BIBREF15, and nucleus sampling BIBREF5. We formalize our main finding using the notion of consistency BIBREF16 – whether a distribution assigns probability mass only to finite sequences – and prove that a consistent recurrent language model paired with an incomplete decoding algorithm can induce an inconsistent sequence distribution. Based on the insight that inconsistency occurs due to the behavior of the termination token under incomplete decoding, we develop two methods for addressing inconsistency. First, we propose consistent sampling methods which guarantee that the termination token is not excluded from selection during decoding. Second, we introduce a self-terminating recurrent language model which ensures that the termination token is eventually ranked above all others, guaranteeing consistency under incomplete decoding. To empirically measure inconsistency, we decode sequences from trained recurrent language models and measure the proportion of sequences with lengths far exceeding the maximum training sequence length. Our experiments on the Wikitext2 dataset BIBREF17 suggest that inconsistency occurs in practice when using incomplete decoding methods, while the proposed consistent sampling methods and self-terminating model parameterization prevent inconsistency and maintain language modeling quality. The theoretical analysis reveals defects of existing decoding algorithms, providing a way to develop future models, inference procedures, and learning algorithms. We present methods related to sampling and model parameterization, but there are more directions which we leave to the future; we close with directions related to sequence-level learning. <<</Introduction>>> <<<Background>>> We begin our discussion by establishing background definitions. First, we define a sequence which is the main object of our investigation. Definition 2.1 (Sequence) A sequence $Y$ is an ordered collection of items from a predefined finite vocabulary $V$. A sequence of finite length always ends with a special token $\left<\text{eos}\right>\in V$ that only appears at the end of a sequence. Each model we consider generates a sequence conditioned on context information, such as a prefix in sentence completion. To consider this, we define a context distribution. Definition 2.2 (Context distribution) A context distribution $p(C)$ is a probability distribution defined over a set $\mathcal {C}$. An element $C\in \mathcal {C}$ is called a context. <<<Recurrent Language Models>>> A recurrent language model is an autoregressive model of a sequence distribution, where each conditional probability is parameterized with a neural network. Importantly, we assume that all tokens in a sequence are dependent on each other under a recurrent language model. This allows us to avoid cases in which the model degenerates to a Markovian language model, such as an $n$-gram model with a finite $n$. Definition 2.3 (Recurrent language model) A recurrent language model $p_\theta $ is a neural network that computes the following conditional probability at each time step where $h_t = f_{\theta }(y_t, h_{t-1})$ and $h_0 = g_{\theta }(C)$, and $u,c,\theta $ are parameters. 
A recurrent language model thereby computes the probability of a sequence $Y=(y_1, \ldots , y_T)$ by where $y_{<t}=(y_1,\ldots ,y_{t-1})$. This distribution satisfies Practical variants of the recurrent language model differ by the choice of transition function $f_{\theta }$ BIBREF11, BIBREF13, BIBREF12, BIBREF14. The use of softmax BIBREF18 implies that every unique token in the vocabulary is considered at every location of a sequence. Remark 2.1 Under the conditional distribution of a recurrent language model, every token $v\in V$ is assigned a positive probability. This implies that $0 < p_\theta (v\,|\,y_{<t}, C) < 1.$ In addition, it follows that any finite sequence is probable by a recurrent language model under any context, i.e., $p_{\theta }(Y\,|\,C) > 0$ for any sequence $Y$ of finite length. <<</Recurrent Language Models>>> <<<Decoding Algorithms>>> Because it is intractable to decode the most probable sequence, it is necessary in practice to use an approximate decoding algorithm. Definition 2.4 (Decoding algorithm) A decoding algorithm $\mathcal {F}(p_{\theta }, C)$ is a function that generates a sequence $\tilde{Y}$ given a recurrent language model $p_{\theta }$ and context $C$. Let $q_{\mathcal {F}}$ denote the distribution induced by the decoding algorithm $\mathcal {F}$. We consider two families of decoding algorithms. In our analysis we only consider decoding algorithms that decode in a single pass, forward in time, without modifying previously selected tokens. <<<Stochastic decoding.>>> The first family consists of stochastic algorithms. Among them, ancestral sampling is asymptotically unbiased and can be used for finding the most probable sequence, although it requires a substantial number of samples to achieve a low-variance estimate. Definition 2.5 (Ancestral sampling) Ancestral sampling $\mathcal {F}_{\text{anc}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from $p_{\theta }(y_t\,|\,\tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$: In order to avoid the high variance, two approximate stochastic decoding algorithms have recently been proposed and tested with recurrent language models. Top-$k$ sampling considers only a subset of the $k$ most probable tokens from the vocabulary at a time, while nucleus sampling considers only the minimal subset of most probable tokens whose total probability is higher than a predefined threshold. Definition 2.6 (Top-$k$ sampling BIBREF15) Top-$k$ sampling $\mathcal {F}_{\text{top-k}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution: Definition 2.7 (Nucleus sampling BIBREF5) Nucleus sampling $\mathcal {F}_{\text{nuc-}\mu }$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution. Let $v_1,\ldots ,v_{|V|}$ denote tokens in $V$ such that $p_{\theta }(v_i\,|\,y_{<t},C) \ge p_{\theta }(v_j\,|\,y_{<t},C)$ for all $i < j$, and define where $V_{\mu } = \left\lbrace v_1, \cdots , v_{k_\mu } \right\rbrace $ with <<</Stochastic decoding.>>> <<<Deterministic decoding.>>> The other family consists of deterministic decoding algorithms, where a token is selected deterministically according to a rule at each decoding step. The most naive algorithm, called greedy decoding, simply takes the most probable token at each step. 
Definition 2.8 (Greedy decoding) Greedy decoding $\mathcal {F}_{\text{greedy}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively selecting the most likely token from $p_{\theta }(y_t | \tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$: In contrast to greedy decoding, beam search operates on the level of partial sequences or prefixes. Definition 2.9 (Prefix) A prefix $\rho _t$ is an ordered collection of items from $V$. The score of a prefix is where $\rho _t[\tau ]$ is a token at time $\tau $ from $\rho _t$. Starting from a set of empty prefixes, at each iteration a new prefix set is formed by expanding each prefix, then choosing the highest scoring expanded prefixes. Definition 2.10 (Beam search) Beam search with width $k$, $\mathcal {F}_{\text{beam}-k}$, generates a sequence from a recurrent language model $p_{\theta }$ by maintaining a size-$k$ prefix set $\mathrm {P}_t^{\text{top}}$. Starting with $P_0^{top}=\varnothing $, at each iteration $t\in \lbrace 1,2,\ldots \rbrace $ beam search forms a new prefix set $\mathrm {P}_t^{\text{top}}$ by expanding the current set, $\mathrm {P}_t = \bigcup _{\rho \in \mathrm {P}_{t-1}^{\text{top}}} \lbrace \rho \circ v\, |\, v\in V\rbrace $ (where $\rho \circ v$ is concatenation), then choosing the $k$ highest scoring elements, Any $\rho \in \mathrm {P}_t^{\text{top}}$ ending with $\left<\text{eos}\right>$ is restricted from being expanded further, and is added to a set $S$. Beam search ends when $S$ contains $k$ sequences, and returns the highest scoring sequence in $S$. <<</Deterministic decoding.>>> <<<Incompleteness.>>> Other than ancestral sampling, the decoding algorithms above are incomplete in that they only consider a strict subset of the full vocabulary $V$ at each time step, aside from the trivial case of $k=|V|$. Definition 2.11 (Incomplete Decoding) A decoding algorithm $\mathcal {F}$ is incomplete when for each context $C$ and prefix $y_{<t}$, there is a strict subset $V^{\prime }_t\subsetneq V$ such that <<</Incompleteness.>>> <<</Decoding Algorithms>>> <<</Background>>> <<<Consistency of a Decoding Algorithm>>> <<<Definition of consistency.>>> A recurrent language model $p_{\theta }$ may assign a positive probability to an infinitely long sequence, in which case we call the model inconsistent. This notion of consistency was raised and analyzed earlier, for instance by BIBREF19 and BIBREF16, in terms of whether the distribution induced by $p_{\theta }$ is concentrated on finite sequences. We extend their definition to account for the context $C$. Definition 3.1 (Consistency of a recurrent language model) A recurrent language model is consistent under a context distribution $p(C)$ if $p_{\theta }(|Y|=\infty ) = 0$. Otherwise, the recurrent language model is said to be inconsistent. Any sequence decoded from a consistent model for a given probable context is guaranteed to terminate. Lemma 3.1 If a recurrent language model $p_{\theta }$ is consistent, $p_{\theta }(|Y|=\infty \,|\,C)=0$ for any probable context $C$. Next, we establish a practical condition under which a recurrent language model is consistent. Lemma 3.2 A recurrent language model $p_{\theta }$ is consistent if $\Vert h_t\Vert _p$ is uniformly bounded for some $p\ge 1$. [Proof sketch] If $\Vert h_t\Vert _p$ is bounded, then each $u_v^\top h_t$ is bounded, hence $p_{\theta }(\left<\text{eos}\right>| y_{<t}, C)>\xi >0$ for a constant $\xi $.
Thus $p_{\theta }(|Y|=\infty ) \le \lim _{t\rightarrow \infty } (1 - \xi )^t = 0$, meaning that $p_{\theta }$ is consistent. Although this condition is practical because layer normalization or bounded activation functions BIBREF11, BIBREF12, BIBREF14 result in bounded $h_t$, we show that even if a recurrent language model is consistent, a decoding algorithm may produce an infinite-length sequence. We formalize this discrepancy using the consistency of a decoding algorithm. Definition 3.2 (Consistency of a decoding algorithm) A decoding algorithm $\mathcal {F}$ is consistent with respect to a consistent recurrent language model $p_{\theta }$ under a context distribution $p(C)$ if the decoding algorithm $\mathcal {F}$ preserves the consistency of the model $p_{\theta }$, that is, $q_{\mathcal {F}}(|Y|=\infty )=0$. When a consistent recurrent language model $p_{\theta }$ and a decoding algorithm $\mathcal {F}$ induce a consistent distribution $q_{\mathcal {F}}$, we say that $p_{\theta }$ paired with $\mathcal {F}$ is consistent. For instance, any consistent recurrent language model paired with ancestral sampling is consistent, because the induced distribution $q_{\mathcal {F}_{\text{anc}}}$ is the same as the distribution of the original model. We also have an analogue of Lemma UNKREF21. Lemma 3.3 A consistent decoding algorithm with respect to a consistent recurrent language model decodes only probable sequences. That is, if $q_{\mathcal {F}}(Y\,|\,C)>0$, then $p_{\theta }(Y\,|\,C)>0$ for any probable context $C$. <<</Definition of consistency.>>> <<<Inconsistency of incomplete decoding.>>> Any incomplete decoding algorithm (Definition UNKREF18) can be inconsistent regardless of the context distribution, because there is a recurrent language model that places $\left<\text{eos}\right>$ outside of $V^{\prime }_t$ at every step of decoding. To show this, we construct a consistent recurrent language model whose distribution induced by an incomplete decoding algorithm is inconsistent. Theorem 3.4 (Inconsistency of an incomplete decoding algorithm) There exists a consistent recurrent language model $p_{\theta }$ from which an incomplete decoding algorithm $\mathcal {F}$, that considers only up to $(|V|-1)$-most likely tokens according to $p_{\theta }(y_t\,|\,y_{<t},C)$ at each step $t$, finds a sequence $\tilde{Y}$ whose probability under $p_{\theta }$ is 0 for any context distribution. We prove this theorem by constructing a $\tanh $ recurrent network. We define the recurrent function $f_{\theta }$ as where $e(y_{t}) \in \mathbb {R}^{|V|}$ is a one-hot representation of $y_t$, $W_h \in \mathbb {R}^{d \times d}$ where every entry is positive, and $I$ is an identity matrix of size $|V| \times |V|$. $h_0 = g_{\theta }(C)$ is constructed to consist of positive values only. Because each element of $|h_t|$ is bounded by 1, the constructed recurrent language model $p_{\theta }$ is consistent by Lemma UNKREF23. For $v \ne \left<\text{eos}\right>$, we set $u_v$ (see Definition UNKREF4) to be where all elements of $\bar{u}_v$ are positive and $e(v)$ is a one-hot representation of $v$. $c_v$ is set to zero. Next, let where all elements of $\bar{u}_{\left<\text{eos}\right>}$ are negative. This defines a valid recurrent language model (Definition UNKREF4), since the conditional distribution at each time $t$ is influenced by all the previous tokens. More specifically, the logit of a token $v$ depends on $\sum _{t^{\prime }=1}^t {1}(y_{t^{\prime }} = v)$, where 1 is an indicator function. 
This recurrent language model always outputs positive logits for non-$\left<\text{eos}\right>$ tokens, and outputs negative logits for the $\left<\text{eos}\right>$ token. This implies $p(\left<\text{eos}\right>|\,y_{<t}, C) < p(v\,|\,y_{<t}, C)$ for all $v \in V \backslash \left\lbrace \left<\text{eos}\right>\right\rbrace $. This means that $\left<\text{eos}\right>$ is always ranked last at each time step, so an incomplete decoding algorithm that considers at most $(|V|-1)$ most probable tokens at each time step from $p_{\theta }(y_t\,|\,y_{<t}, C)$ cannot decode $\left<\text{eos}\right>$ and thus always decodes an infinitely long sequence. The log-probability of this infinitely long sequence $\hat{Y}$ is For any $v\in V$, where $b_v = \sum _{v^{\prime }\ne v} \exp (-\Vert u_{v^{\prime }}\Vert _1)$. The last inequality holds because $x/(x+b_v)$ is increasing in $x>0$. Therefore, the log-probability $\log p_{\theta }(\hat{Y}\,|\,C)$ diverges as $|\hat{Y}| \rightarrow \infty $, and thus $p_{\theta }(\hat{Y}\,|\,C) = 0$, which implies the decoding algorithm $\mathcal {F}$ is inconsistent by Lemma UNKREF25. Greedy decoding, beam search, top-$k$ sampling, and nucleus sampling are all inconsistent according to this theorem; there are consistent models $p_{\theta }$ that induce inconsistent distributions when paired with these decoding algorithms. <<</Inconsistency of incomplete decoding.>>> <<</Consistency of a Decoding Algorithm>>> <<<Fixing the inconsistency>>> In this section, we consider two ways to prevent inconsistency arising from incomplete decoding algorithms. First, we introduce consistent versions of top-$k$ and nucleus sampling. Second, we introduce the self-terminating recurrent language model, which is consistent when paired with any of the decoding algorithms considered in this paper. <<<Consistent Sampling Algorithms>>> The proof of Theorem UNKREF27 suggests that inconsistency of incomplete decoding algorithms arises from the fact that $\left<\text{eos}\right>$ may be excluded indefinitely from the set of top-ranked tokens. We propose a simple modification to top-$k$ and nucleus sampling that forces $\left<\text{eos}\right>$ to be included at each step of decoding. First, we give a condition for when a particular model $p_{\theta }$ paired with a decoding algorithm $\mathcal {F}$ is consistent. Theorem 4.1 Let $p_{\theta }$ be a consistent recurrent language model. If a decoding algorithm $\mathcal {F}$ satisfies $q_{\mathcal {F}}(\left<\text{eos}\right>|\,y_{<t}, C) \ge p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ for every prefix $y_{<t}$ and context $C$, then the decoding algorithm $\mathcal {F}$ is consistent with respect to the model $p_{\theta }$. Let $P^{\prime }_{t-1}$ denote a set of all prefixes $y_{<t}$ of length $t-1$. For $t\ge 1$, Taking the limit $t\rightarrow \infty $ and expectation over $C$ on both sides, we have from which the decoding algorithm is consistent. We define consistent variants of top-$k$ and nucleus sampling which satisfy this condition. Definition 4.1 (Consistent top-$k$ sampling) Consistent top-$k$ sampling is top-$k$ sampling with the following modified proposal distribution: where $V^{\prime } = \left\lbrace \left<\text{eos}\right>\right\rbrace \cup \underset{v^{\prime }}{\arg \text{top-k}}\ p_{\theta }(v^{\prime }\,|\,y_{<t}, C)$. 
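As a concrete reading of Definition 4.1, the sketch below forces $\left<\text{eos}\right>$ into the candidate set before renormalizing, so the induced $\left<\text{eos}\right>$ probability can only be scaled up and the premise of Theorem 4.1 is met; the consistent nucleus variant in Definition 4.2 below can be read analogously (add $\left<\text{eos}\right>$ to $V_{\mu }$). This is an illustrative sketch reusing the assumed helpers from the earlier decoding sketch, not the authors' implementation.

import numpy as np

def consistent_top_k_step(probs, k, eos, rng):
    # Definition 4.1: V' = {eos} union the k most probable tokens.
    keep = set(np.argsort(probs)[-k:].tolist())
    keep.add(eos)                    # the only change w.r.t. plain top-k
    keep = np.fromiter(keep, dtype=int)
    proposal = np.zeros_like(probs)
    proposal[keep] = probs[keep]
    # Renormalizing over V' divides by a kept mass <= 1, hence
    # q(eos | y_<t, C) >= p_theta(eos | y_<t, C), as Theorem 4.1 requires.
    proposal = proposal / proposal.sum()
    return int(rng.choice(len(probs), p=proposal))

Because the kept mass is at most 1, the renormalization never decreases the probability assigned to $\left<\text{eos}\right>$, which is exactly the condition of Theorem 4.1.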
Definition 4.2 (Consistent nucleus sampling) Consistent nucleus sampling is nucleus sampling with the following modified proposal distribution: The induced probability of $\left<\text{eos}\right>$ under these two algorithms is always equal to or larger than the model's probability. By Theorem UNKREF29, these algorithms are consistent with respect to any consistent recurrent language model. <<</Consistent Sampling Algorithms>>> <<<A Self-Terminating Recurrent Language Model>>> Although these consistent sampling algorithms can be used with any recurrent language model, their stochastic nature may not be suitable for finding a single, highly probable sequence. To avoid this limitation, we propose the self-terminating recurrent language model (STRLM). Definition 4.3 (Self-terminating recurrent language model) A self-terminating recurrent language model computes the following conditional probability at each time step: where with $\sigma : \mathbb {R} \rightarrow [0,1-\epsilon ]$ and $\epsilon \in (0,1)$. $h_t$ is computed as in the original recurrent language model. The underlying idea is that the probability of $\left<\text{eos}\right>$ increases monotonically. The model is consistent when paired with greedy decoding. Theorem 4.2 Greedy decoding is consistent with respect to any self-terminating recurrent language model. Let $p_{t}^{\left<\text{eos}\right>}$ denote $p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ and $a_{t}^{\left<\text{eos}\right>}$ denote $u_{\left<\text{eos}\right>}^\top h_t + c_{\left<\text{eos}\right>}$. By Definition UNKREF33 we have Take $B=-\log 2 / \log (1-\epsilon )$. We then have $p_{t}^{\left<\text{eos}\right>}>1/2$ for all $t > B$, which implies that $\left<\text{eos}\right>$ is always the most probable token after time step $B$. Hence, the sequence length is less than $B$ with probability 1. Beam search is also consistent with respect to any self-terminating recurrent language model according to a similar argument; see Appendix for the proof. <<</A Self-Terminating Recurrent Language Model>>> <<</Fixing the inconsistency>>> <<<Empirical Validation>>> The theoretical results rely on the existence of a model that results in inconsistency; it remains to be shown that inconsistency with respect to incomplete decoding occurs with recurrent language models encountered in practice. Moreover, while the proposed consistent sampling methods and self-terminating recurrent language model carry theoretical guarantees in terms of consistency, we must check whether they retain language modeling quality. To do so, we perform two experiments using a sequence completion task. In each experiment, we use the beginning of a sequence as context, then decode continuations from a trained recurrent language model and measure the proportion of non-terminated sequences in order to approximately measure inconsistency. The first experiment (§SECREF45) shows that inconsistency occurs in practice, and the second experiment (§SECREF47) shows the effectiveness of the proposed approaches. <<<Sequence completion.>>> We evaluate recurrent language models on a sequence completion task, which has previously been used to evaluate the effectiveness of sequence models, e.g. BIBREF20, BIBREF21, BIBREF2, BIBREF5, BIBREF10. Sequence completion is a general setting for studying the behavior of language models, encompassing machine translation BIBREF0, story generation BIBREF15, and dialogue modeling BIBREF1. 
The task consists of decoding a continuation $\hat{Y}\sim \mathcal {F}(p_{\theta }, C)$ given a length-$k$ prefix $C=(c_1,\ldots ,c_k)$, resulting in a completion $(c_1,\ldots ,c_k,\hat{y}_1\ldots ,\hat{y}_T)$. <<</Sequence completion.>>> <<<Dataset.>>> We use the Wikitext2 dataset BIBREF17 consisting of paragraphs from Wikipedia, since it has frequently been used to evaluate language models BIBREF22, BIBREF23, BIBREF24. We split each paragraph into sentences using Spacy, resulting in roughly 100k sequences (78,274 train, 8,464 valid, 9,708 test). We split each sequence, using the first $k$ tokens as a context and the remaining tokens as a continuation. To ensure that each sequence contains a prefix, we prepend padding tokens to make it length $k$. Special $\left<\text{bos}\right>$ and $\left<\text{eos}\right>$ tokens are then inserted at the beginning and end of every sequence. Our experiments use $k=10$. We model sequences at the word level with a vocabulary size of 33,182. The average training sequence length is 24 tokens, with a maximum of 137. <<</Dataset.>>> <<<Context distribution.>>> We define empirical context distributions with prefixes from the train, valid, and test sets, where $\mathcal {D}=\lbrace (C^{(n)},Y^{(n)})\rbrace _{n=1}^{N}$ is a dataset split. <<</Context distribution.>>> <<<Evaluation metrics.>>> We use finite sequences to approximately measure the consistency of a model paired with a decoding algorithm, since decoding an infinite-length sequence is impossible. We use the proportion of decoded continuations that are longer than a predefined limit, where $\hat{Y}^{(n)}\sim \mathcal {F}(p_{\theta }, C^{(n)})$ for each context $C^{(n)}$ in $\mathcal {D}$. We call $r_L$ the non-termination ratio of the decoding algorithm $\mathcal {F}$ for an underlying model and context distribution. A value of $r_L$ greater than zero means that some sequences did not terminate within $L$ steps. When $L$ is infinity, this implies that the model paired with the decoding algorithm is inconsistent. In practice, we use a finite $L$ that is substantially larger than the maximum training sequence length, and we interpret a non-zero $r_L$ as evidence that the model paired with the decoding algorithm is inconsistent. We use $L=1500$, which is more than 10 times the maximum training sequence length. In each experiment, we report the mean and standard deviation of metrics across 10 independent initializations. Unless specified otherwise, we report metrics using the test context distribution, since the train, valid, and randomly generated context distributions had similar results. <<</Evaluation metrics.>>> <<<Training.>>> We train recurrent language models for sequence completion with maximum likelihood, using the following loss on each sequence $Y=(c_1,\ldots ,c_k,y_1,\ldots ,y_T)$: This amounts to running the full training sequence through a recurrent model and zeroing the loss for the first $k$ tokens, so that the first $k$ steps correspond to learning a $g_{\theta }$ that encodes the context. Each model is trained on a single Nvidia P40 GPU for up to 100 epochs, stopping early when validation perplexity does not decrease for 10 consecutive epochs. <<</Training.>>> <<<Models.>>> We consider recurrent neural networks with hyperbolic tangent activations ($\tanh $-RNN) BIBREF11 and LSTM units (LSTM-RNN) BIBREF13. We perform an initial hyper-parameter sweep and select the best set of hyper-parameters for each of $\tanh $-RNN and LSTM-RNN based on the validation perplexities. 
With this best set of hyperparameters, we train each of these models with 10 different initializations. The choice of $\tanh $ and LSTM RNNs implies that all of the recurrent language models that we train are consistent according to Lemma UNKREF23. Our LSTM models achieve similar test perplexity ($91.86 \pm 0.4$) to those reported in previous work BIBREF24; see Appendix for further details. Additionally, we train self-terminating $\tanh $-RNN and LSTM-RNN variants (Definition UNKREF33) at various values of $\epsilon $, which controls a lower bound on the termination probability at each step. We use $\sigma (x)=(1-\epsilon )\text{sigmoid}(x)$. We use the hyper-parameters selected in the preceding grid search. <<</Models.>>> <<<Inconsistency of Recurrent Language Models>>> In this experiment, we demonstrate evidence of inconsistency with incomplete decoding methods (Theorem UNKREF27). Table TABREF43 shows non-termination ratios for the recurrent language models using the incomplete decoding algorithms considered in this work, along with ancestral sampling. Decoding with ancestral sampling always resulted in sequences that terminated within $L$ steps, since the induced distribution is the same as that of the consistent model. On the other hand, the non-zero non-termination ratios for the incomplete decoding algorithms suggest inconsistency with respect to each algorithm, providing evidence for Theorem UNKREF27. In particular, greedy search, beam search, and nucleus sampling yielded non-terminating sequences with both the $\tanh $ and LSTM RNNs. Using greedy decoding, roughly 6% of all contexts resulted in a non-terminating continuation with the $\tanh $-RNN, and roughly 1% with the LSTM-RNN. Nucleus sampling also produced non-terminating sequences with the $\tanh $-RNN (2.49%, nuc-0.2) and LSTM-RNN (0.76%, nuc-0.2), with the amount of non-termination decreasing as $\mu $ increased (see Definition UNKREF11), likely due to $\left<\text{eos}\right>$ having a higher chance of being included in $V_{\mu }$. Top-$k$ sampling resulted in non-terminating sequences with the $\tanh $-RNN, but not with the LSTM, implying that $\left<\text{eos}\right>$ was ranked within the top $k$ positions on at least one timestep during each decoding. Beam search produced non-terminating sequences with both the $\tanh $-RNN (beam-2,4) and LSTM-RNN (beam-2) models. This means that $\left<\text{eos}\right>$ was outside of the top tokens (determined by the beam width) considered at each step, since in our experiments we terminated the beam search when a single beam prefix contained $\left<\text{eos}\right>$. With the LSTM-RNN, a larger beam width (beam-4) prevented non-termination. <<</Inconsistency of Recurrent Language Models>>> <<<Consistency of the Proposed Methods>>> In this experiment, we evaluate the consistent variants of top-$k$ and nucleus sampling (§SECREF28) as well as the self-terminating recurrent language model (§SECREF32) in terms of consistency and language modeling quality. <<<Consistent sampling.>>> Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). 
On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\left<\text{eos}\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate. <<</Consistent sampling.>>> <<<Self-terminating RNN.>>> As seen in Table TABREF50, the self-terminating recurrent language models with $\epsilon \in \lbrace 10^{-2},10^{-3}\rbrace $ are consistent with respect to greedy decoding, at the expense of perplexity compared to the vanilla model. The value of $\epsilon $ from Definition UNKREF33, which controls a lower-bound on termination probability at each step, influences both $r_L$ and perplexity. When $\epsilon $ is too large ($\epsilon =10^{-2}$), perplexity degrades. When $\epsilon $ is too small ($\epsilon =10^{-4}$), the lower-bound grows slowly, so $\left<\text{eos}\right>$ is not guaranteed to be top-ranked within $L$ steps, and the metrics resemble the baseline's. An $\epsilon $ of $10^{-3}$ balanced consistency and language modeling quality, with a zero non-termination ratio and perplexity within 3 points of the baseline. For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition. <<</Self-terminating RNN.>>> <<</Consistency of the Proposed Methods>>> <<</Empirical Validation>>> <<<Future Directions>>> The methods we proposed in this paper have focused on how to resolve inconsistency from the viewpoint of decoding algorithms or model parameterization. Another approach is to address the issue of inconsistency in the learning phase. One interesting direction is to investigate whether maximum likelihood learning is a cause of inconsistency. Given a training set $\left\lbrace (C^{(n)}, Y^{(n)}) \right\rbrace _{n=1}^N$ drawn from a data distribution, maximum likelihood learning solves: where $\Omega (\theta )$ is a regularizer and $\lambda $ is a regularization weight. Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding. 
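For reference, the maximum likelihood problem referred to earlier in this section can be written in the standard regularized form below; this is a generic rendering based on the surrounding description (training pairs $(C^{(n)}, Y^{(n)})$, regularizer $\Omega (\theta )$, weight $\lambda $), not a verbatim reproduction of the paper's display.

$\hat{\theta } = \underset{\theta }{\arg \max }\ \frac{1}{N}\sum _{n=1}^{N} \log p_{\theta }(Y^{(n)}\,|\,C^{(n)}) - \lambda \, \Omega (\theta )$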
<<</Future Directions>>> <<<Conclusion>>> We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Conclusion" ], "type": "disordered_section" }
2001.06354
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Modality-Balanced Models for Visual Dialogue <<<Abstract>>> The Visual Dialog task requires a model to exploit both image and conversational context information to generate the next response to the dialogue. However, via manual analysis, we find that a large number of conversational questions can be answered by only looking at the image without any access to the context history, while others still need the conversation context to predict the correct answers. We demonstrate that due to this reason, previous joint-modality (history and image) models over-rely on and are more prone to memorizing the dialogue history (e.g., by extracting certain keywords or patterns in the context information), whereas image-only models are more generalizable (because they cannot memorize or extract keywords from history) and perform substantially better at the primary normalized discounted cumulative gain (NDCG) task metric which allows multiple correct answers. Hence, this observation encourages us to explicitly maintain two models, i.e., an image-only model and an image-history joint model, and combine their complementary abilities for a more balanced multimodal model. We present multiple methods for this integration of the two models, via ensemble and consensus dropout fusion with shared parameters. Empirically, our models achieve strong results on the Visual Dialog challenge 2019 (rank 3 on NDCG and high balance across metrics), and substantially outperform the winner of the Visual Dialog challenge 2018 on most metrics. <<</Abstract>>> <<<Introduction>>> When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information. 
We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores. Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics). 
<<</Introduction>>> <<<Related Work>>> <<<Visual Question Answering (VQA)>>> Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features. <<</Visual Question Answering (VQA)>>> <<<Visual Dialog>>> The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions. <<</Visual Dialog>>> <<</Related Work>>> <<<Models>>> In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective. <<<Features>>> Visual Features: For visual features, we use object features which are extracted from an image by using Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects (k=36 in our experiment), $d_{v}$ is dimension size of visual feature ($d_{v}$ = 2048 for ResNet backbone). 
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17, and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$. History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM, We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$. <<</Features>>> <<<Image-Only Model>>> We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB: where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space. where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of projected visual features and a question feature, $d_m$ is dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then get a visual representation vector, $v_{r}$ by weighted summing the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is trainable parameter, and $V_i$ is the $i$-th row vector of visual feature matrix $V$. The visual representation vector and a question feature vector are combined with element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$ which is further used to rank answers. where $\textrm {fc}_*$ is an fully-connected layer. <<<Answer Selection>>> For each round, there are 100 candidate answers. The $l$-th answer at round $r$, is encoded in the same way as question and history. where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$. <<</Answer Selection>>> <<</Image-Only Model>>> <<<Image-History Joint Model>>> We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15. where $w_s \in \mathbb {R}^{3d}$ is trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. 
From the similarity matrix, the new fused history representation is: Similarly, the new fused visual representation is: These fused features are then fed to the MFB module and attended w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation: where $v_{r}^f$ and $h_{r}^f$ are the weighted sums of the fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section. <<<Round Dropout>>> To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from the entire history except the image caption feature and throw them away, where $N_h^r$ is the number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$. <<</Round Dropout>>> <<</Image-History Joint Model>>> <<<Combining Image-Only & Image-History Joint Models>>> Since each of our models has different abilities, we exploit their complementary abilities by combining them in two ways. The first is our novel consensus dropout fusion, which integrates the two models at training time. The other way is to build an ensemble model from the two models at test time. <<<Consensus Dropout Fusion>>> In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23). <<<Consensus>>> We employ a consensus method in which the logits from each model are added to produce the final logit, following BIBREF19's approach: $L_{IJ} = L_{I} + L_{J}$, where $L_{I}$ and $L_{J}$ are the logits from the image-only model and the image-history joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits. <<</Consensus>>> <<<Instance Dropout>>> To allow the image-only model to have a stronger effect, producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add the two logits, we multiply $L_{J}$ by $I_{drop}$, where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the number of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work. <<</Instance Dropout>>> <<</Consensus Dropout Fusion>>> <<<Ensemble>>> We also integrate our two models via an ensemble. We train each model separately and combine them at test time. To be specific, we take the logits from the pre-trained models and select the answer with the highest sum of logits. <<</Ensemble>>> <<</Combining Image-Only & Image-History Joint Models>>> <<</Models>>> <<<Experimental Setup>>> <<<Dataset>>> We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context.
The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context. <<</Dataset>>> <<<Metrics>>> For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values. <<</Metrics>>> <<<Training Details>>> In our models, the size of word vectors is 300, the dimension of visual feature is 2048, and hidden size of LSTM units which are used for encoders of questions, context history, and candidate answers is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001 and decrease it by 0.0001 per epoch until 8th epoch and decay by 0.5 from 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3 and we tune the p value to 0.25 for our instance dropout in the consensus dropout fusion module. Cross-entropy is used to calculate the loss. <<</Training Details>>> <<</Experimental Setup>>> <<<Analysis and Results>>> In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model. <<<Human Evaluation: Is Image Alone Enough?>>> We conduct a human evaluation on image, history, and question. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set for the evaluation and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check if the answers can be inferred given corresponding questions and images instead of providing all the 100 candidate answers). Two annotators conduct the experiment independently and questions on which both annotators mark as being able to be answered only with images are classified as only-image questions otherwise as need-history questions. The inter-annotation agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. 
However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be inferred (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person who is a boy). <<</Human Evaluation: Is Image Alone Enough?>>> <<<Reduced Question-Answer Rounds>>> We next run our joint model with various lengths of history. To be specific, we make our joint model use only k previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the values of metrics and the number of history features. As the number of history features the joint model uses is increased, the score of NDCG is decreased while other metrics are increased. On the other hand, as the number of history features the joint model uses is decreased the score of NDCG is increased while other metrics are decreased. If we see the Visual Dialog primary task metric of NDCG as a barometer of the model's ability to generalize and the other metrics can be seen as an indicator of preciseness, this means that decreased size of history gives a model the ability of generalization at the cost of preciseness. From this tendency, the image-only model has the highest NDCG score. <<</Reduced Question-Answer Rounds>>> <<<Complementary Relation>>> If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To figure out this possibility, we compare each answer from the image-only model and the joint model. To be specific, for R@1, we list up the correct answers from each model and count answers which are in both sets, i.e., the intersection. From the intersection, we obtain the union of the two sets. For NDCG, there is not one single correct answer. So we roughly calculate the intersection by taking minimum values between the two models' scores and averaging them. As we can see in Table TABREF42, the intersections do not take the entire score of either model for both metrics. This could mean image-only and joint models have room to be improved by combining them together. <<</Complementary Relation>>> <<<Model Combination Results>>> Considering the complementary relation between image-only model and joint model, combining the two models would be a good approach to take the best from the both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26). <<<Consensus Dropout Fusion Results>>> As shown in Table TABREF46, consensus dropout fusion improves the score of NDCG by around 1.0 from the score of the joint model while still yielding comparable scores for other metrics. Unlike ensemble way, consensus dropout fusion does not require much increase in the number of model parameters. <<</Consensus Dropout Fusion Results>>> <<<Ensemble Model Results>>> As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation. 
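To make the two combination schemes concrete, the sketch below spells out the logit-level ensemble and the consensus dropout fusion described in the Models section. It is an illustrative numpy sketch, not the authors' code: the array names and shapes are assumptions, and the usual dropout rescaling of the kept logits is omitted for brevity.

import numpy as np

def ensemble_logits(logits_image_only, logits_joint):
    # Test-time ensemble: sum the two models' answer logits and rank the
    # 100 candidate answers by the summed score.
    return logits_image_only + logits_joint

def consensus_dropout_fusion(logits_image_only, logits_joint, p, rng, train=True):
    # Training-time fusion L_IJ = L_I + I_drop * L_J, where I_drop zeroes
    # whole instances (rows) of the joint model's logits with probability p,
    # giving the image-only model a stronger effect on the dropped instances.
    # Assumed shapes: (N * R, num_candidates), i.e. batch size N times R rounds.
    if not train:
        return logits_image_only + logits_joint
    keep = (rng.random(logits_joint.shape[0]) >= p).astype(logits_joint.dtype)
    return logits_image_only + keep[:, None] * logits_joint

# Hypothetical usage with N * R = 4 instances and 100 candidate answers.
rng = np.random.default_rng(0)
L_I = rng.normal(size=(4, 100))
L_J = rng.normal(size=(4, 100))
fused = consensus_dropout_fusion(L_I, L_J, p=0.25, rng=rng)
ranking = np.argsort(-fused, axis=1)    # per-instance candidate ranking

With p = 0 this reduces to plain consensus (logit addition), and increasing p shifts the balance toward the image-only model, which matches the trend reported in the dropout-rate ablation.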
<<</Ensemble Model Results>>> <<</Model Combination Results>>> <<<Final Visual Dialog Test Results>>> For the evaluation on the test-standard dataset of VisDial v1.0, we try an ensemble of 6 image-only models and an ensemble of 6 consensus dropout fusion models. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows a much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to the results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over the metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on the NDCG metric and a high rank based on the metric average. <<<Ensemble on More Models>>> We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each, 18 models in total) and evaluate it on the test-standard dataset of VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) fall between those of our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other when combined in an ensemble model, as we expected. <<</Ensemble on More Models>>> <<</Final Visual Dialog Test Results>>> <<</Analysis and Results>>> <<<Ablation Study>>> Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout could help the model avoid over-fitting to some patterns in the history features by intentionally dropping some of the features during training. Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rate affects the performance of the model. As shown in Table TABREF53, as the dropout rate increases, the NDCG score also increases while the scores of the non-NDCG metrics decrease. By changing the dropout rate, we can modulate the influence of each model (image-only and joint models) over the combined model. We choose a value of 0.25 for the dropout rate since it yields more balanced scores over all metrics. Ensemble Combination: We try different combinations of the image-only and joint models to build ensemble models. This gives three combinations, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J) ensemble models. As shown in Table TABREF54, the scores of the I+J ensemble model are comparable to those of the same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while for the other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model.
This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model stem from the complementary relation between the image-only and image-history joint models. Output Examples: Due to space constraints and because AAAI rules do not allow supplementary material, we provide detailed examples in this arXiv version's appendix, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of only-image questions and the ranking lists of the image-history joint and image-only models are also provided. <<</Ablation Study>>> <<<Conclusion>>> We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, that image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble models achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than the highest-ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learning to select the proper model based on the question type. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related Work, Conclusion" ], "type": "disordered_section" }
2001.06354
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Modality-Balanced Models for Visual Dialogue <<<Abstract>>> The Visual Dialog task requires a model to exploit both image and conversational context information to generate the next response to the dialogue. However, via manual analysis, we find that a large number of conversational questions can be answered by only looking at the image without any access to the context history, while others still need the conversation context to predict the correct answers. We demonstrate that due to this reason, previous joint-modality (history and image) models over-rely on and are more prone to memorizing the dialogue history (e.g., by extracting certain keywords or patterns in the context information), whereas image-only models are more generalizable (because they cannot memorize or extract keywords from history) and perform substantially better at the primary normalized discounted cumulative gain (NDCG) task metric which allows multiple correct answers. Hence, this observation encourages us to explicitly maintain two models, i.e., an image-only model and an image-history joint model, and combine their complementary abilities for a more balanced multimodal model. We present multiple methods for this integration of the two models, via ensemble and consensus dropout fusion with shared parameters. Empirically, our models achieve strong results on the Visual Dialog challenge 2019 (rank 3 on NDCG and high balance across metrics), and substantially outperform the winner of the Visual Dialog challenge 2018 on most metrics. <<</Abstract>>> <<<Introduction>>> When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information. 
We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores. Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics). 
<<</Introduction>>> <<<Related Work>>> <<<Visual Question Answering (VQA)>>> Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features. <<</Visual Question Answering (VQA)>>> <<<Visual Dialog>>> The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions. <<</Visual Dialog>>> <<</Related Work>>> <<<Models>>> In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective. <<<Features>>> Visual Features: For visual features, we use object features which are extracted from an image by using Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects (k=36 in our experiment), $d_{v}$ is dimension size of visual feature ($d_{v}$ = 2048 for ResNet backbone). 
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17, and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$. History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM, We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$. <<</Features>>> <<<Image-Only Model>>> We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB: where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space. where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of projected visual features and a question feature, $d_m$ is dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then get a visual representation vector, $v_{r}$ by weighted summing the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is trainable parameter, and $V_i$ is the $i$-th row vector of visual feature matrix $V$. The visual representation vector and a question feature vector are combined with element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$ which is further used to rank answers. where $\textrm {fc}_*$ is an fully-connected layer. <<<Answer Selection>>> For each round, there are 100 candidate answers. The $l$-th answer at round $r$, is encoded in the same way as question and history. where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$. <<</Answer Selection>>> <<</Image-Only Model>>> <<<Image-History Joint Model>>> We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15. where $w_s \in \mathbb {R}^{3d}$ is trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. 
From the similarity matrix, the new fused history representation is: Similarly, the new fused visual representation is: These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation: where $v_{r}^f$ and $h_{r}^f$ are weighted-sum of fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section. <<<Round Dropout>>> To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from entire history except image caption feature and throw them away. where $N_h^r$ is number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$. <<</Round Dropout>>> <<</Image-History Joint Model>>> <<<Combining Image-Only & Image-History Joint Models>>> Since each of our models has different abilities, we exploit their complementary abilities together by combining them in two ways. The first is our novel consensus dropout fusion which integrates the two models in training time. The other way is to build an ensemble model from the two models at test time. <<<Consensus Dropout Fusion>>> In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23). <<<Consensus>>> We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach. where $L_{I}$ and $L_{J}$ are the logit from image-only model and image-hitory joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits. <<</Consensus>>> <<<Instance Dropout>>> To allow the image-only model to have a stronger effect producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add two logits, we multiply $L_{J}$ by $I_{drop}$, where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work. <<</Instance Dropout>>> <<</Consensus Dropout Fusion>>> <<<Ensemble>>> We also integrate our 2 models via an ensemble. We train each model separately and combine them at test time. To be specific, we take logits from the pre-trained models and select the answer with the highest sum of logits. <<</Ensemble>>> <<</Combining Image-Only & Image-History Joint Models>>> <<</Models>>> <<<Experimental Setup>>> <<<Dataset>>> We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. 
The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context. <<</Dataset>>> <<<Metrics>>> For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values. <<</Metrics>>> <<<Training Details>>> In our models, the size of word vectors is 300, the dimension of visual feature is 2048, and hidden size of LSTM units which are used for encoders of questions, context history, and candidate answers is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001 and decrease it by 0.0001 per epoch until 8th epoch and decay by 0.5 from 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3 and we tune the p value to 0.25 for our instance dropout in the consensus dropout fusion module. Cross-entropy is used to calculate the loss. <<</Training Details>>> <<</Experimental Setup>>> <<<Analysis and Results>>> In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model. <<<Human Evaluation: Is Image Alone Enough?>>> We conduct a human evaluation on image, history, and question. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set for the evaluation and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check if the answers can be inferred given corresponding questions and images instead of providing all the 100 candidate answers). Two annotators conduct the experiment independently and questions on which both annotators mark as being able to be answered only with images are classified as only-image questions otherwise as need-history questions. The inter-annotation agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. 
However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be inferred (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person who is a boy). <<</Human Evaluation: Is Image Alone Enough?>>> <<<Reduced Question-Answer Rounds>>> We next run our joint model with various lengths of history. To be specific, we make our joint model use only k previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the values of metrics and the number of history features. As the number of history features the joint model uses is increased, the score of NDCG is decreased while other metrics are increased. On the other hand, as the number of history features the joint model uses is decreased the score of NDCG is increased while other metrics are decreased. If we see the Visual Dialog primary task metric of NDCG as a barometer of the model's ability to generalize and the other metrics can be seen as an indicator of preciseness, this means that decreased size of history gives a model the ability of generalization at the cost of preciseness. From this tendency, the image-only model has the highest NDCG score. <<</Reduced Question-Answer Rounds>>> <<<Complementary Relation>>> If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To figure out this possibility, we compare each answer from the image-only model and the joint model. To be specific, for R@1, we list up the correct answers from each model and count answers which are in both sets, i.e., the intersection. From the intersection, we obtain the union of the two sets. For NDCG, there is not one single correct answer. So we roughly calculate the intersection by taking minimum values between the two models' scores and averaging them. As we can see in Table TABREF42, the intersections do not take the entire score of either model for both metrics. This could mean image-only and joint models have room to be improved by combining them together. <<</Complementary Relation>>> <<<Model Combination Results>>> Considering the complementary relation between image-only model and joint model, combining the two models would be a good approach to take the best from the both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26). <<<Consensus Dropout Fusion Results>>> As shown in Table TABREF46, consensus dropout fusion improves the score of NDCG by around 1.0 from the score of the joint model while still yielding comparable scores for other metrics. Unlike ensemble way, consensus dropout fusion does not require much increase in the number of model parameters. <<</Consensus Dropout Fusion Results>>> <<<Ensemble Model Results>>> As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation. 
<<</Ensemble Model Results>>> <<</Model Combination Results>>> <<<Final Visual Dialog Test Results>>> For the evaluation on the test-standard dataset of VisDial v1.0, we try 6 image-only model ensemble and 6 consensus dropout fusion model ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on NDCG metric and high balance rank based on metric average. <<<Ensemble on More Models>>> We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each and total 18 models) and evaluate it on the test-standard dataset of the VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) are in between our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other by being combined in an ensemble model as we expected. <<</Ensemble on More Models>>> <<</Final Visual Dialog Test Results>>> <<</Analysis and Results>>> <<<Ablation Study>>> Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout could help the model avoid from over-fitting to some patterns in the history features by intentionally dropping some of the features in the training session. Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rates affect the performance of the model. As shown in Table.TABREF53, as the dropout rate increases the NDCG score is also increased while scores of non-NDCG metrics are decreased. By changing the dropout rate, we can modulate the influence of each model (image-only and joint models) over the combined model. We choose a value of 0.25 for the dropout rate since it yields more balanced scores over all metrics. Ensemble Combination: We try different combinations from image-only and joint models to build ensemble models. The total number of models amounts to 3, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J) ensemble models. As shown in Table TABREF54, scores of the I+J ensemble model are comparable to same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. 
This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model are from the complementary relation between image-only and image-history joint model. Output Examples: Due to space constraints and no supplementary allowed in AAAI rules, we provide detailed examples in this arxiv version's appendix, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of only-image questions, and the ranking lists of the image-history joint and image-only models are also provided. <<</Ablation Study>>> <<<Conclusion>>> We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Abstract" ], "type": "disordered_section" }
2001.06354
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Modality-Balanced Models for Visual Dialogue <<<Abstract>>> The Visual Dialog task requires a model to exploit both image and conversational context information to generate the next response to the dialogue. However, via manual analysis, we find that a large number of conversational questions can be answered by only looking at the image without any access to the context history, while others still need the conversation context to predict the correct answers. We demonstrate that due to this reason, previous joint-modality (history and image) models over-rely on and are more prone to memorizing the dialogue history (e.g., by extracting certain keywords or patterns in the context information), whereas image-only models are more generalizable (because they cannot memorize or extract keywords from history) and perform substantially better at the primary normalized discounted cumulative gain (NDCG) task metric which allows multiple correct answers. Hence, this observation encourages us to explicitly maintain two models, i.e., an image-only model and an image-history joint model, and combine their complementary abilities for a more balanced multimodal model. We present multiple methods for this integration of the two models, via ensemble and consensus dropout fusion with shared parameters. Empirically, our models achieve strong results on the Visual Dialog challenge 2019 (rank 3 on NDCG and high balance across metrics), and substantially outperform the winner of the Visual Dialog challenge 2018 on most metrics. <<</Abstract>>> <<<Introduction>>> When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information. 
We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores. Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics). 
<<</Introduction>>> <<<Related Work>>> <<<Visual Question Answering (VQA)>>> Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features. <<</Visual Question Answering (VQA)>>> <<<Visual Dialog>>> The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions. <<</Visual Dialog>>> <<</Related Work>>> <<<Models>>> In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective. <<<Features>>> Visual Features: For visual features, we use object features which are extracted from an image by using Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects (k=36 in our experiment), $d_{v}$ is dimension size of visual feature ($d_{v}$ = 2048 for ResNet backbone). 
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17, and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$. History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM, We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$. <<</Features>>> <<<Image-Only Model>>> We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB: where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space. where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of projected visual features and a question feature, $d_m$ is dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then get a visual representation vector, $v_{r}$ by weighted summing the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is trainable parameter, and $V_i$ is the $i$-th row vector of visual feature matrix $V$. The visual representation vector and a question feature vector are combined with element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$ which is further used to rank answers. where $\textrm {fc}_*$ is an fully-connected layer. <<<Answer Selection>>> For each round, there are 100 candidate answers. The $l$-th answer at round $r$, is encoded in the same way as question and history. where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$. <<</Answer Selection>>> <<</Image-Only Model>>> <<<Image-History Joint Model>>> We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15. where $w_s \in \mathbb {R}^{3d}$ is trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. 
From the similarity matrix, the new fused history representation is: Similarly, the new fused visual representation is: These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation: where $v_{r}^f$ and $h_{r}^f$ are weighted-sum of fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section. <<<Round Dropout>>> To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from entire history except image caption feature and throw them away. where $N_h^r$ is number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$. <<</Round Dropout>>> <<</Image-History Joint Model>>> <<<Combining Image-Only & Image-History Joint Models>>> Since each of our models has different abilities, we exploit their complementary abilities together by combining them in two ways. The first is our novel consensus dropout fusion which integrates the two models in training time. The other way is to build an ensemble model from the two models at test time. <<<Consensus Dropout Fusion>>> In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23). <<<Consensus>>> We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach. where $L_{I}$ and $L_{J}$ are the logit from image-only model and image-hitory joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits. <<</Consensus>>> <<<Instance Dropout>>> To allow the image-only model to have a stronger effect producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add two logits, we multiply $L_{J}$ by $I_{drop}$, where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work. <<</Instance Dropout>>> <<</Consensus Dropout Fusion>>> <<<Ensemble>>> We also integrate our 2 models via an ensemble. We train each model separately and combine them at test time. To be specific, we take logits from the pre-trained models and select the answer with the highest sum of logits. <<</Ensemble>>> <<</Combining Image-Only & Image-History Joint Models>>> <<</Models>>> <<<Experimental Setup>>> <<<Dataset>>> We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. 
The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context. <<</Dataset>>> <<<Metrics>>> For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values. <<</Metrics>>> <<<Training Details>>> In our models, the size of word vectors is 300, the dimension of visual feature is 2048, and hidden size of LSTM units which are used for encoders of questions, context history, and candidate answers is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001 and decrease it by 0.0001 per epoch until 8th epoch and decay by 0.5 from 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3 and we tune the p value to 0.25 for our instance dropout in the consensus dropout fusion module. Cross-entropy is used to calculate the loss. <<</Training Details>>> <<</Experimental Setup>>> <<<Analysis and Results>>> In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model. <<<Human Evaluation: Is Image Alone Enough?>>> We conduct a human evaluation on image, history, and question. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set for the evaluation and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check if the answers can be inferred given corresponding questions and images instead of providing all the 100 candidate answers). Two annotators conduct the experiment independently and questions on which both annotators mark as being able to be answered only with images are classified as only-image questions otherwise as need-history questions. The inter-annotation agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. 
However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be inferred (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person who is a boy). <<</Human Evaluation: Is Image Alone Enough?>>> <<<Reduced Question-Answer Rounds>>> We next run our joint model with various lengths of history. To be specific, we make our joint model use only k previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the values of metrics and the number of history features. As the number of history features the joint model uses is increased, the score of NDCG is decreased while other metrics are increased. On the other hand, as the number of history features the joint model uses is decreased the score of NDCG is increased while other metrics are decreased. If we see the Visual Dialog primary task metric of NDCG as a barometer of the model's ability to generalize and the other metrics can be seen as an indicator of preciseness, this means that decreased size of history gives a model the ability of generalization at the cost of preciseness. From this tendency, the image-only model has the highest NDCG score. <<</Reduced Question-Answer Rounds>>> <<<Complementary Relation>>> If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To figure out this possibility, we compare each answer from the image-only model and the joint model. To be specific, for R@1, we list up the correct answers from each model and count answers which are in both sets, i.e., the intersection. From the intersection, we obtain the union of the two sets. For NDCG, there is not one single correct answer. So we roughly calculate the intersection by taking minimum values between the two models' scores and averaging them. As we can see in Table TABREF42, the intersections do not take the entire score of either model for both metrics. This could mean image-only and joint models have room to be improved by combining them together. <<</Complementary Relation>>> <<<Model Combination Results>>> Considering the complementary relation between image-only model and joint model, combining the two models would be a good approach to take the best from the both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26). <<<Consensus Dropout Fusion Results>>> As shown in Table TABREF46, consensus dropout fusion improves the score of NDCG by around 1.0 from the score of the joint model while still yielding comparable scores for other metrics. Unlike ensemble way, consensus dropout fusion does not require much increase in the number of model parameters. <<</Consensus Dropout Fusion Results>>> <<<Ensemble Model Results>>> As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation. 
<<</Ensemble Model Results>>> <<</Model Combination Results>>> <<<Final Visual Dialog Test Results>>> For the evaluation on the test-standard dataset of VisDial v1.0, we try 6 image-only model ensemble and 6 consensus dropout fusion model ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on NDCG metric and high balance rank based on metric average. <<<Ensemble on More Models>>> We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each and total 18 models) and evaluate it on the test-standard dataset of the VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) are in between our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other by being combined in an ensemble model as we expected. <<</Ensemble on More Models>>> <<</Final Visual Dialog Test Results>>> <<</Analysis and Results>>> <<<Ablation Study>>> Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout could help the model avoid from over-fitting to some patterns in the history features by intentionally dropping some of the features in the training session. Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rates affect the performance of the model. As shown in Table.TABREF53, as the dropout rate increases the NDCG score is also increased while scores of non-NDCG metrics are decreased. By changing the dropout rate, we can modulate the influence of each model (image-only and joint models) over the combined model. We choose a value of 0.25 for the dropout rate since it yields more balanced scores over all metrics. Ensemble Combination: We try different combinations from image-only and joint models to build ensemble models. The total number of models amounts to 3, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J) ensemble models. As shown in Table TABREF54, scores of the I+J ensemble model are comparable to same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. 
This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model are from the complementary relation between image-only and image-history joint model. Output Examples: Due to space constraints and no supplementary allowed in AAAI rules, we provide detailed examples in this arxiv version's appendix, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of only-image questions, and the ranking lists of the image-history joint and image-only models are also provided. <<</Ablation Study>>> <<<Conclusion>>> We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Related Work" ], "type": "disordered_section" }
1910.08210
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> RTFM: Generalising to Novel Environment Dynamics via Reading <<<Abstract>>> Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps. <<</Abstract>>> <<<Introduction>>> Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments. Prior work on language grounding and language-based RL (see BIBREF3 for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, or the dynamics of the environment vary and are presented in language for some fixed goal BIBREF9. In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training. Our contributions are two-fold. First, we propose a grounded policy learning problem that we call (). In , the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. 
We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produced a combinatorially large number of environment dynamics to train and evaluate . Second, we propose to model the joint reasoning problem in . We show that generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM BIBREF10, BIBREF6 both in terms of sample efficiency and final win-rate on . Through curriculum learning where we adapt trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future. <<</Introduction>>> <<<Related Work>>> <<<Language-conditioned policy learning.>>> A growing body of research is learning policies that follow imperative instructions. The granularity of instructions vary from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal. <<</Language-conditioned policy learning.>>> <<<Language grounding.>>> Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation to not only new goal descriptions but new environments dynamics. <<</Language grounding.>>> <<</Related Work>>> <<<>>> We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training. To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. 
effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on. In , the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure FIGREF3 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix SECREF13 for details). In order to achieve the goal, the agent must cross-reference relevant information in the document as well as in the observations. During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer). In order to win the game (e.g. Figure FIGREF3), the agent must (1) identify the target team from the goal (e.g. Order of the Forest), (2) identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx), (3) identify which monster is in the world (e.g. goblin), and its element (e.g. fire), (4) identify the modifiers that are effective against this element (e.g. fanatical, shimmering), (5) find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword), (6) pick up the correct item (e.g. fanatical sword), and (7) engage the correct monster in combat (e.g. fire goblin). If the agent deviates from this trajectory (e.g. does not have the correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise. presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task.
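To make the episode-sampling procedure above concrete, here is a minimal Python sketch of the described generation steps: subsample monsters, modifiers and elements, randomly assign monsters to teams and modifiers to elements, then pick a target monster together with the item that defeats it and a distractor from a different team. The ontology lists, the one-to-one assignment, and every name below are illustrative assumptions, not the actual generator used in the paper.

import random

# Illustrative ontology; the real ontology is larger and customisable.
MONSTERS = ["goblin", "jaguar", "lynx", "bat", "wolf", "imp"]
TEAMS = ["Order of the Forest", "Rebel Enclave"]
ELEMENTS = ["fire", "poison", "cold", "lightning"]
MODIFIERS = ["fanatical", "arcane", "blessed", "shimmering"]
ITEMS = ["sword", "hammer", "axe", "staff"]

def sample_episode(rng):
    """Sample one assignment of dynamics plus a target and a distractor."""
    monsters = rng.sample(MONSTERS, 4)
    elements = rng.sample(ELEMENTS, 2)
    modifiers = rng.sample(MODIFIERS, 2)

    # One-to-one assignments here; the full task also allows many-to-one "group" assignments.
    team_of = {m: TEAMS[i % 2] for i, m in enumerate(monsters)}
    beats = dict(zip(modifiers, elements))  # modifier -> element it is effective against

    # Target: a monster from the goal team, its element, and the item that defeats it.
    target_team = rng.choice(TEAMS)
    target_monster = rng.choice([m for m in monsters if team_of[m] == target_team])
    target_element = elements[0]
    good_item = f"{modifiers[0]} {rng.choice(ITEMS)}"  # modifiers[0] beats elements[0]

    # Distractor: a monster from the other team and the item that would defeat it.
    distractor_monster = rng.choice([m for m in monsters if team_of[m] != target_team])
    distractor_element = elements[1]
    bad_item = f"{modifiers[1]} {rng.choice(ITEMS)}"

    # Document: randomly ordered templated statements describing the assignment.
    facts = [f"{m} belongs to the {team_of[m]}." for m in monsters]
    facts += [f"{mod} items are effective against {el} monsters." for mod, el in beats.items()]
    rng.shuffle(facts)

    goal = f"Defeat the {target_team}."
    world = {
        "target": f"{target_element} {target_monster}",
        "good_item": good_item,
        "distractor": f"{distractor_element} {distractor_monster}",
        "bad_item": bad_item,
    }
    return goal, facts, world

goal, document, world = sample_episode(random.Random(0))
print(goal)
print("\n".join(document))
print(world)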
In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised—the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand. We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion. In addition to the main tasks, we also study a simpler formulation called that has a fixed goal. In , the agent must interpret a document that describes the environment dynamics in order to solve the task. Given an set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure FIGREF49 shows an instance of . <<</>>> <<<Model>>> We propose the model, which builds representations that capture three-way interactions between the goal, document describing environment dynamics, and environment observations. We begin with definition of the () layer, which forms the core of our model. <<<() layer>>> Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning BIBREF10 and instruction following BIBREF6. In , the agent must not only filter concepts in the visual domain using language but filter concepts in the text domain using visual observations. To support this, builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure FIGREF12 shows the layer. We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table TABREF42 in appendix SECREF8. Let $_$ denote a fixed-length $_$-dimensional representation of the text and $_$ the representation of visual inputs with height $H$, width $W$, and $_$ channels. Let $$ denote a convolution layer. Let + and * symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. 
We first modulate visual features using text features: Unlike FiLM, we additionally modulate text features using visual features: The output of the layer consists of the sum of the modulated features $$, as well as a max-pooled summary $$ over this sum across spatial dimensions. <<</() layer>>> <<<The model>>> We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model. Let $_$ denote word embeddings corresponding to the observations from the environment, where $_[:, :, i, j]$ represents the embeddings corresponding to the $_$-word string that describes the objects in location $(i, j)$ in the grid-world. Let $_$, $_$, and $_$ respectively denote the embeddings corresponding to the $_$-word document, the $_$-word inventory, and the $_$-word goal. We first compute a fixed-length summary $_$ of the the goal using a bidirectional LSTM BIBREF18 followed by self-attention BIBREF19, BIBREF20. We abbreviate self-attention over the goal as $_= (_)$. We similarly compute a summary of the inventory as $_= (_(_))$. Next, we represent the document encoding conditioned on the goal using dot-product attention BIBREF21. We abbreviate attention over the document encoding conditioned on the goal summary as $_= {_}{_}$. Next, we build the joint representation of the inputs using successive layers. At each layer, the visual input to the layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature $_$ consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let ${a; b}$ denote the feature-wise concatenation of $a$ and $b$. For the $i$th layer, we have $_{\text{-}}(_)$ is another encoding of the document similar to $_$, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features $^{(0)} = {\sum _j_{, j}; _}$. We max pool a linear transform of the initial visual features to compute the initial visual summary $^{(0)} = (_^{(0)} + _)$. Let $$ denote visual summary of the last layer. We compute the policy $$ and baseline $$ as where $_{\rm policy}$ and $_{\rm baseline}$ are 2-layer multi-layer perceptrons with $$ activation. We train using an implementation of IMPALA BIBREF22, which decouples actors from learners and uses V-trace for off-policy correction. Please refer to appendix SECREF10 for details. 
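The modulation equations are missing from the extracted text above, but the prose gives the shape of the layer: convolved visual features are modulated by the text representation in FiLM style, the text representation is additionally modulated by a pooled visual summary, and the output is the sum of the modulated features together with a max-pooled summary of that sum over spatial dimensions. The PyTorch sketch below is an illustration under those assumptions only; the layer sizes, the linear projections producing the scale and shift terms, the mean-pooled visual summary, and the ReLU are guesses rather than the paper's implementation.

import torch
import torch.nn as nn

class BiModulationLayer(nn.Module):
    """Rough sketch of a FiLM-like layer that modulates vision with text and text with vision.

    Assumed shapes: text -- (B, d_text) fixed-length text features
                    vis  -- (B, c_in, H, W) visual features
    """

    def __init__(self, d_text: int, c_in: int, c_out: int):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        # Text -> (gamma, beta) used to modulate the convolved visual features.
        self.text_to_gb = nn.Linear(d_text, 2 * c_out)
        # Pooled visual summary -> (gamma, beta) used to modulate a projection of the text.
        self.text_proj = nn.Linear(d_text, c_out)
        self.vis_to_gb = nn.Linear(c_in, 2 * c_out)

    def forward(self, text: torch.Tensor, vis: torch.Tensor):
        # FiLM direction: modulate visual features using text features.
        gamma_v, beta_v = self.text_to_gb(text).chunk(2, dim=-1)
        vis_mod = gamma_v[:, :, None, None] * self.conv(vis) + beta_v[:, :, None, None]

        # Extra direction (unlike plain FiLM): modulate text features using visual features.
        vis_summary = vis.mean(dim=(2, 3))
        gamma_t, beta_t = self.vis_to_gb(vis_summary).chunk(2, dim=-1)
        text_mod = gamma_t * self.text_proj(text) + beta_t

        # Sum of the modulated features, plus a max-pooled summary over spatial dimensions.
        combined = torch.relu(vis_mod + text_mod[:, :, None, None])
        summary = combined.amax(dim=(2, 3))
        return combined, summary

# Tiny smoke test with made-up sizes.
layer = BiModulationLayer(d_text=32, c_in=16, c_out=64)
out, summ = layer(torch.randn(2, 32), torch.randn(2, 16, 8, 8))
print(out.shape, summ.shape)  # torch.Size([2, 64, 8, 8]) torch.Size([2, 64])

The point of the extra direction is that visual observations can filter concepts in the text just as text filters concepts in the visual input, which is the component that the no_text_mod ablation in the experiments removes.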
<<</The model>>> <<</Model>>> <<<Experiments>>> We consider variants of by varying the size of the grid-world ($6\times 6$ vs $10\times 10$), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section SECREF3 as there is no need to disambiguate among many assignees, making it easier to identify relevant information. We compare to the FiLM model by BIBREF6 and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluate on a held-out set of dynamics. We also study three variants of . In no_task_attn, the document attention conditioned on the goal utterance ((DISPLAY_FORM26)) is removed and the goal is instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer ((DISPLAY_FORM27)), and the document is instead represented through self-attention. In no_text_mod, text modulation using visual features ((DISPLAY_FORM14)) is removed. Please see appendix SECREF9 for details on our model and baselines, and appendix SECREF10 for training details. <<<Comparison to baselines and ablations>>> We compare to baselines and ablated variants on a simplified variant of in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure FIGREF29 shows that compared to baselines and ablated variants, is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks—it is the combination of ablated features that enables to win consistently. Qualitatively, the ablated variants converge to locally optimum policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\sim 50$% win rate. Table FIGREF29 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with outperforming FiLM and the CNN model. We find similar results for , its ablated variants, and baselines on other tasks (see appendix SECREF11 for details). <<</Comparison to baselines and ablations>>> <<<Curriculum learning for complex environments>>> Due to the long sequence of co-references the agent must perform in order to solve the full ($10\times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents), we design a curriculum to facilitate policy learning by starting with simpler variants of . We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain $10\times 10$ worlds with moving monsters, many-to-one group assignments and natural language templated descriptions. The performance across the curriculum is shown in Table TABREF32 (see Figure FIGREF58 in appendix SECREF12 for training curves of each stage). We see that curriculum learning is crucial to making progress on , and that initial policy training (first row of Table TABREF32) with additional complexities in any of the dimensions results in significantly worse performance.
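As a schematic of the curriculum described above (train on the simplest variant first, then reuse the resulting weights as the initialisation for progressively harder variants), the following Python sketch may be useful. The complexity flags mirror the dimensions named in the text, but the stage ordering and the make_env, make_model and train callables are hypothetical placeholders rather than the paper's actual IMPALA training code.

from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    """One curriculum stage; flags mirror the complexity dimensions in the text."""
    size: int    # grid-world size (6 or 10)
    group: bool  # many-to-one group assignments
    dyna: bool   # moving monsters
    nl: bool     # natural language templated documents

# Simplest variant first, then add one dimension of complexity at a time (assumed ordering).
CURRICULUM = [
    Stage(6, group=False, dyna=False, nl=False),
    Stage(6, group=True, dyna=False, nl=False),
    Stage(6, group=True, dyna=True, nl=False),
    Stage(6, group=True, dyna=True, nl=True),
    Stage(10, group=True, dyna=True, nl=True),  # full task
]

def run_curriculum(make_env, make_model, train, steps_per_stage):
    """Train through the curriculum, carrying weights from stage to stage.

    make_env, make_model, and train are placeholder callables standing in for the
    environment constructor, the policy network, and the RL training routine.
    """
    model = make_model()
    for stage in CURRICULUM:
        env = make_env(size=stage.size, group=stage.group, dyna=stage.dyna, nl=stage.nl)
        # Adapt the policy trained on the previous (simpler) stage to this one.
        model = train(model, env, steps=steps_per_stage)
    return model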
We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table TABREF33 shows variants of the last stage of the curriculum in which the model was trained on $6\times 6$ versions of the full and in which the model was trained on $10\times 10$ versions of the full . We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, performance of the final model trails that of human players, who can consistently solve . This highlights the difficulties of the problem and suggests that there is significant room for improvement in developing better language-grounded policy learners. <<<Attention maps.>>> Figure FIGREF34 shows attention conditioned on the goal and on observation summaries produced by intermediate layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggest that attention mechanisms in help identify relevant information in the document. <<</Attention maps.>>> <<<Analysis of trajectories and failure modes.>>> We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full . We find that well-performing policies exhibit a number of consistent behaviours such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, they occasionally get stuck in evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials. <<</Analysis of trajectories and failure modes.>>> <<</Curriculum learning for complex environments>>> <<</Experiments>>> <<<Conclusion>>> We proposed , a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study , we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed , a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, performs well on complex tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail the performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans BIBREF23 and induce hierarchical policies BIBREF24, BIBREF25.
<<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Related Work" ], "type": "disordered_section" }
1910.08210
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> RTFM: Generalising to Novel Environment Dynamics via Reading <<<Abstract>>> Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps. <<</Abstract>>> <<<Introduction>>> Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments. Prior work on language grounding and language-based RL (see BIBREF3 for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, or the dynamics of the environment vary and are presented in language for some fixed goal BIBREF9. In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training. Our contributions are two-fold. First, we propose a grounded policy learning problem that we call (). In , the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. 
We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produced a combinatorially large number of environment dynamics to train and evaluate . Second, we propose to model the joint reasoning problem in . We show that generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM BIBREF10, BIBREF6 both in terms of sample efficiency and final win-rate on . Through curriculum learning where we adapt trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future. <<</Introduction>>> <<<Related Work>>> <<<Language-conditioned policy learning.>>> A growing body of research is learning policies that follow imperative instructions. The granularity of instructions vary from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal. <<</Language-conditioned policy learning.>>> <<<Language grounding.>>> Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation to not only new goal descriptions but new environments dynamics. <<</Language grounding.>>> <<</Related Work>>> <<<>>> We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training. To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. 
effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on. In , the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure FIGREF3 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix SECREF13 for details). In order to achieve the goal, the agent must cross-reference relevant information in the document and as well as in the observations. During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer). In order to win the game (e.g. Figure FIGREF3), the agent must identify the target team from the goal (e.g. Order of the Forest) identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx) identify which monster is in the world (e.g. goblin), and its element (e.g. fire) identify the modifiers that are effective against this element (e.g. fanatical, shimmering) find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword) pick up the correct item (e.g. fanatical sword) engage the correct monster in combat (e.g. fire goblin). If the agent deviates from this trajectory (e.g. does not have correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise. presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. 
In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised—the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand. We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion. In addition to the main tasks, we also study a simpler formulation called that has a fixed goal. In , the agent must interpret a document that describes the environment dynamics in order to solve the task. Given an set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure FIGREF49 shows an instance of . <<</>>> <<<Model>>> We propose the model, which builds representations that capture three-way interactions between the goal, document describing environment dynamics, and environment observations. We begin with definition of the () layer, which forms the core of our model. <<<() layer>>> Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning BIBREF10 and instruction following BIBREF6. In , the agent must not only filter concepts in the visual domain using language but filter concepts in the text domain using visual observations. To support this, builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure FIGREF12 shows the layer. We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table TABREF42 in appendix SECREF8. Let $_$ denote a fixed-length $_$-dimensional representation of the text and $_$ the representation of visual inputs with height $H$, width $W$, and $_$ channels. Let $$ denote a convolution layer. Let + and * symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. 
We first modulate visual features using text features: Unlike FiLM, we additionally modulate text features using visual features: The output of the layer consists of the sum of the modulated features $$, as well as a max-pooled summary $$ over this sum across spatial dimensions. <<</() layer>>> <<<The model>>> We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model. Let $_$ denote word embeddings corresponding to the observations from the environment, where $_[:, :, i, j]$ represents the embeddings corresponding to the $_$-word string that describes the objects in location $(i, j)$ in the grid-world. Let $_$, $_$, and $_$ respectively denote the embeddings corresponding to the $_$-word document, the $_$-word inventory, and the $_$-word goal. We first compute a fixed-length summary $_$ of the the goal using a bidirectional LSTM BIBREF18 followed by self-attention BIBREF19, BIBREF20. We abbreviate self-attention over the goal as $_= (_)$. We similarly compute a summary of the inventory as $_= (_(_))$. Next, we represent the document encoding conditioned on the goal using dot-product attention BIBREF21. We abbreviate attention over the document encoding conditioned on the goal summary as $_= {_}{_}$. Next, we build the joint representation of the inputs using successive layers. At each layer, the visual input to the layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature $_$ consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let ${a; b}$ denote the feature-wise concatenation of $a$ and $b$. For the $i$th layer, we have $_{\text{-}}(_)$ is another encoding of the document similar to $_$, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features $^{(0)} = {\sum _j_{, j}; _}$. We max pool a linear transform of the initial visual features to compute the initial visual summary $^{(0)} = (_^{(0)} + _)$. Let $$ denote visual summary of the last layer. We compute the policy $$ and baseline $$ as where $_{\rm policy}$ and $_{\rm baseline}$ are 2-layer multi-layer perceptrons with $$ activation. We train using an implementation of IMPALA BIBREF22, which decouples actors from learners and uses V-trace for off-policy correction. Please refer to appendix SECREF10 for details. 
<<</The model>>> <<</Model>>> <<<Experiments>>> We consider variants of by varying the size of the grid-world ($6\times 6$ vs $10\times 10$), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section SECREF3 as there is no need to disambiguate among many assignees, making it easier to identify relevant information. We compare to the FiLM model by BIBREF6 and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluated on a held-out set of dynamics. We also study three variants of . In no_task_attn, the document attention conditioned on the goal utterance ((DISPLAY_FORM26)) is removed and the goal instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer ((DISPLAY_FORM27)), and the document is instead represented through self-attention. In no_text_mod, text modulation using visual features ((DISPLAY_FORM14)) is removed. Please see appendix SECREF9 for model details on our model and baselines, and appendix SECREF10 for training details. <<<Comparison to baselines and ablations>>> We compare to baselines and ablated variants on a simplified variant of in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure FIGREF29 shows that compared to baselines and ablated variants, is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks—it is the combination of ablated features that enables to win consistently. Qualitatively, the ablated variants converge to locally optimum policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\sim 50$% win rate. Table FIGREF29 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with outperforming FiLM and the CNN model. We find similar results for , its ablated variants, and baselines on other tasks (see appendix SECREF11 for details). <<</Comparison to baselines and ablations>>> <<<Curriculum learning for complex environments>>> Due to the long sequence of co-references the agent must perform in order to solve the full ($10\times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents) we design a curriculum to facilitate policy learning by starting with simpler variants of . We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain $10\times 10$ worlds with moving monsters, many-to-one group assignments and natural language templated descriptions. The performance across the curriculum is shown in Table TABREF32 (see Figure FIGREF58 in appendix SECREF12 for training curves of each stage). We see that curriculum learning is crucial to making progress on , and that initial policy training (first row of Table TABREF32) with additional complexities in any of the dimensions result in significantly worse performance. 
We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table TABREF33 shows variants of the last stage of the curriculum in which the model was trained on $6\times 6$ versions of the full and in which the model was trained on $10\times 10$ versions of the full . We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, performance of the final model trail that of human players, who can consistently solve . This highlights the difficulties of the problem and suggests that there is significant room for improvement in developing better language grounded policy learners. <<<Attention maps.>>> Figure FIGREF34 shows attention conditioned on the goal and on observation summaries produced by intermediate layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggests that attention mechanisms in help identify relevant information in the document. <<</Attention maps.>>> <<<Analysis of trajectories and failure modes.>>> We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full . We find that well-performing policies exhibit a number of consistent behaviours such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, it occasionally gets stuck in evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials. <<</Analysis of trajectories and failure modes.>>> <<</Curriculum learning for complex environments>>> <<</Experiments>>> <<<Conclusion>>> We proposed , a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study , we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed , a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, performs well on complex tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans BIBREF23 and induce hierarchical policies BIBREF24, BIBREF25. 
<<</Conclusion>>> <<</Title>>>
{ "references": [ "Related Work, Abstract" ], "type": "disordered_section" }
1910.08210
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> RTFM: Generalising to Novel Environment Dynamics via Reading <<<Abstract>>> Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps. <<</Abstract>>> <<<Introduction>>> Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments. Prior work on language grounding and language-based RL (see BIBREF3 for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, or the dynamics of the environment vary and are presented in language for some fixed goal BIBREF9. In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training. Our contributions are two-fold. First, we propose a grounded policy learning problem that we call (). In , the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. 
We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produced a combinatorially large number of environment dynamics to train and evaluate . Second, we propose to model the joint reasoning problem in . We show that generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM BIBREF10, BIBREF6 both in terms of sample efficiency and final win-rate on . Through curriculum learning where we adapt trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future. <<</Introduction>>> <<<Related Work>>> <<<Language-conditioned policy learning.>>> A growing body of research is learning policies that follow imperative instructions. The granularity of instructions vary from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal. <<</Language-conditioned policy learning.>>> <<<Language grounding.>>> Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation to not only new goal descriptions but new environments dynamics. <<</Language grounding.>>> <<</Related Work>>> <<<>>> We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training. To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. 
effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on. In , the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure FIGREF3 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix SECREF13 for details). In order to achieve the goal, the agent must cross-reference relevant information in the document and as well as in the observations. During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer). In order to win the game (e.g. Figure FIGREF3), the agent must identify the target team from the goal (e.g. Order of the Forest) identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx) identify which monster is in the world (e.g. goblin), and its element (e.g. fire) identify the modifiers that are effective against this element (e.g. fanatical, shimmering) find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword) pick up the correct item (e.g. fanatical sword) engage the correct monster in combat (e.g. fire goblin). If the agent deviates from this trajectory (e.g. does not have correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise. presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. 
In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised—the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand. We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion. In addition to the main tasks, we also study a simpler formulation called that has a fixed goal. In , the agent must interpret a document that describes the environment dynamics in order to solve the task. Given an set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure FIGREF49 shows an instance of . <<</>>> <<<Model>>> We propose the model, which builds representations that capture three-way interactions between the goal, document describing environment dynamics, and environment observations. We begin with definition of the () layer, which forms the core of our model. <<<() layer>>> Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning BIBREF10 and instruction following BIBREF6. In , the agent must not only filter concepts in the visual domain using language but filter concepts in the text domain using visual observations. To support this, builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure FIGREF12 shows the layer. We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table TABREF42 in appendix SECREF8. Let $_$ denote a fixed-length $_$-dimensional representation of the text and $_$ the representation of visual inputs with height $H$, width $W$, and $_$ channels. Let $$ denote a convolution layer. Let + and * symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. 
We first modulate visual features using text features: Unlike FiLM, we additionally modulate text features using visual features: The output of the layer consists of the sum of the modulated features $$, as well as a max-pooled summary $$ over this sum across spatial dimensions. <<</() layer>>> <<<The model>>> We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model. Let $_$ denote word embeddings corresponding to the observations from the environment, where $_[:, :, i, j]$ represents the embeddings corresponding to the $_$-word string that describes the objects in location $(i, j)$ in the grid-world. Let $_$, $_$, and $_$ respectively denote the embeddings corresponding to the $_$-word document, the $_$-word inventory, and the $_$-word goal. We first compute a fixed-length summary $_$ of the the goal using a bidirectional LSTM BIBREF18 followed by self-attention BIBREF19, BIBREF20. We abbreviate self-attention over the goal as $_= (_)$. We similarly compute a summary of the inventory as $_= (_(_))$. Next, we represent the document encoding conditioned on the goal using dot-product attention BIBREF21. We abbreviate attention over the document encoding conditioned on the goal summary as $_= {_}{_}$. Next, we build the joint representation of the inputs using successive layers. At each layer, the visual input to the layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature $_$ consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let ${a; b}$ denote the feature-wise concatenation of $a$ and $b$. For the $i$th layer, we have $_{\text{-}}(_)$ is another encoding of the document similar to $_$, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features $^{(0)} = {\sum _j_{, j}; _}$. We max pool a linear transform of the initial visual features to compute the initial visual summary $^{(0)} = (_^{(0)} + _)$. Let $$ denote visual summary of the last layer. We compute the policy $$ and baseline $$ as where $_{\rm policy}$ and $_{\rm baseline}$ are 2-layer multi-layer perceptrons with $$ activation. We train using an implementation of IMPALA BIBREF22, which decouples actors from learners and uses V-trace for off-policy correction. Please refer to appendix SECREF10 for details. 
<<</The model>>> <<</Model>>> <<<Experiments>>> We consider variants of by varying the size of the grid-world ($6\times 6$ vs $10\times 10$), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section SECREF3 as there is no need to disambiguate among many assignees, making it easier to identify relevant information. We compare to the FiLM model by BIBREF6 and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluate on a held-out set of dynamics. We also study three variants of . In no_task_attn, the document attention conditioned on the goal utterance ((DISPLAY_FORM26)) is removed and the goal instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer ((DISPLAY_FORM27)), and the document is instead represented through self-attention. In no_text_mod, text modulation using visual features ((DISPLAY_FORM14)) is removed. Please see appendix SECREF9 for details on our model and baselines, and appendix SECREF10 for training details. <<<Comparison to baselines and ablations>>> We compare to baselines and ablated variants on a simplified variant of in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure FIGREF29 shows that compared to baselines and ablated variants, is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks; it is the combination of ablated features that enables to win consistently. Qualitatively, the ablated variants converge to locally optimal policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\sim 50$% win rate. Table FIGREF29 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with outperforming FiLM and the CNN model. We find similar results for , its ablated variants, and baselines on other tasks (see appendix SECREF11 for details). <<</Comparison to baselines and ablations>>> <<<Curriculum learning for complex environments>>> Due to the long sequence of co-references the agent must perform in order to solve the full ($10\times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents), we design a curriculum to facilitate policy learning by starting with simpler variants of . We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain $10\times 10$ worlds with moving monsters, many-to-one group assignments and natural language templated descriptions. The performance across the curriculum is shown in Table TABREF32 (see Figure FIGREF58 in appendix SECREF12 for training curves of each stage). We see that curriculum learning is crucial to making progress on , and that initial policy training (first row of Table TABREF32) with additional complexities in any of the dimensions results in significantly worse performance. 
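To make the curriculum above concrete, the sketch below steps through progressively harder environment configurations and warm-starts each stage from the previous policy. The configuration flags mirror the group/dyna/nl dimensions named above, but the exact ordering, the win-rate threshold, and the train_stage/win_rate hooks are hypothetical and only illustrate the schedule.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvConfig:
    size: int    # grid-world size (6 or 10)
    group: bool  # many-to-one group assignments
    dyna: bool   # dynamic (moving) monsters
    nl: bool     # natural language templated documents

# Start simple and add one dimension of complexity at a time (illustrative ordering).
CURRICULUM = [
    EnvConfig(6, group=False, dyna=False, nl=False),
    EnvConfig(6, group=True, dyna=False, nl=False),
    EnvConfig(6, group=True, dyna=True, nl=False),
    EnvConfig(6, group=True, dyna=True, nl=True),
    EnvConfig(10, group=True, dyna=True, nl=True),
]

def train_with_curriculum(train_stage, win_rate, threshold=0.7):
    """train_stage(cfg, init_policy) -> policy and win_rate(policy, cfg) -> float
    are hypothetical hooks around the RL training loop."""
    policy = None
    for cfg in CURRICULUM:
        policy = train_stage(cfg, init_policy=policy)  # warm-start from the previous stage
        if win_rate(policy, cfg) < threshold:
            break  # stop adding complexity if the current stage was not mastered
    return policy
```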
We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table TABREF33 shows variants of the last stage of the curriculum in which the model was trained on $6\times 6$ versions of the full and in which the model was trained on $10\times 10$ versions of the full . We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, the performance of the final model trails that of human players, who can consistently solve . This highlights the difficulty of the problem and suggests that there is significant room for improvement in developing better language grounded policy learners. <<<Attention maps.>>> Figure FIGREF34 shows attention conditioned on the goal and on observation summaries produced by intermediate layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggest that attention mechanisms in help identify relevant information in the document. <<</Attention maps.>>> <<<Analysis of trajectories and failure modes.>>> We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full . We find that well-performing policies exhibit a number of consistent behaviours, such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, they occasionally get stuck evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials. <<</Analysis of trajectories and failure modes.>>> <<</Curriculum learning for complex environments>>> <<</Experiments>>> <<<Conclusion>>> We proposed , a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study , we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed , a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, performs well on complex tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail the performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans BIBREF23 and induce hierarchical policies BIBREF24, BIBREF25. 
<<</Conclusion>>> <<</Title>>>
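Before moving on, the procedural setup of the fixed-goal variant described earlier (whose name is elided in the text) can be sketched in a few lines: sample three character types, build a cyclic "beats" relation over them, and spawn a typed monster together with one item per type. The function name and return format below are invented for the illustration.

```python
import random

def sample_fixed_goal_episode(alphabet="abcdefghijklmnopqrstuvwxyz", seed=None):
    """Sketch of the fixed-goal variant: pick 3 character types, build a cyclic
    'beats' relation, and spawn a monster of a random type plus one item per type."""
    rng = random.Random(seed)
    types = rng.sample(alphabet, 3)
    # cyclic dependency: types[0] beats types[1], types[1] beats types[2], types[2] beats types[0]
    beats = {types[i]: types[(i + 1) % 3] for i in range(3)}
    monster_type = rng.choice(types)
    # the correct item is the type whose target in the cycle is the monster's type
    correct_item = next(t for t, weaker in beats.items() if weaker == monster_type)
    document = ", ".join(f"{t} beats {weaker}" for t, weaker in beats.items())
    return {"document": document, "monster": monster_type,
            "items": list(types), "winning_item": correct_item}

print(sample_fixed_goal_episode(seed=0))
```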
{ "references": [ "Abstract, Conclusion" ], "type": "disordered_section" }
1908.08593
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Revealing the Dark Secrets of BERT <<<Abstract>>> BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to its success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose the methodology and carry out a qualitative and quantitative analysis of the information encoded by the individual BERT's heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models. <<</Abstract>>> <<<Introduction>>> Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. <<</Introduction>>> <<<Related work>>> There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. 
BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. <<</Related work>>> <<<Methodology>>> We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. 
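All of the analyses below start from per-head self-attention maps. As a minimal sketch of how such maps can be extracted, assuming the current HuggingFace transformers interface rather than the exact pipeline used in the paper:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("The dark secrets of BERT.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attn = torch.stack(outputs.attentions)  # (layers, batch, heads, L, L); (12, 1, 12, L, L) here
self_attention_map = attn[3, 0, 7]      # layer 4, head 8: a single L x L map
```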
We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. <<</Methodology>>> <<<Experiments>>> In this section, we present the experiments conducted to address the above research questions. <<<BERT's self-attention patterns>>> Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. 
This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. <<<Results>>> fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. <<</Results>>> <<</BERT's self-attention patterns>>> <<<Relation-specific heads in BERT>>> In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentences is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. 
This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. <<</Relation-specific heads in BERT>>> <<<Change in self-attention patterns after fine-tuning>>> Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution. <<</Change in self-attention patterns after fine-tuning>>> <<<Attention to linguistic features>>> In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. <<</Attention to linguistic features>>> <<<Token-to-token attention>>> To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. <<</Token-to-token attention>>> <<<Disabling self-attention heads>>> Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. 
Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to the whole layer or multiple layers. <<</Disabling self-attention heads>>> <<</Experiments>>> <<<Discussion>>> In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. 2 out of 144 heads that seem to be “responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling of either one does not lead to a drop of accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy of making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. <<</Discussion>>> <<<Conclusion>>> In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is the key BERT's underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow to disentangle attention maps potentially encoding linguistic information and heads that use simple heuristics like attending to the following/previous tokens. <<</Conclusion>>> <<</Title>>>
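For reference, the head-disabling scheme used in the experiments above (replacing a head's attention distribution with the constant $\frac{1}{L}$) amounts to overwriting attention probabilities with a uniform matrix. The stand-alone function below sketches that operation on a tensor of attention probabilities; wiring it into a particular BERT implementation is left out, and the function name is invented.

```python
import torch

def disable_heads(attn_probs: torch.Tensor, heads_to_disable) -> torch.Tensor:
    """attn_probs: (batch, num_heads, L, L) attention probabilities for one layer.
    For each disabled head, every query attends uniformly with weight 1/L."""
    out = attn_probs.clone()
    seq_len = attn_probs.size(-1)
    uniform = torch.full_like(attn_probs[:, 0], 1.0 / seq_len)  # (batch, L, L)
    for h in heads_to_disable:
        out[:, h] = uniform
    return out

# Example: disable heads 1 and 5 in a random 12-head attention tensor.
probs = torch.softmax(torch.randn(2, 12, 16, 16), dim=-1)
probs = disable_heads(probs, heads_to_disable=[1, 5])
assert torch.allclose(probs[:, 1].sum(-1), torch.ones(2, 16))
```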
{ "references": [ "Introduction, Methodology" ], "type": "disordered_section" }
1908.08593
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Revealing the Dark Secrets of BERT <<<Abstract>>> BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to its success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose the methodology and carry out a qualitative and quantitative analysis of the information encoded by the individual BERT's heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models. <<</Abstract>>> <<<Introduction>>> Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. <<</Introduction>>> <<<Related work>>> There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. 
BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. <<</Related work>>> <<<Methodology>>> We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. 
We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. <<</Methodology>>> <<<Experiments>>> In this section, we present the experiments conducted to address the above research questions. <<<BERT's self-attention patterns>>> Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. 
This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. <<<Results>>> fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. <<</Results>>> <<</BERT's self-attention patterns>>> <<<Relation-specific heads in BERT>>> In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentences is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. 
This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. <<</Relation-specific heads in BERT>>> <<<Change in self-attention patterns after fine-tuning>>> Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution. <<</Change in self-attention patterns after fine-tuning>>> <<<Attention to linguistic features>>> In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. <<</Attention to linguistic features>>> <<<Token-to-token attention>>> To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. <<</Token-to-token attention>>> <<<Disabling self-attention heads>>> Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. 
Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to the whole layer or multiple layers. <<</Disabling self-attention heads>>> <<</Experiments>>> <<<Discussion>>> In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. 2 out of 144 heads that seem to be “responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling of either one does not lead to a drop of accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy of making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. <<</Discussion>>> <<<Conclusion>>> In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is the key BERT's underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow to disentangle attention maps potentially encoding linguistic information and heads that use simple heuristics like attending to the following/previous tokens. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related work, Conclusion" ], "type": "disordered_section" }
1908.08593
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Revealing the Dark Secrets of BERT <<<Abstract>>> BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to its success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose the methodology and carry out a qualitative and quantitative analysis of the information encoded by the individual BERT's heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models. <<</Abstract>>> <<<Introduction>>> Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. <<</Introduction>>> <<<Related work>>> There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. 
BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. <<</Related work>>> <<<Methodology>>> We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. 
We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. <<</Methodology>>> <<<Experiments>>> In this section, we present the experiments conducted to address the above research questions. <<<BERT's self-attention patterns>>> Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. 
This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. <<<Results>>> fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. <<</Results>>> <<</BERT's self-attention patterns>>> <<<Relation-specific heads in BERT>>> In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentences is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. 
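The per-head scoring just described (take the maximum attention weight over the annotated token pairs of a sentence, then average over sentences) can be sketched as follows. The input names and the symmetric treatment of the two attention directions are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the per-head FrameNet scoring described above.
# Assumed inputs:
#   maps_per_sent: list of arrays, one per sentence, shape (layers, heads, L, L)
#   links_per_sent: list of lists of (i, j) token-index pairs realizing an
#                   annotated frame-semantic relation in that sentence.
import numpy as np

def framenet_head_scores(maps_per_sent, links_per_sent):
    """Average, over sentences, of the max attention weight on linked pairs."""
    per_sentence = []
    for maps, links in zip(maps_per_sent, links_per_sent):
        layers, heads, _, _ = maps.shape
        best = np.zeros((layers, heads))
        for i, j in links:
            # Consider both attention directions (an assumption), since
            # attention weights are not symmetric.
            pair = np.maximum(maps[:, :, i, j], maps[:, :, j, i])
            best = np.maximum(best, pair)
        per_sentence.append(best)
    return np.mean(per_sentence, axis=0)   # shape (layers, heads)
```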
This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. <<</Relation-specific heads in BERT>>> <<<Change in self-attention patterns after fine-tuning>>> Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution. <<</Change in self-attention patterns after fine-tuning>>> <<<Attention to linguistic features>>> In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. <<</Attention to linguistic features>>> <<<Token-to-token attention>>> To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. <<</Token-to-token attention>>> <<<Disabling self-attention heads>>> Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. 
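Before the head-disabling procedure is defined in the next passage, here is a minimal sketch of the per-feature aggregation described in the "Attention to linguistic features" subsection above: sum the attention received by the token of interest, normalize by sequence length, and take the maximum over multiple tokens of the same type. Input names are hypothetical.

```python
# Minimal sketch of the per-feature attention aggregation described above.
# Assumed inputs:
#   maps: (layers, heads, L, L) attention maps for one example
#   feature_positions: indices of the token(s) of interest (e.g. all negation
#   tokens); examples without the feature are skipped upstream.
import numpy as np

def feature_attention_map(maps, feature_positions):
    """Per-head attention mass flowing into the feature token(s)."""
    layers, heads, L, _ = maps.shape
    scores = np.zeros((layers, heads))
    for pos in feature_positions:
        # Column `pos` holds the attention every token assigns to this token;
        # the summed weight is normalized by sequence length, as in the text.
        col_sum = maps[:, :, :, pos].sum(axis=-1) / L
        scores = np.maximum(scores, col_sum)   # max over multiple feature tokens
    return scores                               # shape (layers, heads)

# Averaging feature_attention_map over the examples of a task yields the
# (layers x heads) map that is compared against the pre-trained model's map.
```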
Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to the whole layer or multiple layers. <<</Disabling self-attention heads>>> <<</Experiments>>> <<<Discussion>>> In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. 2 out of 144 heads that seem to be “responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling of either one does not lead to a drop of accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy of making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. <<</Discussion>>> <<<Conclusion>>> In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is the key BERT's underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow to disentangle attention maps potentially encoding linguistic information and heads that use simple heuristics like attending to the following/previous tokens. <<</Conclusion>>> <<</Title>>>
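A minimal sketch of the head-disabling scheme defined in the "Disabling self-attention heads" subsection above, in which a disabled head assigns every token the constant weight $a = \frac{1}{L}$. Only the tensor operation is shown; hooking it into BERT's attention module (for example by wrapping that module's forward pass) and handling padding are assumptions left to the reader.

```python
# Minimal sketch of head disabling as defined above: the selected heads'
# attention rows are replaced with a uniform 1/L distribution, so every token
# receives the same weight while the information flow is preserved.
# Padding is ignored here; L is taken as the full sequence length.
import torch

def disable_heads(attn_probs: torch.Tensor, heads_to_disable) -> torch.Tensor:
    """attn_probs: (batch, num_heads, L, L) softmax-normalized attention."""
    L = attn_probs.size(-1)
    uniform = torch.full_like(attn_probs[:, 0], 1.0 / L)  # (batch, L, L)
    out = attn_probs.clone()
    for h in heads_to_disable:
        out[:, h] = uniform
    return out

# Example: disable heads 3 and 7 of one layer's attention tensor.
probs = torch.softmax(torch.randn(2, 12, 10, 10), dim=-1)
probs = disable_heads(probs, heads_to_disable=[3, 7])
```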
{ "references": [ "Introduction, Conclusion" ], "type": "disordered_section" }
1911.02711
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis <<<Abstract>>> Sentiment analysis provides a useful overview of customer review contents. Many review websites allow a user to enter a summary in addition to a full review. It has been shown that jointly predicting the review summary and the sentiment rating benefits both tasks. However, these methods consider the integration of review and summary information in an implicit manner, which limits their performance to some extent. In this paper, we propose a hierarchically-refined attention network for better exploiting multi-interaction between a review and its summary for sentiment analysis. In particular, the representation of a review is layer-wise refined by attention over the summary representation. Empirical results show that our model can better make use of user-written summaries for review sentiment analysis, and is also more effective compared to existing methods when the user summary is replaced with summary generated by an automatic summarization system. <<</Abstract>>> <<<Introduction>>> Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applicationsBIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for making sentiment classification. To this end, some recent work BIBREF6, BIBREF7 exploits joint modeling. The model structure can be illustrated by Figure FIGREF4. In particular, given a review input, a model is trained to simultaneously predict the sentiment and summary. As a result, both summary information and review information are integrated in the review encoder through back-propagation training. However, one limitation of this method is that it does not explicitly encode a summary during test time. One solution, as shown in Figure FIGREF4, is to train a separate summary generator, which learns to predict a summary given a review. This allows a sentiment classifier to simultaneously encode the review and its summary, before making a prediction using both representations. One further advantage of this model is that it can make use of a user-given summary if it is available with the review, which is the case for the review websites shown in Figure 1. We therefore investigate such a model. One limitation of this method, however, is that it does not capture interaction of review and summary information as thoroughly as the method shown in Figure FIGREF4, since the review and the summary are encoded using two separate encoders. To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. 
The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification. We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolutely improvements compared to the previous best method on SNAP Amazon review benchmark. <<</Introduction>>> <<<Related Work>>> The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11. In particular, attention-based models have been widely explored, which assign attention weights to hidden states to generate a representation of the input sequence. A hierarchical model with two levels of attention mechanisms was proposed for document classification BIBREF12. Self-attention mechanism has also been used in sentiment analysis BIBREF13, BIBREF14. However, BIBREF15 empirically showed that self-attention mechanism does not consistently agree with the most salient features, which means that self-attention models may suffer from attending on explicit but irrelevant sentimental words. Rationales were also introduced to sentiment analysis task. BIBREF16 proposed a unsupervised latent model that selects a rationale and then uses the rationale for sentiment analysis. A rationale-augmented CNN model BIBREF17 was proposed, which regards golden rationales as additional input and uses the probability as rationale-level attention weights to generate the final representation for text classification. There has also been work focusing on joint summarization and sentiment classification BIBREF6, BIBREF7, whose general structures are illustrated in Figure FIGREF4. These models can predict sentiment label and summary simultaneously. However, they do not encode summaries explicitly during test time, which makes their performance be limited to some extent. <<</Related Work>>> <<<Method>>> In this section, we introduce our proposed model in details. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in details, before finally giving the loss function and training methods. 
<<<Problem Formulation>>> The input to our task is a pair $(X^w, X^s)$, where $X^w = x^w_1, x^w_2, ..., x^w_n$ is a review and $X^s = x^s_1, x^s_2,...,x^s_m$ is its summary, and the task is to predict the sentiment label $y \in [1, 5]$, where 1 denotes the most negative sentiment and 5 denotes the most positive sentiment. $n$ and $m$ denote the lengths of the review and the summary in words, respectively. The training set is $D=\lbrace (X^w_i, X^s_i, y_i)\rbrace |_{i=1}^M$ where $M$ is the total number of training examples. <<</Problem Formulation>>> <<<Model Overview>>> Figure FIGREF5 gives the architecture of the proposed model, which consists of three modules: a summary encoder, a hierarchically-refined review encoder and an output layer. The summary encoder encodes the summary into a hidden state matrix. The review encoder consists of several layers for representing $\mathbf {x}^w$, each containing a sequence encoding sublayer and an attention inference sublayer. The sequence encoding sublayer encodes the review text as a word sequence. The attention inference layer acts as a key component, which takes the hidden states of both the original review and the summary as input, calculating dot-product attention weights for the original review under additional supervision from the summary information. Multi-head attention BIBREF18 and residual connections are also adopted. The output layer predicts the sentiment label from the hidden states of the previous layer. <<</Model Overview>>> <<<Summary Encoder>>> The input to the summary encoder is a sequence of summary word representations $\mathbf {x}^s = \mathbf {x}^s_1, \mathbf {x}^s_2, ..., \mathbf {x}^s_m = \lbrace emb(x_1^s), ..., emb(x_m^s)\rbrace $, where $emb$ denotes a word embedding lookup table. The word representations are fed into a standard BiLSTM. We adopt a standard LSTM formulation, in which a sequence of hidden states $\mathbf {h}_t$ is calculated from a sequence of inputs $\mathbf {x}_t$ ($t \in [1,...,m]$). A forward left-to-right LSTM layer and a backward right-to-left LSTM layer yield a sequence of forward hidden states $\lbrace {\stackrel{\rightarrow }{\mathbf {h}_1^s}},...,{\stackrel{\rightarrow }{\mathbf {h}_m^s}}\rbrace $ and a sequence of backward hidden states $\lbrace {\stackrel{\leftarrow }{\mathbf {h}_1^s}},...,{\stackrel{\leftarrow }{\mathbf {h}_m^s}}\rbrace $, respectively. The two hidden states are concatenated to form the final representation $\mathbf {h}^s_t = [{\stackrel{\rightarrow }{\mathbf {h}_t^s}} ; {\stackrel{\leftarrow }{\mathbf {h}_t^s}}]$. We then apply an average-pooling operation over these hidden states and take $\mathbf {h}^s = avg\_pooling(\mathbf {h}^s_1, \mathbf {h}^s_2,...,\mathbf {h}^s_m)$ as the final representation of the summary text. <<</Summary Encoder>>> <<<Hierarchically-Refined Review Encoder>>> The hierarchically-refined review encoder consists of several review encoder layers, each of which is composed of a sequence encoding layer and an attention inference layer. <<<Sequence Encoding Layer>>> Given a review $\mathbf {x}^w = \lbrace emb(x_1^w),...,emb(x_n^w)\rbrace $, another BiLSTM is adopted (the same formulation as in the summary encoder, but with separate parameters), deriving a sequence of review hidden states $\mathbf {H}^w=\lbrace \mathbf {h}^w_1, \mathbf {h}^w_2,...,\mathbf {h}^w_n \rbrace $. 
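A minimal sketch of the two BiLSTM encoders described above: a summary encoder whose concatenated hidden states are average-pooled into $\mathbf{h}^s$, and a separately parameterized review encoder producing $\mathbf{H}^w$. Dimensions, vocabulary size and the omission of padding masks are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the BiLSTM encoders described above (summary encoder with
# average pooling, and the review sequence-encoding sublayer).
# Padding and masking are omitted for brevity.
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, T) -> hidden states (batch, T, 2 * hidden_dim),
        # i.e. concatenated forward and backward LSTM states.
        states, _ = self.lstm(self.emb(token_ids))
        return states

summary_enc = BiLSTMEncoder(vocab_size=10000)
review_enc = BiLSTMEncoder(vocab_size=10000)    # separate parameters

summary_ids = torch.randint(0, 10000, (4, 12))  # (batch, m)
review_ids = torch.randint(0, 10000, (4, 80))   # (batch, n)

h_s = summary_enc(summary_ids).mean(dim=1)      # (batch, 2 * hidden): pooled h^s
H_w = review_enc(review_ids)                    # (batch, n, 2 * hidden): H^w
```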
<<</Sequence Encoding Layer>>> <<<Attention Inference Layer>>> In the attention inference layer, we model the dependencies between the original review and the summary with multi-head dot-product attention. Each head produces an attention matrix $\mathbf {\alpha } \in \mathbb {R}^{d_h \times 1}$ consisting of a set of similarity scores between the hidden state of each token of the review text and the summary representation. The hidden state outputs are calculated with scaled dot-product attention, where $\mathbf {W}_i^Q \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$, $\mathbf {W}_i^K \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ and $\mathbf {W}_i^V \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ are model parameters. $Q$, $K$ and $V$ represent the Query, Key and Value, respectively. $k$ is the number of parallel heads and $i \in [1,k]$ indicates which head is being processed. Following BIBREF18, we adopt a residual connection around each attention inference layer, followed by layer normalization BIBREF19. The resulting $\mathbf {H}$ is then fed to the subsequent sequence encoding layer as input, if any. According to the equations of the standard LSTM and Equation DISPLAY_FORM13, tokens of the original review that are the most relevant to the summary are focused on more by consulting the summary representation. The hidden states $\mathbf {H}^{w,s}$ are thus a representation matrix of the review text that encompasses key features of the summary representation. The multi-head attention mechanism ensures that multi-faceted semantic dependency features can be captured during the process, which is beneficial for scenarios where several key points exist in one review. Note also that our design of the review encoding part of the hierarchically-refined attention network is similar to the Transformer architecture in its use of multi-head attention, residual connections and layer normalization BIBREF18. However, our experiments show that a bi-directional LSTM works better than a self-attention network as the basic layer structure. This may result from the fact that the Transformer requires a larger amount of training data to be the most effective. <<</Attention Inference Layer>>> <<</Hierarchically-Refined Review Encoder>>> <<<Output Layer>>> Finally, global average pooling is applied to the output of the previous layer, followed by a classifier layer, where $\hat{y}$ is the predicted sentiment label, and $\mathbf {W}$ and $\mathbf {b}$ are parameters to be learned. <<</Output Layer>>> <<<Training>>> Given a dataset $D=\lbrace (X^w_t,X^s_t,y_t)\rbrace |_{t=1}^{M}$, our model can be trained by minimizing the cross-entropy loss between the predicted label distribution $\mathbf {p}$ and the gold label $y_t$, where $\mathbf {p}^{y_t}$ denotes the value in $\mathbf {p}$ that corresponds to $y_t$. <<</Training>>> <<</Method>>> <<<Experiments>>> We compare our model with several strong baselines and previous state-of-the-art methods, investigating its main effects. <<<Datasets>>> We empirically compare different methods using the Amazon SNAP Review Dataset BIBREF20, which is part of the Stanford Network Analysis Project. The raw dataset consists of around 34 million Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset are shown in Table TABREF20. 
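Returning briefly to the attention inference sublayer described in the Method section above, the following sketch shows multi-head dot-product attention between review and summary states with a residual connection and layer normalization. The text leaves the exact Query/Key/Value assignment open, so treating the review hidden states as queries and the summary hidden states as keys and values is an assumption, not the authors' code.

```python
# Minimal sketch of the attention inference sublayer: multi-head dot-product
# attention between review and summary states, with a residual connection and
# layer normalization. The Query/Key/Value assignment below is an assumption.
import torch
import torch.nn as nn

class AttentionInferenceLayer(nn.Module):
    def __init__(self, d_h=512, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_h, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_h)

    def forward(self, review_states, summary_states):
        # review_states: (batch, n, d_h); summary_states: (batch, m, d_h)
        attended, _ = self.attn(query=review_states,
                                key=summary_states,
                                value=summary_states)
        # Residual connection around the attention, then layer normalization.
        return self.norm(review_states + attended)

layer = AttentionInferenceLayer()
H_ws = layer(torch.randn(4, 80, 512), torch.randn(4, 12, 512))  # (4, 80, 512)
```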
For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set. <<</Datasets>>> <<<Experimental Settings>>> We use GloVe BIBREF22 300-dimensional embeddings as pretrained word vectors. A LSTM hidden size of 256 and four heads for multi-head attention mechanism are adopted. We use Adam BIBREF23 to optimize our model, with an initial learning rate of 0.0003, a decay rate of 0.97, momentum parameters $\beta _1 = 0.9$, $\beta _2 = 0.999$, and $\epsilon = 1 \times 10^{-8}$. The dropout rate is set depending on the size of each dataset, which is 0.5 for both Toys & Games and Sports & Outdoors and 0.2 for Movies & TV. We conduct experiments with both golden summaries and generated summaries. For generating automatic-decoded summaries, we train a pointer-generator network (PG-Net) with coverage mechanism BIBREF9, which is a specially designed sequence-to-sequence attention-based model that can generate the summary by copying words from the text document or generating words from a fixed vocabulary set at the same time. We generally follow the experimental settings in the original paper except for some minor adjustments specially made for our datasets. Noted that in our work PG-Net can be replaced by any other summarization model. <<</Experimental Settings>>> <<<Baselines>>> <<<HSSC @!START@BIBREF6@!END@.>>> This model adopts encoder parameter sharing for jointly sentiment classification and summarization. It predicts the sentiment label using a highway layer, concatenating the hidden state in summary decoder and the original text representation in encoder. <<</HSSC @!START@BIBREF6@!END@.>>> <<<SAHSSC @!START@BIBREF7@!END@.>>> This work also adopts encoder parameter sharing for jointly sentiment classification and summarization. They use two separate BiLSTMs with self-attention mechanism for generating review and summary representations. <<</SAHSSC @!START@BIBREF7@!END@.>>> <<<BiLSTM+Pooling.>>> For this baseline, we use a BiLSTM with hidden sizes of 256 in both directions, and average pooling across all hidden states to form the representation. This method serves as a naive baseline for making use of both review and summary in sentiment classification. It can also be used to compare the effectiveness of the review itself, the summary itself and the combination of both when used as inputs to the problem. <<</BiLSTM+Pooling.>>> <<<BiLSTM+Self-attention @!START@BIBREF13@!END@.>>> This baseline uses a BiLSTM with hidden size of 256 in both directions. On the top of BiLSTM, self-attention is used to provide a set of summation weight vectors for the final representation. This method is conceptually simple yet gives the state-of-the-art results for many classification and text matching tasks. Its main difference to our model lies in the fact that attention is performed only in the top hidden layer in this method, yet in every layer in ours. <<</BiLSTM+Self-attention @!START@BIBREF13@!END@.>>> <<<BiLSTM+Hard Attention>>> To demonstrate the efficiency of our model structure, we also adopt hard attention BIBREF24 for comparison, which is supervised using an extractive summarization objective. In particular, words in the original review that match to the corresponding summary are treated as the summary in their original order. 
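One possible way to derive the extractive labels used by this hard-attention baseline (review tokens that also occur in the summary, kept in their original order) is sketched below; the exact matching rule is an assumption, and the Titanic example in the next sentence illustrates the intended outcome.

```python
# Sketch of deriving extractive labels for the hard-attention baseline
# described above: review tokens that also occur in the summary are marked,
# in their original order. Case-folded exact matching is an assumption.
def extractive_labels(review_tokens, summary_tokens):
    summary_vocab = {t.lower() for t in summary_tokens}
    return [1 if t.lower() in summary_vocab else 0 for t in review_tokens]

review = "James Cameron 's Titanic is easily the most overrated film in history".split()
summary = "James Cameron 's 1997 Titanic is easily the most overrated film in history !".split()
print(extractive_labels(review, summary))  # 1 for every review token here
```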
In the case of Figure FIGREF3, the extractive summaries for the review are “James Cameron's Titanic is easily the most overrated film in history”, which corresponds to the user-written summary “James Cameron's 1997 Titanic is easily the most overrated film in history!”. The model also calculates another loss between attention weights and extractive summary labels, so that the hard attention weights are trained to strictly follow the extractive summary. For baselines that adopt the separate encoder structure, we generally calculate the representations of review and summary separately with two encoders that hold their own parameters, and then concatenate the two representations alongside the hidden-size dimension. For the joint encoder baselines, we first concatenate the review and summary text, and then encode the concatenated text with one single encoder. <<</BiLSTM+Hard Attention>>> <<</Baselines>>> <<<Development Experiments>>> We use the Toys & Games development set to investigate different key configurations of our model. The results are shown in Table TABREF29. <<<Self-attention Baseline>>> We compare different numbers of BiLSTM layers and hidden sizes in BiLSTM self-attention. As can be seen, with more layers a stacked BiLSTM with larger hidden sizes does not give better results compared to a hidden size of 256 either. <<</Self-attention Baseline>>> <<<Hidden Size>>> We see an evident improvement of our model when the hidden size increases from 128 to 256. However, the improvement becomes relatively small compared to a large increase in the number of parameters when the hidden size is further increased to 360. Therefore, we adopt 256 as the hidden size in our experiments. <<</Hidden Size>>> <<<Number of Layers>>> As Table TABREF29 shows, the accuracy increases when increasing layer numbers from 1 to 2. More layers do not increase the accuracy on development set. We thus set 2 as the number of review encoder layers in the experiments. The best performing model size is comparable to that of the BiLSTM self-attention, demonstrating that the number of parameters is not the key factor to models' performance. <<</Number of Layers>>> <<</Development Experiments>>> <<<Results>>> Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary. A comparison between models that use summary information and those that do not use summary information shows that the review summary is useful for sentiment classification. In addition, the same models work consistently better when the user written gold summary is used compared to a system generated summary, which is intuitively reasonable since the current state-of-the-art abstractive summarization models are far from perfect. 
Interestingly, as shown in the second section of the table, the gold summary itself does not lead to better sentiment accuracy compared with the review itself, which shows that summaries better serve as auxiliary information sources to review contents. With both gold summaries and automatic-generated summaries, our model gives better results as compared to BiLSTM+self-attention. The latter integrates information from reviews and summaries only in the top representation layer, which is also the standard practice in question answering BIBREF25 and machine translation BIBREF26 models. In contrast, our model integrates summary information into the review representation in each layer, thereby allowing the integrated representation to be hierarchically refined, leading to more abstract hidden states. Finally, the fact that with gold summary, our baseline and final models outperforms the state-of-the-art methods by jointly training shows the importance of making use of user written summaries when they are available. Even with system summary, out models still outperforms HSSC and SAHSSC, showing that our network is more effective than parameter sharing under the same setting without input summaries. <<<Review Length>>> Figure FIGREF37 consists of line graphs on the accuracy of BiLSTM+self-attention, BiLSTM+pooling and our model against the review length. As the review length increases, the performance of all models decreases. BiLSTM+self-attention does not outperform BiLSTM+pooling on long text. Our method gives better results compared to two baseline models for long reviews, demonstrating that our model is effective for capturing long-term dependency. This is likely because hierarchically-refined attention maintains the most salient information while ignoring the redundant parts of the original review text. Our model can thus be more robust when review has irrelevant sentimental words, which usually exists in larger reviews such as the example in Figure FIGREF3. The hierarchical architecture allows the lower layers to encode local information, while the higher layers can capture long-term dependency and thus better encode global information. <<</Review Length>>> <<<Case Study>>> Our model has a natural advantage of interpretability thanks to the use of attention inference layer. We visualize the hierarchically-refined attention of two samples from the test set of Toys & Games. We also visualize self-attention distribution for fair comparison. To make the visualizations clear and to avoid confusion, we choose to visualize the most salient parts, by rescaling all attention weights into an interval of $[0, 100]$ and adopting 50 as a threshold for attention visualization, showing only attention weights $\ge 50$. As shown in Figure FIGREF38, the example with generated summary has 5 stars as its golden rating score. The summary text is “fun for the whole new game in all ages ! ! ! fun ! ! !", which suggests that the game is (1) fun (from word “fun") and (2) not difficult to learn (from phrase “all ages"). It can be seen that both the self-attention model and the first layer of our model attend to the strongly positive phrase “quite fun", which is relevant to the word “fun" in the summary. In comparisons the second layer attends to the phrase “much easier", which is relevant to the phrase “in all ages" in the summary. This verifies our model's effectiveness of leveraging abstractive summary information. Figure FIGREF38 illustrates a 5-star-rating example with golden summary. 
The summary text is “Favorite Game to Teach to Newbies". As shown in the heatmap, self-attention can only attend to some general sentimental words, such as “hard", “fun", “immensely" and “most", which deviates from the main idea of the document text. In comparison, the first layer of our model attends to phrases like “easy to teach", which is a perfect match of the phrase “teach to newbies" in the summary. This shows that the shallow sequence inference layer can learn direct similarity matching information under the supervision of summarization. In addition, the second layer of our model attends to phrases including “would recommend this to anyone", which links to “easy to teach" and “Teach to Newbies", showing that the deeper sequence inference layer of our model can learn potential connections between the review and the summary. <<</Case Study>>> <<</Results>>> <<</Experiments>>> <<<Conclusion>>> We investigated a hierarchically-refined attention network for better sentiment prediction. Our model allows multi-interaction between summary and review representation in a hierarchical manner. Empirical results show that the proposed method outperforms all strong baselines and previous work and achieves new state-of-the-art performance on SNAP Amazon Review dataset. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Method" ], "type": "disordered_section" }
1911.02711
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis <<<Abstract>>> Sentiment analysis provides a useful overview of customer review contents. Many review websites allow a user to enter a summary in addition to a full review. It has been shown that jointly predicting the review summary and the sentiment rating benefits both tasks. However, these methods consider the integration of review and summary information in an implicit manner, which limits their performance to some extent. In this paper, we propose a hierarchically-refined attention network for better exploiting multi-interaction between a review and its summary for sentiment analysis. In particular, the representation of a review is layer-wise refined by attention over the summary representation. Empirical results show that our model can better make use of user-written summaries for review sentiment analysis, and is also more effective compared to existing methods when the user summary is replaced with summary generated by an automatic summarization system. <<</Abstract>>> <<<Introduction>>> Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applicationsBIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for making sentiment classification. To this end, some recent work BIBREF6, BIBREF7 exploits joint modeling. The model structure can be illustrated by Figure FIGREF4. In particular, given a review input, a model is trained to simultaneously predict the sentiment and summary. As a result, both summary information and review information are integrated in the review encoder through back-propagation training. However, one limitation of this method is that it does not explicitly encode a summary during test time. One solution, as shown in Figure FIGREF4, is to train a separate summary generator, which learns to predict a summary given a review. This allows a sentiment classifier to simultaneously encode the review and its summary, before making a prediction using both representations. One further advantage of this model is that it can make use of a user-given summary if it is available with the review, which is the case for the review websites shown in Figure 1. We therefore investigate such a model. One limitation of this method, however, is that it does not capture interaction of review and summary information as thoroughly as the method shown in Figure FIGREF4, since the review and the summary are encoded using two separate encoders. To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. 
The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification. We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolutely improvements compared to the previous best method on SNAP Amazon review benchmark. <<</Introduction>>> <<<Related Work>>> The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11. In particular, attention-based models have been widely explored, which assign attention weights to hidden states to generate a representation of the input sequence. A hierarchical model with two levels of attention mechanisms was proposed for document classification BIBREF12. Self-attention mechanism has also been used in sentiment analysis BIBREF13, BIBREF14. However, BIBREF15 empirically showed that self-attention mechanism does not consistently agree with the most salient features, which means that self-attention models may suffer from attending on explicit but irrelevant sentimental words. Rationales were also introduced to sentiment analysis task. BIBREF16 proposed a unsupervised latent model that selects a rationale and then uses the rationale for sentiment analysis. A rationale-augmented CNN model BIBREF17 was proposed, which regards golden rationales as additional input and uses the probability as rationale-level attention weights to generate the final representation for text classification. There has also been work focusing on joint summarization and sentiment classification BIBREF6, BIBREF7, whose general structures are illustrated in Figure FIGREF4. These models can predict sentiment label and summary simultaneously. However, they do not encode summaries explicitly during test time, which makes their performance be limited to some extent. <<</Related Work>>> <<<Method>>> In this section, we introduce our proposed model in details. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in details, before finally giving the loss function and training methods. 
<<<Problem Formulation>>> The input to our task is a pair $(X^w, X^s)$, where $X^w = x^w_1, x^w_2, ..., x^w_n$ is a summary and $X^s = x^s_1, x^s_2,...,x^s_m$ is a review, the task is to predict the sentiment label $y \in [1, 5]$, where 1 denotes the most negative sentiment and 5 denotes the most positive sentiment. $n$ and $m$ denote the size of the review and summary in the number of words, respectively. The training set is $D=\lbrace (X^w_i, X^s_i, y_i)\rbrace |_{i=1}^M$ where $M$ is the total number of training examples. <<</Problem Formulation>>> <<<Model Overview>>> Figure FIGREF5 gives the architecture of the proposed model, which consists of three modules: a summary encoder, a hierarchically-refined review encoder and an output layer. The summary encoder encodes the summary into a hidden state matrix. The review encoder consists of several layers for representing $\mathbf {x}^w$, each containing a sequence encoding sublayer and an attention inference sublayer. The sequence encoding sublayer encodes the review text as a word sequence. The attention inference layer acts as a key component, which takes the hidden states from both the original review and the summary as input calculating dot-product attention weights for original review under additional supervision from summary information. Multi-head attention BIBREF18 as well as residual connection are also adopted. The output layer predicts the potential sentiment label according to hidden states from the previous layer. <<</Model Overview>>> <<<Summary Encoder>>> Input for the summary encoder is a sequence of summary word representations $\mathbf {x}^s = \mathbf {x}^s_1, \mathbf {x}^s_2, ..., \mathbf {x}^s_m = \lbrace emb(x_1^s), ..., emb(x_m^s)\rbrace $, where $emb$ denotes a word embedding lookup table. Word representations are fed into a standard BiLSTM. We adopt a standard LSTM formulation, where a sequence of hidden states $\mathbf {h}_t$ are calculated from a sequence of $\mathbf {x}_t$($t \in [1,...,m]$). A forward left-to-right LSTM layer and a backward right-to-left LSTM yield a sequence of forward hidden states $\lbrace {\stackrel{\rightarrow }{\mathbf {h}_1^s}},...,{\stackrel{\rightarrow }{\mathbf {h}_n^s}}\rbrace $ and a sequence of backward hidden states $\lbrace {\stackrel{\leftarrow }{\mathbf {h}_1^s}},...,{\stackrel{\leftarrow }{\mathbf {h}_n^s}}\rbrace $, respectively. The two hidden states are concatenated to form a final representation: We then apply an average-pooling operation over the hidden and take $\mathbf {h}^s = avg\_pooling(\mathbf {h}^s_1, \mathbf {h}^s_2,...,\mathbf {h}^s_n)$ as the final representation of summary text. <<</Summary Encoder>>> <<<Hierarchically-Refined Review Encoder>>> The hierarchically-refined review encoder consists of several review encoder layers, each of which is composed of a sequence encoding layer and an attention inference layer. <<<Sequence Encoding Layer>>> Given a review $\mathbf {x}^w = \lbrace emb(x_1^w),...,emb(x_n^w)\rbrace $, another BiLSTM is adopted (the same equation with different parameters compared to the one used in the summary encoder), deriving a sequence of review hidden states $\mathbf {H}^w=\lbrace \mathbf {h}^w_1, \mathbf {h}^w_2,...,\mathbf {h}^s_n \rbrace $. 
<<</Sequence Encoding Layer>>> <<<Attention Inference Layer>>> In the attention inference layer, we model the dependencies between the original review and the summary with multi-head dot-product attention.Each head produces an attention matrix $\mathbf {\alpha } \in \mathbb {R}^{d_h \times 1}$ consisting of a set of similarity scores between the hidden state of each token of the review text and the summary representation. The hidden state outputs are calculated by where $\mathbf {W}_i^Q \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$, $\mathbf {W}_i^K \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ and $\mathbf {W}_i^V \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ are model parameters. $Q$, $K$ and $V$ represent Query, Key and Value, respectively. $k$ is the number of parallel heads and $i \in [1,k]$ indicates which head is being processed. Following BIBREF18, we adopt a residual connection around each attention inference layer, followed by layer normalization BIBREF19 : $\mathbf {H}$ is then fed to the subsequent sequence encoding layer as input, if any. According to the equations of standard LSTM and Equation DISPLAY_FORM13, tokens of the original review that are the most relevant to the summary are focused on more by consulting summary representation. The hidden states $\mathbf {H}^{w,s}$ are thus a representation matrix of the review text that encompass key features of summary representation. Multi-head attention mechanism ensures that multi-faced semantic dependency features can be captured during the process, which is beneficial for scenarios where several key points exist in one review. Note also that our design of the review encoding part of the hierarchically-refined attention network is similar to the Transformer architecture in the use of multi-head attention, residual connection and layer normalization BIBREF18. However, our experiments show that bi-directional LSTM works better compared to self-attention network as a basic layer structure. This may result from the fact that Transformer requires a larger amount of training data for the most effectiveness. <<</Attention Inference Layer>>> <<</Hierarchically-Refined Review Encoder>>> <<<Output Layer>>> Finally, global average pooling is applied after the previous layer, and then followed by a classifier layer: where $\hat{y}$ is the predicted sentiment label; $\mathbf {W}$ and $\mathbf {b}$ are parameters to be learned. <<</Output Layer>>> <<<Training>>> Given a dataset $D={\lbrace (X^w_t,X^s_t,y_t)\rbrace }|^{|T|}_{t=1}$, our model can be trained by minimizing the cross-entropy loss between where $\mathbf {p}^{y_t}$ denotes the value of the label in $\mathbf {p}$ that corresponds to $y_t$. <<</Training>>> <<</Method>>> <<<Experiments>>> We compare our model with several strong baselines and previous state-of-the-art methods, investigating its main effects. <<<Datasets>>> We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. 
For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set. <<</Datasets>>> <<<Experimental Settings>>> We use GloVe BIBREF22 300-dimensional embeddings as pretrained word vectors. A LSTM hidden size of 256 and four heads for multi-head attention mechanism are adopted. We use Adam BIBREF23 to optimize our model, with an initial learning rate of 0.0003, a decay rate of 0.97, momentum parameters $\beta _1 = 0.9$, $\beta _2 = 0.999$, and $\epsilon = 1 \times 10^{-8}$. The dropout rate is set depending on the size of each dataset, which is 0.5 for both Toys & Games and Sports & Outdoors and 0.2 for Movies & TV. We conduct experiments with both golden summaries and generated summaries. For generating automatic-decoded summaries, we train a pointer-generator network (PG-Net) with coverage mechanism BIBREF9, which is a specially designed sequence-to-sequence attention-based model that can generate the summary by copying words from the text document or generating words from a fixed vocabulary set at the same time. We generally follow the experimental settings in the original paper except for some minor adjustments specially made for our datasets. Noted that in our work PG-Net can be replaced by any other summarization model. <<</Experimental Settings>>> <<<Baselines>>> <<<HSSC @!START@BIBREF6@!END@.>>> This model adopts encoder parameter sharing for jointly sentiment classification and summarization. It predicts the sentiment label using a highway layer, concatenating the hidden state in summary decoder and the original text representation in encoder. <<</HSSC @!START@BIBREF6@!END@.>>> <<<SAHSSC @!START@BIBREF7@!END@.>>> This work also adopts encoder parameter sharing for jointly sentiment classification and summarization. They use two separate BiLSTMs with self-attention mechanism for generating review and summary representations. <<</SAHSSC @!START@BIBREF7@!END@.>>> <<<BiLSTM+Pooling.>>> For this baseline, we use a BiLSTM with hidden sizes of 256 in both directions, and average pooling across all hidden states to form the representation. This method serves as a naive baseline for making use of both review and summary in sentiment classification. It can also be used to compare the effectiveness of the review itself, the summary itself and the combination of both when used as inputs to the problem. <<</BiLSTM+Pooling.>>> <<<BiLSTM+Self-attention @!START@BIBREF13@!END@.>>> This baseline uses a BiLSTM with hidden size of 256 in both directions. On the top of BiLSTM, self-attention is used to provide a set of summation weight vectors for the final representation. This method is conceptually simple yet gives the state-of-the-art results for many classification and text matching tasks. Its main difference to our model lies in the fact that attention is performed only in the top hidden layer in this method, yet in every layer in ours. <<</BiLSTM+Self-attention @!START@BIBREF13@!END@.>>> <<<BiLSTM+Hard Attention>>> To demonstrate the efficiency of our model structure, we also adopt hard attention BIBREF24 for comparison, which is supervised using an extractive summarization objective. In particular, words in the original review that match to the corresponding summary are treated as the summary in their original order. 
In the case of Figure FIGREF3, the extractive summaries for the review are “James Cameron's Titanic is easily the most overrated film in history”, which corresponds to the user-written summary “James Cameron's 1997 Titanic is easily the most overrated film in history!”. The model also calculates another loss between attention weights and extractive summary labels, so that the hard attention weights are trained to strictly follow the extractive summary. For baselines that adopt the separate encoder structure, we generally calculate the representations of review and summary separately with two encoders that hold their own parameters, and then concatenate the two representations alongside the hidden-size dimension. For the joint encoder baselines, we first concatenate the review and summary text, and then encode the concatenated text with one single encoder. <<</BiLSTM+Hard Attention>>> <<</Baselines>>> <<<Development Experiments>>> We use the Toys & Games development set to investigate different key configurations of our model. The results are shown in Table TABREF29. <<<Self-attention Baseline>>> We compare different numbers of BiLSTM layers and hidden sizes in BiLSTM self-attention. As can be seen, with more layers a stacked BiLSTM with larger hidden sizes does not give better results compared to a hidden size of 256 either. <<</Self-attention Baseline>>> <<<Hidden Size>>> We see an evident improvement of our model when the hidden size increases from 128 to 256. However, the improvement becomes relatively small compared to a large increase in the number of parameters when the hidden size is further increased to 360. Therefore, we adopt 256 as the hidden size in our experiments. <<</Hidden Size>>> <<<Number of Layers>>> As Table TABREF29 shows, the accuracy increases when increasing layer numbers from 1 to 2. More layers do not increase the accuracy on development set. We thus set 2 as the number of review encoder layers in the experiments. The best performing model size is comparable to that of the BiLSTM self-attention, demonstrating that the number of parameters is not the key factor to models' performance. <<</Number of Layers>>> <<</Development Experiments>>> <<<Results>>> Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary. A comparison between models that use summary information and those that do not use summary information shows that the review summary is useful for sentiment classification. In addition, the same models work consistently better when the user written gold summary is used compared to a system generated summary, which is intuitively reasonable since the current state-of-the-art abstractive summarization models are far from perfect. 
Interestingly, as shown in the second section of the table, the gold summary itself does not lead to better sentiment accuracy compared with the review itself, which shows that summaries better serve as auxiliary information sources to review contents. With both gold summaries and automatic-generated summaries, our model gives better results as compared to BiLSTM+self-attention. The latter integrates information from reviews and summaries only in the top representation layer, which is also the standard practice in question answering BIBREF25 and machine translation BIBREF26 models. In contrast, our model integrates summary information into the review representation in each layer, thereby allowing the integrated representation to be hierarchically refined, leading to more abstract hidden states. Finally, the fact that with gold summary, our baseline and final models outperforms the state-of-the-art methods by jointly training shows the importance of making use of user written summaries when they are available. Even with system summary, out models still outperforms HSSC and SAHSSC, showing that our network is more effective than parameter sharing under the same setting without input summaries. <<<Review Length>>> Figure FIGREF37 consists of line graphs on the accuracy of BiLSTM+self-attention, BiLSTM+pooling and our model against the review length. As the review length increases, the performance of all models decreases. BiLSTM+self-attention does not outperform BiLSTM+pooling on long text. Our method gives better results compared to two baseline models for long reviews, demonstrating that our model is effective for capturing long-term dependency. This is likely because hierarchically-refined attention maintains the most salient information while ignoring the redundant parts of the original review text. Our model can thus be more robust when review has irrelevant sentimental words, which usually exists in larger reviews such as the example in Figure FIGREF3. The hierarchical architecture allows the lower layers to encode local information, while the higher layers can capture long-term dependency and thus better encode global information. <<</Review Length>>> <<<Case Study>>> Our model has a natural advantage of interpretability thanks to the use of attention inference layer. We visualize the hierarchically-refined attention of two samples from the test set of Toys & Games. We also visualize self-attention distribution for fair comparison. To make the visualizations clear and to avoid confusion, we choose to visualize the most salient parts, by rescaling all attention weights into an interval of $[0, 100]$ and adopting 50 as a threshold for attention visualization, showing only attention weights $\ge 50$. As shown in Figure FIGREF38, the example with generated summary has 5 stars as its golden rating score. The summary text is “fun for the whole new game in all ages ! ! ! fun ! ! !", which suggests that the game is (1) fun (from word “fun") and (2) not difficult to learn (from phrase “all ages"). It can be seen that both the self-attention model and the first layer of our model attend to the strongly positive phrase “quite fun", which is relevant to the word “fun" in the summary. In comparisons the second layer attends to the phrase “much easier", which is relevant to the phrase “in all ages" in the summary. This verifies our model's effectiveness of leveraging abstractive summary information. Figure FIGREF38 illustrates a 5-star-rating example with golden summary. 
The summary text is “Favorite Game to Teach to Newbies". As shown in the heatmap, self-attention can only attend to some general sentimental words, such as “hard", “fun", “immensely" and “most", which deviates from the main idea of the document text. In comparison, the first layer of our model attends to phrases like “easy to teach", which is a perfect match of the phrase “teach to newbies" in the summary. This shows that the shallow sequence inference layer can learn direct similarity matching information under the supervision of summarization. In addition, the second layer of our model attends to phrases including “would recommend this to anyone", which links to “easy to teach" and “Teach to Newbies", showing that the deeper sequence inference layer of our model can learn potential connections between the review and the summary. <<</Case Study>>> <<</Results>>> <<</Experiments>>> <<<Conclusion>>> We investigated a hierarchically-refined attention network for better sentiment prediction. Our model allows multi-interaction between summary and review representation in a hierarchical manner. Empirical results show that the proposed method outperforms all strong baselines and previous work and achieves new state-of-the-art performance on SNAP Amazon Review dataset. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Experiments, Conclusion" ], "type": "disordered_section" }
1910.13890
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> A Latent Morphology Model for Open-Vocabulary Neural Machine Translation <<<Abstract>>> Translation into morphologically-rich languages challenges neural machine translation (NMT) models with extremely sparse vocabularies where atomic treatment of surface forms is unrealistic. This problem is typically addressed by either pre-processing words into subword units or performing translation directly at the level of characters. The former is based on word segmentation algorithms optimized using corpus-level statistics with no regard to the translation task. The latter learns directly from translation data but requires rather deep architectures. In this paper, we propose to translate words by modeling word formation through a hierarchical latent variable model which mimics the process of morphological inflection. Our model generates words one character at a time by composing two latent representations: a continuous one, aimed at capturing the lexical semantics, and a set of (approximately) discrete features, aimed at capturing the morphosyntactic function, which are shared among different surface forms. Our model achieves better accuracy in translation into three morphologically-rich languages than conventional open-vocabulary NMT methods, while also demonstrating a better generalization capacity under low to mid-resource settings. <<</Abstract>>> <<<Introduction>>> Neural machine translation (NMT) systems are conventionally trained based on the approach of maximizing the log-likelihood on a training corpus in order to learn distributed representations of words according to their sentence context, which is highly demanding in terms of training data as well as the network capacity. Under conditions of lexical sparsity, which may include the cases when the amount of training examples is insufficient to observe words in different context, and particularly in translation of morphologically-rich languages, where the same word can have exponentially many different surface realizations due to syntactic conditions, which are often rarely or ever observed in any set of collected examples, the model may suffer in learning accurate representations of words. The standard approach to overcome this limitation is to replace the word representations in the model with subword units that are shared among words, which are, in principle, more reliable as they are observed more frequently in varying context BIBREF0, BIBREF1. One drawback related to this approach, however, is that the estimation of the subword vocabulary relies on word segmentation methods optimized using corpus-dependent statistics, disregarding any linguistic notion and the translation objective, which may result in morphological errors during splitting, resulting in subword units that are semantically ambiguous as they might be used in far too many lexical contexts BIBREF2. Moreover, the words are generated predicting multiple subword units, which makes generalizing to unseen word forms more difficult, where some of the subword units that could be used to reconstruct a given word may be unlikely in the given context. 
To alleviate the sub-optimal effects of using explicit segmentation and generalize better to new morphological forms, recent studies explored the idea of extending the same approach to model translation directly at the level of characters BIBREF3, BIBREF4, which, in turn, have demonstrated the requirement of using comparably deeper networks, as the network would then need to learn longer distance grammatical dependencies BIBREF5. In this paper, we explore the benefit of explicitly modeling variations in the surface forms of words using methods from deep latent variable modeling in order to improve the translation accuracy in low-resource and morphologically-rich languages. Latent variable models allow us to inject inductive biases relevant to the task, which, in our case, is word formation, and we believe that follows a certain hierarchical procedure. Our model translates words one character at a time based on word representations learned compositionally from sub-lexical components, which are parameterized by a hierarchical latent variable model mimicking the process of morphological inflection, consisting of a continuous-space dense vector capturing the lexical semantics, and a set of (approximately) discrete features, representing the morphosyntactic role of the word in a given sentence. Each word representation during decoding is reformulated based on the shared latent morphological features, aiding in learning more reliable representations of words under sparse settings by generalizing across their different surface forms. We evaluate our method in translating English into three morphologically-rich languages each with a distinct morphological typology: Arabic, Czech and Turkish, and show that our model is able to obtain better translation accuracy and generalization capacity than conventional approaches to open-vocabulary NMT. <<</Introduction>>> <<<Evaluation>>> <<<Models>>> We evaluate our model by comparing it in machine translation against three baselines which constitute the conventional open-vocabulary NMT methods, including architectures using atomic parameterization either with subword units segmented with BPE BIBREF0 or characters, and the hierarchical parameterization method employed for generating all words in the output. We implement all architectures using Pytorch BIBREF6 within the OpenNMT-py framework BIBREF7. <<</Models>>> <<<Data and Languages>>> In order to evaluate our model we design two sets of experiments. The experiments in §SECREF8 aim to evaluate different methods under low-resource settings, for languages with different morphological typology. We model the machine translation task from English into three languages with distinct morphological characteristics: Arabic (templatic), Czech (fusional), and Turkish (agglutinative). We use the TED Talks corpora BIBREF8 for training the NMT models for these experiments. In §SECREF10, we conduct more experiments in Turkish to demonstrate the case of increased data sparsity using multi-domain training corpora, where we extend the training set using corpora from EU Bookshop BIBREF9, Global Voices, Gnome, Tatoeba, Ubuntu BIBREF10, KDE4 BIBREF11, Open Subtitles BIBREF12 and SETIMES BIBREF13. The statistical characteristics of the training sets are given in Tables TABREF16 and TABREF17. We use the official evaluation sets of the IWSLT for validating and testing the accuracy of the models. 
In order to increase the number of unknown and rare words in the evaluation sets we measure accuracy on large test sets combining evaluation sets from many years (Table TABREF18 presents the evaluation sets used for development and testing). The accuracy of each model output is measured using BLEU BIBREF15 and chrF3 BIBREF16 metrics, whereas the significance of the improvements are computed using bootstrap hypothesis testing BIBREF17. <<</Data and Languages>>> <<<Training Settings>>> All models are implemented using gated recurrent units (GRU) BIBREF18, and have a single-layer bi-RNN encoder. The source sides of the data used for training all NMT models, and the target sides of the data used in training the subword-level NMT models are segmented using BPE with 16,000 merge rules. We implement all decoders using a comparable number of GRU parameters, including 3-layer stacked-GRU subword and character-level decoders, where the attention is computed after the 1st layer BIBREF19 and a 3-layer hierarchical decoder which implements the attention mechanism after the 2nd layer. All models use an embedding dimension and GRU size of 512. The latent morphology model uses the same hierarchical GRU architecture, where the middle layer is augmented using 4 multi-layer perceptrons with 256 hidden units. We use a lemma vector dimension of 150, 10 inflectional features (See §SECREF21 for experiments conducted to tune the feature dimensions) and set the regularization constant to $\rho =0.4$. All models are trained using the Adam optimizer BIBREF20 with a batch size of 100, dropout rate of 0.2, learning rate of 0.0004 and learning rate decay of 0.8, applied when the perplexity does not decrease at a given epoch. Translations are generated with beam search with a beam size of 5, where the hierarchical models implement the hierarchical beam search BIBREF21. <<</Training Settings>>> <<<Results>>> <<<The Effect of Morphological Typology>>> The experiment results given in Table TABREF9 shows the performance of each model in translating English into Arabic, Czech and Turkish. In Turkish, the most sparse target language in our benchmark, using character-based decoding shows to be more advantageous compared to the subword-level and hierarchical models, due to the fact that reduced granularity in the vocabulary units might aid in better predicting words under conditions of high data sparsity. In Arabic, on the other hand, using a hierarchical decoding model shows to be advantageous compared to the character-level decoder, as it might be useful in better learning syntactic dependencies, whereas it also outperforms the subword-level decoder. Using the latent morphology model provides improvements of 0.51 and 0.30 BLEU points in Arabic and Turkish over the best performing baselines, respectively. The fact that our model can efficiently work in both Arabic and Turkish suggests that it can handle the generation of both concatenative and non-concatenative morphological transformations. The results in the English-to-Czech translation direction do not indicate a specific advantage of using either method for generating fusional morphology, where morphemes are already optimized at the surface level, although our model is still able to achieve translation accuracy comparable to the character-level model. 
<<</The Effect of Morphological Typology>>> <<<The Effect of Data Size>>> The experiment conducted in the English-to-Turkish translation direction by increasing the amount of training data with multi-domain corpora demonstrates a more challenging case, where there is a greater possibility of observing rare words, either in the form of morphological inflections due to the complex agglutinative morphology of Turkish, or ambiguous terminology raising from the multi-domain characteristics. In this experiment, the character-level model experiences a drop in performance and its accuracy is much lower than the subword-level one, suggesting that its capacity cannot cope with the increased amount of sparsity. Empirical results suggest that with increased capacity, character-level models carry the potential to reach comparable performance to subword-level models BIBREF4. Our model reaches a much larger improvement of 0.82 BLEU points over the subword-level and 2.54 BLEU points over the character-level decoders, suggesting that it could make use of the increased sparsity in learning more accurate representations. <<</The Effect of Data Size>>> <<<Predicting Unseen Words>>> In addition to general evaluation using automatic metrics, we perform a more focused analysis to illustrate the performance of different methods in predicting unseen words. We sample the sentences from the development sets which contain out-of-vocabulary words, and compute the average perplexity per character on these sentences using different NMT models, as suggested by BIBREF22. In general, the highest perplexities are obtained using the subword-based model, suggesting that generating unseen words using subword units is indeed increasing the difficulty of prediction, compared to the character-level which obtains the lowest perplexity. This result indicates that increased granularity aids in reducing the uncertainty during prediction. Similar to the results in §SECREF8, in Czech the values are almost comparable. Due to its stochastic nature, our model yields higher perplexity values compared to the hierarchical model, whereas the values range between subword and character-based models, possibly finding an optimal level of granularity between the two solutions. <<</Predicting Unseen Words>>> <<<Feature Variations>>> In order to understand whether the latent inflectional features in fact capture information about variations related to morphological transformations, we try generating different surface forms of the same lemma by assigning different values to the inflectional features. We use the latent morphology model based decoder to translate the English word `go', and after sampling the lemma, we fix its value and vary the values of the inflectional features at random positions for generating different outputs. Table TABREF14 presents different sets of feature values and the corresponding outputs generated by the decoder. The model generates different surface forms for different sets of features, confirming that latent variables encode information related to the infinitive form of the verb, as well as its formality conditions, prepositions, person, number and tense. We also observe that many trials based on different feature combinations may result in the same outputs, although some feature values may not be set in a single-word context. Varying the features individually does not necessarily yield distinct changes in the output, suggesting that some features may act jointly in determining the word form. 
<<</Feature Variations>>> <<</Results>>> <<</Evaluation>>> <<<Conclusion>>> In this paper, we presented a novel decoding architecture for NMT employing a hierarchical latent variable model to promote sparsity in lexical representations, which demonstrated promising applications for morphologically-rich and low-resource languages. Our model generates words one character at a time by composing two latent features representing their lemmas and inflectional features. We evaluate our model against conventional open-vocabulary NMT solutions such as subword and character-level decoding methods in translating English into three morphologically-rich languages with different morphological typologies under low to mid-resource settings. Our results show that our model can significantly outperform subword-level NMT models, while demonstrating better capacity than character-level models in coping with increased amounts of data sparsity. We also conduct ablation studies on the effect of feature variations on the predictions, which prove that, despite being completely unsupervised, our model can in fact capture morphosyntactic information and generalize to different surface forms of words. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Introduction" ], "type": "disordered_section" }
1910.13890
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> A Latent Morphology Model for Open-Vocabulary Neural Machine Translation <<<Abstract>>> Translation into morphologically-rich languages challenges neural machine translation (NMT) models with extremely sparse vocabularies where atomic treatment of surface forms is unrealistic. This problem is typically addressed by either pre-processing words into subword units or performing translation directly at the level of characters. The former is based on word segmentation algorithms optimized using corpus-level statistics with no regard to the translation task. The latter learns directly from translation data but requires rather deep architectures. In this paper, we propose to translate words by modeling word formation through a hierarchical latent variable model which mimics the process of morphological inflection. Our model generates words one character at a time by composing two latent representations: a continuous one, aimed at capturing the lexical semantics, and a set of (approximately) discrete features, aimed at capturing the morphosyntactic function, which are shared among different surface forms. Our model achieves better accuracy in translation into three morphologically-rich languages than conventional open-vocabulary NMT methods, while also demonstrating a better generalization capacity under low to mid-resource settings. <<</Abstract>>> <<<Introduction>>> Neural machine translation (NMT) systems are conventionally trained based on the approach of maximizing the log-likelihood on a training corpus in order to learn distributed representations of words according to their sentence context, which is highly demanding in terms of training data as well as the network capacity. Under conditions of lexical sparsity, which may include the cases when the amount of training examples is insufficient to observe words in different context, and particularly in translation of morphologically-rich languages, where the same word can have exponentially many different surface realizations due to syntactic conditions, which are often rarely or ever observed in any set of collected examples, the model may suffer in learning accurate representations of words. The standard approach to overcome this limitation is to replace the word representations in the model with subword units that are shared among words, which are, in principle, more reliable as they are observed more frequently in varying context BIBREF0, BIBREF1. One drawback related to this approach, however, is that the estimation of the subword vocabulary relies on word segmentation methods optimized using corpus-dependent statistics, disregarding any linguistic notion and the translation objective, which may result in morphological errors during splitting, resulting in subword units that are semantically ambiguous as they might be used in far too many lexical contexts BIBREF2. Moreover, the words are generated predicting multiple subword units, which makes generalizing to unseen word forms more difficult, where some of the subword units that could be used to reconstruct a given word may be unlikely in the given context. 
To alleviate the sub-optimal effects of using explicit segmentation and generalize better to new morphological forms, recent studies explored the idea of extending the same approach to model translation directly at the level of characters BIBREF3, BIBREF4, which, in turn, have demonstrated the requirement of using comparably deeper networks, as the network would then need to learn longer distance grammatical dependencies BIBREF5. In this paper, we explore the benefit of explicitly modeling variations in the surface forms of words using methods from deep latent variable modeling in order to improve the translation accuracy in low-resource and morphologically-rich languages. Latent variable models allow us to inject inductive biases relevant to the task, which, in our case, is word formation, and we believe that follows a certain hierarchical procedure. Our model translates words one character at a time based on word representations learned compositionally from sub-lexical components, which are parameterized by a hierarchical latent variable model mimicking the process of morphological inflection, consisting of a continuous-space dense vector capturing the lexical semantics, and a set of (approximately) discrete features, representing the morphosyntactic role of the word in a given sentence. Each word representation during decoding is reformulated based on the shared latent morphological features, aiding in learning more reliable representations of words under sparse settings by generalizing across their different surface forms. We evaluate our method in translating English into three morphologically-rich languages each with a distinct morphological typology: Arabic, Czech and Turkish, and show that our model is able to obtain better translation accuracy and generalization capacity than conventional approaches to open-vocabulary NMT. <<</Introduction>>> <<<Evaluation>>> <<<Models>>> We evaluate our model by comparing it in machine translation against three baselines which constitute the conventional open-vocabulary NMT methods, including architectures using atomic parameterization either with subword units segmented with BPE BIBREF0 or characters, and the hierarchical parameterization method employed for generating all words in the output. We implement all architectures using Pytorch BIBREF6 within the OpenNMT-py framework BIBREF7. <<</Models>>> <<<Data and Languages>>> In order to evaluate our model we design two sets of experiments. The experiments in §SECREF8 aim to evaluate different methods under low-resource settings, for languages with different morphological typology. We model the machine translation task from English into three languages with distinct morphological characteristics: Arabic (templatic), Czech (fusional), and Turkish (agglutinative). We use the TED Talks corpora BIBREF8 for training the NMT models for these experiments. In §SECREF10, we conduct more experiments in Turkish to demonstrate the case of increased data sparsity using multi-domain training corpora, where we extend the training set using corpora from EU Bookshop BIBREF9, Global Voices, Gnome, Tatoeba, Ubuntu BIBREF10, KDE4 BIBREF11, Open Subtitles BIBREF12 and SETIMES BIBREF13. The statistical characteristics of the training sets are given in Tables TABREF16 and TABREF17. We use the official evaluation sets of the IWSLT for validating and testing the accuracy of the models. 
In order to increase the number of unknown and rare words in the evaluation sets we measure accuracy on large test sets combining evaluation sets from many years (Table TABREF18 presents the evaluation sets used for development and testing). The accuracy of each model output is measured using BLEU BIBREF15 and chrF3 BIBREF16 metrics, whereas the significance of the improvements are computed using bootstrap hypothesis testing BIBREF17. <<</Data and Languages>>> <<<Training Settings>>> All models are implemented using gated recurrent units (GRU) BIBREF18, and have a single-layer bi-RNN encoder. The source sides of the data used for training all NMT models, and the target sides of the data used in training the subword-level NMT models are segmented using BPE with 16,000 merge rules. We implement all decoders using a comparable number of GRU parameters, including 3-layer stacked-GRU subword and character-level decoders, where the attention is computed after the 1st layer BIBREF19 and a 3-layer hierarchical decoder which implements the attention mechanism after the 2nd layer. All models use an embedding dimension and GRU size of 512. The latent morphology model uses the same hierarchical GRU architecture, where the middle layer is augmented using 4 multi-layer perceptrons with 256 hidden units. We use a lemma vector dimension of 150, 10 inflectional features (See §SECREF21 for experiments conducted to tune the feature dimensions) and set the regularization constant to $\rho =0.4$. All models are trained using the Adam optimizer BIBREF20 with a batch size of 100, dropout rate of 0.2, learning rate of 0.0004 and learning rate decay of 0.8, applied when the perplexity does not decrease at a given epoch. Translations are generated with beam search with a beam size of 5, where the hierarchical models implement the hierarchical beam search BIBREF21. <<</Training Settings>>> <<<Results>>> <<<The Effect of Morphological Typology>>> The experiment results given in Table TABREF9 shows the performance of each model in translating English into Arabic, Czech and Turkish. In Turkish, the most sparse target language in our benchmark, using character-based decoding shows to be more advantageous compared to the subword-level and hierarchical models, due to the fact that reduced granularity in the vocabulary units might aid in better predicting words under conditions of high data sparsity. In Arabic, on the other hand, using a hierarchical decoding model shows to be advantageous compared to the character-level decoder, as it might be useful in better learning syntactic dependencies, whereas it also outperforms the subword-level decoder. Using the latent morphology model provides improvements of 0.51 and 0.30 BLEU points in Arabic and Turkish over the best performing baselines, respectively. The fact that our model can efficiently work in both Arabic and Turkish suggests that it can handle the generation of both concatenative and non-concatenative morphological transformations. The results in the English-to-Czech translation direction do not indicate a specific advantage of using either method for generating fusional morphology, where morphemes are already optimized at the surface level, although our model is still able to achieve translation accuracy comparable to the character-level model. 
<<</The Effect of Morphological Typology>>> <<<The Effect of Data Size>>> The experiment conducted in the English-to-Turkish translation direction by increasing the amount of training data with multi-domain corpora demonstrates a more challenging case, where there is a greater possibility of observing rare words, either in the form of morphological inflections due to the complex agglutinative morphology of Turkish, or ambiguous terminology raising from the multi-domain characteristics. In this experiment, the character-level model experiences a drop in performance and its accuracy is much lower than the subword-level one, suggesting that its capacity cannot cope with the increased amount of sparsity. Empirical results suggest that with increased capacity, character-level models carry the potential to reach comparable performance to subword-level models BIBREF4. Our model reaches a much larger improvement of 0.82 BLEU points over the subword-level and 2.54 BLEU points over the character-level decoders, suggesting that it could make use of the increased sparsity in learning more accurate representations. <<</The Effect of Data Size>>> <<<Predicting Unseen Words>>> In addition to general evaluation using automatic metrics, we perform a more focused analysis to illustrate the performance of different methods in predicting unseen words. We sample the sentences from the development sets which contain out-of-vocabulary words, and compute the average perplexity per character on these sentences using different NMT models, as suggested by BIBREF22. In general, the highest perplexities are obtained using the subword-based model, suggesting that generating unseen words using subword units is indeed increasing the difficulty of prediction, compared to the character-level which obtains the lowest perplexity. This result indicates that increased granularity aids in reducing the uncertainty during prediction. Similar to the results in §SECREF8, in Czech the values are almost comparable. Due to its stochastic nature, our model yields higher perplexity values compared to the hierarchical model, whereas the values range between subword and character-based models, possibly finding an optimal level of granularity between the two solutions. <<</Predicting Unseen Words>>> <<<Feature Variations>>> In order to understand whether the latent inflectional features in fact capture information about variations related to morphological transformations, we try generating different surface forms of the same lemma by assigning different values to the inflectional features. We use the latent morphology model based decoder to translate the English word `go', and after sampling the lemma, we fix its value and vary the values of the inflectional features at random positions for generating different outputs. Table TABREF14 presents different sets of feature values and the corresponding outputs generated by the decoder. The model generates different surface forms for different sets of features, confirming that latent variables encode information related to the infinitive form of the verb, as well as its formality conditions, prepositions, person, number and tense. We also observe that many trials based on different feature combinations may result in the same outputs, although some feature values may not be set in a single-word context. Varying the features individually does not necessarily yield distinct changes in the output, suggesting that some features may act jointly in determining the word form. 
<<</Feature Variations>>> <<</Results>>> <<</Evaluation>>> <<<Conclusion>>> In this paper, we presented a novel decoding architecture for NMT employing a hierarchical latent variable model to promote sparsity in lexical representations, which demonstrated promising applications for morphologically-rich and low-resource languages. Our model generates words one character at a time by composing two latent features representing their lemmas and inflectional features. We evaluate our model against conventional open-vocabulary NMT solutions such as subword and character-level decoding methods in translating English into three morphologically-rich languages with different morphological typologies under low to mid-resource settings. Our results show that our model can significantly outperform subword-level NMT models, while demonstrating better capacity than character-level models in coping with increased amounts of data sparsity. We also conduct ablation studies on the effect of feature variations on the predictions, which prove that, despite being completely unsupervised, our model can in fact capture morphosyntactic information and generalize to different surface forms of words. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Introduction" ], "type": "disordered_section" }
1909.01492
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation <<<Abstract>>> Neural networks are part of many contemporary NLP systems, yet their empirical successes come at the price of vulnerability to adversarial attacks. Previous work has used adversarial training and data augmentation to partially mitigate such brittleness, but these are unlikely to find worst-case adversaries due to the complexity of the search space arising from discrete text perturbations. In this work, we approach the problem from the opposite direction: to formally verify a system's robustness against a predefined class of adversarial attacks. We study text classification under synonym replacements or character flip perturbations. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation -- a formal model verification method. We modify the conventional log-likelihood training objective to train models that can be efficiently verified, which would otherwise come with exponential search complexity. The resulting models show only little difference in terms of nominal accuracy, but have much improved verified accuracy under perturbations and come with an efficiently computable formal guarantee on worst case adversaries. <<</Abstract>>> <<<Introduction>>> Deep models have been shown to be vulnerable against adversarial input perturbations BIBREF0, BIBREF1. Small, semantically invariant input alterations can lead to drastic changes in predictions, leading to poor performance on adversarially chosen samples. Recent work BIBREF2, BIBREF3, BIBREF4 also exposed the vulnerabilities of neural NLP models, e.g. with small character perturbations BIBREF5 or paraphrases BIBREF6, BIBREF7. These adversarial attacks highlight often unintuitive model failure modes and present a challenge to deploying NLP models. Common attempts to mitigate the issue are adversarial training BIBREF5 and data augmentation BIBREF3, BIBREF8, which lead to improved accuracy on adversarial examples. However, this might cause a false sense of security, as there is generally no guarantee that stronger adversaries could not circumvent defenses to find other successful attacks BIBREF9, BIBREF10, BIBREF11. Rather than continuing the race with adversaries, formal verification BIBREF12, BIBREF13, BIBREF14 offers a different approach: it aims at providing provable guarantees to a given model specification. In the case of adversarial robustness, such a specification can be formulated as prediction consistency under any altered – but semantically invariant – input change. In this paper, we study verifiable robustness, i.e., providing a certificate that for a given network and test input, no attack or perturbation under the specification can change predictions, using the example of text classification tasks, Stanford Sentiment Treebank (SST) BIBREF15 and AG News BIBREF16. The specification against which we verify is that a text classification model should preserve its prediction under character (or synonym) substitutions in a character (or word) based model. 
We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation (IBP) BIBREF17, BIBREF18, BIBREF19 to compute worst case bounds on specification satisfaction, as illustrated in Figure FIGREF1. Since these bounds can be computed efficiently, we can furthermore derive an auxiliary objective for models to become verifiable. The resulting classifiers are efficiently verifiable and improve robustness on adversarial examples, while maintaining comparable performance in terms of nominal test accuracy. The contributions of this paper are twofold: To the best of our knowledge, this paper is the first to introduce verification and verifiable training for neural networks in natural language processing (§SECREF3). Through a series of experiments (§SECREF4), we demonstrate (a) the effectiveness of modeling input perturbations as a simplex and using simplex bounds with IBP for training and testing, (b) the weakness of adversarial training under exhaustive verification, (c) the effects of perturbation space on the performance of different methods, and (d) the impact of using GloVe and counter-fitted embeddings on the IBP verification bounds. <<</Introduction>>> <<<Related Work>>> <<<Adversarial Examples in NLP.>>> Creating adversarial examples for NLP systems requires identifying semantically invariant text transformations to define an input perturbation space. In this paper, given our specification, we study word- and character-level HotFlip attacks BIBREF5 – which consist of character and synonym replacements – on text classification tasks. We compare our verifiable approach to other defenses including adversarial training BIBREF20 and data augmentation BIBREF8, BIBREF3. Note that some existing adversarial perturbations such as syntactically controlled paraphrasing BIBREF7, exploiting backtranslation systems BIBREF6, or using targeted keyword attack BIBREF21 are beyond the specification in this paper. <<</Adversarial Examples in NLP.>>> <<<Formal Verification of Neural Networks.>>> Formal verification provides a provable guarantee that models are consistent with a specification for all possible model inputs. Previous work can be categorised into complete methods that use Mixed-Integer Programming (MIP) BIBREF22, BIBREF23 or Satisfiability Modulo Theory (SMT) BIBREF14, BIBREF24, and incomplete methods that solve a convex relaxation of the verification problem BIBREF25, BIBREF26, BIBREF27. Complete methods perform exhaustive enumeration to find the worst case. Hence, complete methods are expensive and difficult to scale, though they provide exact robustness bounds. Incomplete methods provide loose robustness bounds, but can be more scalable and used inside the training loop for training models to be robust and verifiable BIBREF28, BIBREF26, BIBREF19, BIBREF17. Our work is the first to extend incomplete verification to text classification, considering input perturbations on a simplex and minimising worst case bounds to adversarial attacks in text classification. We highlight that the verification of neural networks is an extremely challenging task, and that scaling complete and incomplete methods to large models remains an open challenge. <<</Formal Verification of Neural Networks.>>> <<<Representations of Combinatorial Spaces.>>> Word lattices and hypergraphs are data structures that have often been used to efficiently represent and process exponentially large numbers of sentences without exhaustively enumerating them. 
Applications include automatic speech recognition (ASR) output rescoring BIBREF29, machine translation of ASR outputs BIBREF30, paraphrase variants BIBREF31, and word segmentation alternatives BIBREF32. The specifications used to characterise the space of adversarial attacks are likewise a compact representation, and the algorithms discussed below operate on them without exhaustive enumeration. <<</Representations of Combinatorial Spaces.>>> <<</Related Work>>> <<<Methodology>>> We assume a fixed initial vector representation $\mathbf {z} _0$ of a given input sentence $z$ (e.g. the concatenation of pretrained word embeddings) and use a neural network model, i.e. a series of differentiable transformations $h_k$: where $\mathbf {z} _k$ is the vector of activations in the $k$-th layer and the final output $\mathbf {z} _K$ consists of the logits for each class. Typically each $h_k$ will be an affine transformation followed by an activation function (e.g. ReLU or sigmoid). The affine transformation can be a convolution (with the inputs and outputs having an implied 2D structure) of a vector of activations at each point in a sequence; in what follows these activations will be concatenated along the sequence to form a vector $\mathbf {z} _k$. <<<Verification>>> Verification is the process of examining whether the output of a model satisfies a given specification. Formally, this means establishing whether the following holds true for a given normal model input $\mathbf {x} _0$: $\forall \mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0):~ \mathbf {z} _K \in \mathcal {X}_\mathrm {out}$, where $\mathcal {X}_\mathrm {out}$ characterizes a constraint on the outputs, and $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ defines a neighbourhood of $\mathbf {x} _0$ throughout which the constraint should be satisfied. In our concrete use case, we consider a specification of robustness against adversarial attacks which are defined by bounded input perturbations (synonym flips up to $\delta $ words, or character flips up to $\delta $ characters) of the original sentence $x$. The attack space $\mathcal {X}_\mathrm {in} (\mathbf {x} _0)$ is the set of vector representations (embeddings) of all such perturbed sentences. Denoting by $z_{K,y}$ the logit of label $y$, we formulate the output constraint that for all classes $y: z_{K,y_\textrm {true}} \ge z_{K,y}$. This specification establishes that the prediction of all perturbed sentences $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ should correspond to the correct label $y_\textrm {true}$. This specification may equivalently be formulated as a set of half-space constraints on the logits: for each class $y$ where $\mathbf {e}_{i}$ is a one-hot vector with 1 in the $i$-th position. In other words, the true class logit should be greater or equal than those for all other classes $y$, which means the prediction remains constant. <<</Verification>>> <<<Verification as Optimisation>>> Verifying the specification in Eq. (DISPLAY_FORM10) can be done by solving the following constrained optimisation problem to find the input that would most strongly violate it: where $\mathbf {c} $ is a vector with entries $c_y = 1$, $c_{y_\textrm {true}} = -1$ and 0 everywhere else. If the optimal value of the above optimisation problem is smaller than 0, then the specification in Eq. (DISPLAY_FORM10) is satisfied, otherwise a counter-example has been found. In our case, this corresponds to a successful adversarial attack. 
<<</Verification as Optimisation>>> <<<Modeling Input Perturbations using Simplices>>> In the interests of computational feasibility, we will actually attempt to verify the specification on a larger, but more tractable input perturbation space $\bar{\mathcal {X}}_\mathrm {in} \supseteq \mathcal {X}_\mathrm {in}$. Any data point that is verifiable on this larger input perturbation space is necessarily verifiable with respect to the original specification. In the domain of image classification, $\mathcal {X}_\mathrm {in}$ is often modeled as an $L_\infty $-ball, corresponding to input perturbations in which each pixel may be independently varied within a small interval. However, using such interval bounds is unsuitable for our situation of perturbations consisting of a small number $\delta $ of symbol substitutions. Although we could construct an axis-aligned bounding box $\bar{\mathcal {X}}_\mathrm {in}$ in embedding space that encompasses all of $\mathcal {X}_\mathrm {in}$, it would over-approximate the perturbation space to such an extent that it would contain perturbations where all symbols in the sentence have been substituted simultaneously. To remedy this, we propose a tighter over-approximation in the form of a `simplex' in embedding space. We first define this for the special case $\delta =1$, in which $\mathcal {X}_\mathrm {in} = \lbrace \mathbf {x} _0\rbrace \cup \lbrace \mathbf {p} ^{(m)}_0 : 1\le m\le M\rbrace $ consists of the representations of all $M$ sentences $p^{(m)}$ derived from $x$ by performing a single synonym (or character) substitution, together with the unperturbed sentence $x$ itself. In this case we define $\bar{\mathcal {X}}_\mathrm {in}$ to be the convex hull $\mathcal {S}_1$ of $\mathcal {X}_\mathrm {in}$. Note we are not considering contextual embeddings BIBREF33 here. Each `vertex' $\mathbf {p} ^{(m)}_0$ is a sequence of embedding vectors that differs from $\mathbf {x} _0$ at only one word (or character) position. For a larger perturbation radius $\delta >1$, the cardinality of $\mathcal {X}_\mathrm {in}$ grows exponentially, so manipulating its convex hull becomes infeasible. However, dilating $\mathcal {S}_1$ centered at $\mathbf {x} _0$, scaling it up by a factor of $\delta $, yields a simplex $\mathcal {S}_\delta $ with $M+1$ vertices that contains $\mathcal {X}_\mathrm {in}$. More formally, we define a region in the input embedding space based on the $M$ `elementary' perturbations $\lbrace \mathbf {p} ^{(m)}_0: m = 1 \ldots M\rbrace $ of $\mathbf {x} _0$ defined earlier for the $\delta =1$ case. For perturbations of up to $\delta $ substitutions, we define $\bar{\mathcal {X}}_\mathrm {in}(\mathbf {x} _0)$ as the convex hull of $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $, where $\mathbf {z} ^{(0)}_0=\mathbf {x} _0$ denotes the original (unperturbed) sentence representation and, for $m\ge 1$, $\mathbf {z} ^{(m)}_0 = \mathbf {x} _0+\delta \cdot (\mathbf {p} ^{(m)}_0-\mathbf {x} _0)$. The convex hull is an over-approximation of $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$: it contains the representations of all sentences derived from $x$ by performing up to $\delta $ substitutions at distinct word (or character) positions. <<</Modeling Input Perturbations using Simplices>>> <<<Interval Bound Propagation>>> To estimate the optimal value of the problem (DISPLAY_FORM12), given an input $\mathbf {z} _0$, we can propagate the upper/lower bounds on the activations $\mathbf {z} _k$ of each layer using interval arithmetic BIBREF17. 
We begin by computing interval bounds on the first layer's activations. Recall that any input $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}$ will lie within the convex hull of certain vertices $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $. Then, assuming that the first layer $h_1$ is an affine transformation (e.g. linear or convolutional) followed by a monotonic activation function, the lower and upper bounds on the components $z_{1,i}$ of the first layer's activations $\mathbf {z} _1$ are as follows: Note that these bounds are efficient to compute (by passing each perturbation $\mathbf {z} ^{(m)}_0$ through the first layer); in particular there is no need to compute the convex hull polytope. For subsequent layers $k>1$, the bounds on the components $z_{k,i}$ of $\mathbf {z} _k$ are: The above optimisation problems can be solved in closed form quickly for affine layers and monotonic activation functions, as illustrated in IBP. Finally, the lower and upper bounds of the output logits $\mathbf {z} _K$ can be used to construct an upper bound on the solution of (DISPLAY_FORM12): <<<Verifiable Training.>>> The upper bound in (DISPLAY_FORM17) is fast to compute (only requires two forward passes for upper and lower bounds through the network). Hence, we can define a loss to optimise models such that the models are trained to be verifiable. Solving (DISPLAY_FORM17) is equivalent to finding the worst-case logit difference, and this is achieved when the logit of the true class is equal to its lower bound, and all other logits equal to their upper bounds. Concretely, for each class $y \ne y_\textrm {true} $: $\hat{\mathbf {z}}_{K,y}(\delta ) = \overline{\mathbf {z}}_{K,y} (\delta ) $, and $\hat{\mathbf {z}}_{K,y_\textrm {true}}(\delta ) = \underline{\mathbf {z}}_{K,y_\textrm {true}} (\delta ) $. The training loss can then be formulated as where $\ell $ is the cross-entropy loss, $\kappa $ a hyperparameter that controls the relative weights between the classification loss $L_\textrm {normal}$ and specification loss $L_\textrm {spec}$. If $\delta = 0$ then $\mathbf {z} _K = \hat{\mathbf {z}}_K(\delta )$, and thus $L$ reduces to a standard classification loss. Empirically, we found that a curriculum-based training, starting with $\kappa $=1 and linearly decreasing to 0.25, is effective for verifiable training. <<</Verifiable Training.>>> <<</Interval Bound Propagation>>> <<</Methodology>>> <<<Experiments>>> We conduct verification experiments on two text classification datasets, Stanford Sentiment Treebank (SST) BIBREF15 and AG News corpus, processed in BIBREF16. We focus on word-level and character-level experiments on SST and character-level experiments on AG News. Our specification is that models should preserve their prediction against up to $\delta $ synonym substitutions or character typos, respectively. <<<A Motivating Example>>> We provide an example from Table TABREF29 to highlight different evaluation metrics and training methods. Given a sentence, “you ' ve seen them a million times .”, that is predicted correctly (called Nominal Accuracy) by a classification model, we want to further examine whether the model is robust against character typos (e.g., up to $\delta =3$ typos) to this example. One way is to use some heuristic to search for a valid example with up to 3 typos that can change the prediction the most (called adversarial example). We evaluate the model using this adversarial example and report the performance (called Adversarial Accuracy). 
However, even if the adversarial example is predicted correctly, one can still ask: is the model truly robust against any typos (up to 3) to this example? In order to have a certificate that the prediction will not change under any $\delta =3$ character typos (called verifiably robust), we could in theory exhaustively search over all possible cases and check whether any of the predictions is changed (called Oracle Accuracy). If we only allow a character to be replaced by another character nearby on the keyboard, already for this short sentence we need to exhaustively search over 2,951 possible perturbations. To avoid this combinatorial growth, we can instead model all possible perturbations using the proposed simplex bounds and propagate the bounds through IBP at the cost of two forward passes. Following Eq. (DISPLAY_FORM12), we can check whether this example can be verified to be robust against all perturbations (called IBP-Verified Accuracy). There are also a number of ways in which the training procedure can be enhanced to improve the verifiable robustness of a model against typos to the sentence. The baseline is to train the model with the original/normal sentence directly (called Normal Training). Another way is to randomly sample typo sentences among the 2,951 possible perturbations and add these sentences to the training data (called Data Augmentation Training). Yet another way is to find, at each training iteration, the adversarial example among the (subset of) 2,951 possible perturbations that can change the prediction the most; we then use the adversarial example alongside the training example (called Adversarial Training). Finally, as simplex bounds with IBP is efficient to run, we can train a model to be verifiable by minimising Eq. (DISPLAY_FORM19) (called Verifiable Training). <<</A Motivating Example>>> <<<Baselines>>> In this section we detail our baseline models. <<<Adversarial Training.>>> In adversarial training BIBREF34, BIBREF20, the goal is to optimise the following saddle point problem: where the inner maximisation problem is to find an adversarial perturbation $\mathbf {z} _0\in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ that can maximise the loss. In the inner maximisation problem, we use HotFlip BIBREF5 with perturbation budget $\delta $ to find the adversarial example. The outer minimisation problem aims to update model parameters such that the adversarial risk of (DISPLAY_FORM24) is minimised. To balance between the adversarial robustness and nominal accuracy, we use an interpolation weight of 0.5 between the original cross-entropy loss and the adversarial risk. <<</Adversarial Training.>>> <<<Data Augmentation Training.>>> In the data augmentation setup, we randomly sample a valid perturbation $z$ with perturbation budget $\delta $ from a normal input $x$, and minimise the cross-entropy loss given the perturbed sample $z$ (denoted as data augmentation loss). We also set the interpolation weight between the data augmentation loss and the original normal cross-entropy loss to 0.5. <<</Data Augmentation Training.>>> <<<Normal Training.>>> In normal training, we use the likelihood-based training using the normal training input $x$. <<</Normal Training.>>> <<</Baselines>>> <<<Setup>>> We use a shallow convolutional network with a small number of fully-connected layers for SST and AG News experiments. The detailed model architectures and hyperparameter details are introduced in the supplementary material. 
Although we use shallow models for ease of verifiable training, our nominal accuracy is on par with previous work such as BIBREF15 (85.4%) and BIBREF35 (84.3%) in SST and BIBREF16 (87.18%) in AG News. During training, we set the maximum number of perturbations to $\delta =3$, and evaluate performance with the maximum number of perturbations from $\delta =1$ to 6 at test time. For word-level experiments, we construct the synonym pairs using the PPDB database BIBREF36 and filter the synonyms with fine-grained part-of-speech tags using Spacy BIBREF37. For character-level experiments, we use synthetic keyboard typos from BIBREF3, and allow one possible alteration per character that is adjacent to it on an American keyboard. The allowable input perturbation space is much larger than for word-level synonym substitutions, as shown in Table TABREF48. <<</Setup>>> <<<Evaluation Metrics>>> We use the following four metrics to evaluate our models: i) test set accuracy (called Acc.), ii) adversarial test accuracy (called Adv. Acc.), which uses samples generated by HotFlip attacks on the original test examples, iii) verifiable accuracy under IBP verification (called IBP-verified), that is, the ratio of test samples for which IBP can verify that the specification is not violated, and iv) exhaustively verified accuracy (called Oracle), computed by enumerating all possible perturbations given the perturbation budget $\delta $, where a sample is verifiably robust if the prediction is unchanged under all valid perturbations. <<</Evaluation Metrics>>> <<<Results>>> Table TABREF28 shows the results of IBP training and baseline models under $\delta =3$ and $\delta =2$ perturbations on SST and AG News, respectively. Figures FIGREF31 and FIGREF36 show the character- and word-level results with $\delta $ between 1 and 6 under four metrics on the SST test set; similar figures for SST word-level (adversarial training, data augmentation) models and AG News dataset can be found in the supplementary material. <<<Oracle Accuracy and Adversarial Accuracy.>>> In Table TABREF28, comparing adversarial accuracy with exhaustive verification accuracy (oracle), we observe that although adversarial training is effective at defending against HotFlip attacks (74.9 / 76.8 / 85.5%), the oracle adversarial accuracy under exhaustive testing (25.8 / 74.6 / 81.6%) is much lower in SST-character / SST-word / AG-character level, respectively. For illustration, we show some concrete adversarial examples from the HotFlip attack in Table TABREF29. For some samples, even though the model is robust with respect to HotFlip attacks, its predictions are incorrect for stronger adversarial examples obtained using the exhaustive verification oracle. This underscores the need for verification, as robustness with respect to suboptimal adversarial attacks alone might give a false sense of security. <<</Oracle Accuracy and Adversarial Accuracy.>>> <<<Effectiveness of Simplex Bounds with IBP.>>> Rather than sampling individual points from the perturbation space, IBP training covers the full space at once. The resulting models achieve the highest exhaustively verified accuracy at the cost of only moderate deterioration in nominal accuracy (Table TABREF28). At test time, IBP allows for constant-time verification with arbitrary $\delta $, whereas exhaustive verification requires evaluation over an exponentially growing search space. 
<<</Effectiveness of Simplex Bounds with IBP.>>> <<<Perturbation Space Size.>>> In Table TABREF28, when the perturbation space is larger (SST character-level vs. SST word-level), (a) across models, there is a larger gap in adversarial accuracy and true robustness (oracle); (b) the difference in oracle robustness between IBP and adversarial training is even larger (73.1% vs. 25.8% and 76.5% vs. 74.6%). <<</Perturbation Space Size.>>> <<<Perturbation Budget.>>> In Figures FIGREF31 and FIGREF36, we compare normal training, adversarial training, data augmentation, and verifiable training models with four metrics under various perturbation budgets on the SST dataset. Overall, as the perturbation budget increases, the adversarial accuracy, oracle accuracy, and IBP-verified accuracy decrease. We can observe that even for large perturbation budgets, verifiably trained models are still able to verify a sizable number of samples. Again, although adversarial accuracy flattens for larger perturbation budgets in the word level experiments, oracle verification can further find counterexamples to change the prediction. Note that exhaustive verification becomes intractable with large perturbation sizes. <<</Perturbation Budget.>>> <<<Computational Cost of Exhaustive Verification.>>> The perturbation space in NLP problems is discrete and finite, and a valid option to verify the specification is to exhaustively generate predictions for all $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in} (\mathbf {x} _0)$, and then check if at least one does not match the correct label. Conversely, such an exhaustive (oracle) approach can also identify the strongest possible attack. But the size of $\mathcal {X}_\mathrm {in}$ grows exponentially with $\delta $, and exhaustive verification quickly becomes prohibitively expensive. In Table TABREF48, we show the maximum perturbation space size in the SST and AG News test set for different perturbation radii $\delta $. This number grows exponentially as $\delta $ increases. To further illustrate this, Figure FIGREF49 shows the number of forward passes required to verify a given proportion of the SST test set for an IBP-trained model using exhaustive verification and IBP verification. IBP reaches verification levels comparable to an exhaustive verification oracle, but requires only two forward passes to verify any sample – one pass for computing the upper, and one for the lower bounds. Exhaustive verification, on the other hand, requires several orders of magnitude more forward passes, and there is a tail of samples with extremely large attack spaces. <<</Computational Cost of Exhaustive Verification.>>> <<</Results>>> <<<Counter-Fitted Embeddings>>> As shown in Figures FIGREF31 and FIGREF36, although IBP can verify arbitrary networks in theory, the verification bound is very loose except for models trained to be IBP-verifiable. One possible reason is the potentially large volume of the perturbation simplex. Since representations of substitution words/characters are not necessarily close to those of synonyms/typos in embedding space, the vertices of the simplex could be far apart, and thus cover a large area in representation space. Therefore, when propagating the interval bounds through the network, the interval bounds become too loose and fail to verify most of the examples if the models are not specifically trained. 
To test this hypothesis, we follow BIBREF38 and use fine-tuned GloVe embeddings trained to respect linguistic constraints; these representations (called counter-fitted embeddings) force synonyms to be closer and antonyms to be farther apart using word pairs from the PPDB database BIBREF36 and WordNet BIBREF39. We repeat the word level experiments with these counter-fitted embeddings, Figures FIGREF36 and FIGREF36 show the experimental results. We observe that IBP verified accuracy is now substantially higher across models, especially for $\delta =1, 2, 3$. The examples which IBP can verify increase by up to 33.2% when using the counter-fitted embeddings (normal training, $\delta =1$). Moreover, adversarial and exhaustively verified accuracy are also improved, at the cost of a mild deterioration in nominal test accuracy. The IBP-trained model also further improves both its oracle accuracy and IBP verified accuracy. These results validate our hypothesis that reducing the simplex volume via soft linguistic constraints can provide even tighter bounds for IBP, resulting in larger proportions of verifiable samples. <<</Counter-Fitted Embeddings>>> <<</Experiments>>> <<<Discussion>>> Our experiments indicate that adversarial attacks are not always the worst adversarial inputs, which can only be revealed via verification. On the other hand, exhaustive verification is computationally very expensive. Our results show that using the proposed simplex bounds with IBP can verify a sizable amount of test samples, and can be considered a potent verification method in an NLP context. We note however two limitations within the scope of this work: i) limited model depth: we only investigated models with few layers. IBP bounds are likely to become looser as the number of layers increases. ii) limited model types: we only studied models with CNN and fully connected layers. We focused on the HotFlip attack to showcase specification verification in the NLP context, with the goal of understanding factors that impact its effectiveness (e.g. the perturbation space volume, see Section SECREF50). It is worth noting that symbol substitution is general enough to encompass other threat models such as lexical entailment perturbations BIBREF40, and could potentially be extended to the addition of pre/postfixes BIBREF2, BIBREF41. Interesting directions of future work include: tightening IBP bounds to allow applicability to deeper models, investigating bound propagation in other types of neural architectures (e.g. those based on recurrent networks or self-attention), and exploring other forms of specifications in NLP. <<</Discussion>>> <<<Conclusion>>> We introduced formal verification of text classification models against synonym and character flip perturbations. Through experiments, we demonstrated the effectiveness of the proposed simplex bounds with IBP both during training and testing, and found weaknesses of adversarial training compared with exhaustive verification. Verifiably trained models achieve the highest exhaustive verification accuracy on SST and AG News. IBP verifies models in constant time, which is exponentially more efficient than naive verification via exhaustive search. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Introduction" ], "type": "disordered_section" }
1909.01492
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation <<<Abstract>>> Neural networks are part of many contemporary NLP systems, yet their empirical successes come at the price of vulnerability to adversarial attacks. Previous work has used adversarial training and data augmentation to partially mitigate such brittleness, but these are unlikely to find worst-case adversaries due to the complexity of the search space arising from discrete text perturbations. In this work, we approach the problem from the opposite direction: to formally verify a system's robustness against a predefined class of adversarial attacks. We study text classification under synonym replacements or character flip perturbations. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation -- a formal model verification method. We modify the conventional log-likelihood training objective to train models that can be efficiently verified, which would otherwise come with exponential search complexity. The resulting models show only little difference in terms of nominal accuracy, but have much improved verified accuracy under perturbations and come with an efficiently computable formal guarantee on worst case adversaries. <<</Abstract>>> <<<Introduction>>> Deep models have been shown to be vulnerable against adversarial input perturbations BIBREF0, BIBREF1. Small, semantically invariant input alterations can lead to drastic changes in predictions, leading to poor performance on adversarially chosen samples. Recent work BIBREF2, BIBREF3, BIBREF4 also exposed the vulnerabilities of neural NLP models, e.g. with small character perturbations BIBREF5 or paraphrases BIBREF6, BIBREF7. These adversarial attacks highlight often unintuitive model failure modes and present a challenge to deploying NLP models. Common attempts to mitigate the issue are adversarial training BIBREF5 and data augmentation BIBREF3, BIBREF8, which lead to improved accuracy on adversarial examples. However, this might cause a false sense of security, as there is generally no guarantee that stronger adversaries could not circumvent defenses to find other successful attacks BIBREF9, BIBREF10, BIBREF11. Rather than continuing the race with adversaries, formal verification BIBREF12, BIBREF13, BIBREF14 offers a different approach: it aims at providing provable guarantees to a given model specification. In the case of adversarial robustness, such a specification can be formulated as prediction consistency under any altered – but semantically invariant – input change. In this paper, we study verifiable robustness, i.e., providing a certificate that for a given network and test input, no attack or perturbation under the specification can change predictions, using the example of text classification tasks, Stanford Sentiment Treebank (SST) BIBREF15 and AG News BIBREF16. The specification against which we verify is that a text classification model should preserve its prediction under character (or synonym) substitutions in a character (or word) based model. 
We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation (IBP) BIBREF17, BIBREF18, BIBREF19 to compute worst case bounds on specification satisfaction, as illustrated in Figure FIGREF1. Since these bounds can be computed efficiently, we can furthermore derive an auxiliary objective for models to become verifiable. The resulting classifiers are efficiently verifiable and improve robustness on adversarial examples, while maintaining comparable performance in terms of nominal test accuracy. The contributions of this paper are twofold: To the best of our knowledge, this paper is the first to introduce verification and verifiable training for neural networks in natural language processing (§SECREF3). Through a series of experiments (§SECREF4), we demonstrate (a) the effectiveness of modeling input perturbations as a simplex and using simplex bounds with IBP for training and testing, (b) the weakness of adversarial training under exhaustive verification, (c) the effects of perturbation space on the performance of different methods, and (d) the impact of using GloVe and counter-fitted embeddings on the IBP verification bounds. <<</Introduction>>> <<<Related Work>>> <<<Adversarial Examples in NLP.>>> Creating adversarial examples for NLP systems requires identifying semantically invariant text transformations to define an input perturbation space. In this paper, given our specification, we study word- and character-level HotFlip attacks BIBREF5 – which consist of character and synonym replacements – on text classification tasks. We compare our verifiable approach to other defenses including adversarial training BIBREF20 and data augmentation BIBREF8, BIBREF3. Note that some existing adversarial perturbations such as syntactically controlled paraphrasing BIBREF7, exploiting backtranslation systems BIBREF6, or using targeted keyword attack BIBREF21 are beyond the specification in this paper. <<</Adversarial Examples in NLP.>>> <<<Formal Verification of Neural Networks.>>> Formal verification provides a provable guarantee that models are consistent with a specification for all possible model inputs. Previous work can be categorised into complete methods that use Mixed-Integer Programming (MIP) BIBREF22, BIBREF23 or Satisfiability Modulo Theory (SMT) BIBREF14, BIBREF24, and incomplete methods that solve a convex relaxation of the verification problem BIBREF25, BIBREF26, BIBREF27. Complete methods perform exhaustive enumeration to find the worst case. Hence, complete methods are expensive and difficult to scale, though they provide exact robustness bounds. Incomplete methods provide loose robustness bounds, but can be more scalable and used inside the training loop for training models to be robust and verifiable BIBREF28, BIBREF26, BIBREF19, BIBREF17. Our work is the first to extend incomplete verification to text classification, considering input perturbations on a simplex and minimising worst case bounds to adversarial attacks in text classification. We highlight that the verification of neural networks is an extremely challenging task, and that scaling complete and incomplete methods to large models remains an open challenge. <<</Formal Verification of Neural Networks.>>> <<<Representations of Combinatorial Spaces.>>> Word lattices and hypergraphs are data structures that have often been used to efficiently represent and process exponentially large numbers of sentences without exhaustively enumerating them. 
Applications include automatic speech recognition (ASR) output rescoring BIBREF29, machine translation of ASR outputs BIBREF30, paraphrase variants BIBREF31, and word segmentation alternatives BIBREF32. The specifications used to characterise the space of adversarial attacks are likewise a compact representation, and the algorithms discussed below operate on them without exhaustive enumeration. <<</Representations of Combinatorial Spaces.>>> <<</Related Work>>> <<<Methodology>>> We assume a fixed initial vector representation $\mathbf {z} _0$ of a given input sentence $z$ (e.g. the concatenation of pretrained word embeddings) and use a neural network model, i.e. a series of differentiable transformations $h_k$: where $\mathbf {z} _k$ is the vector of activations in the $k$-th layer and the final output $\mathbf {z} _K$ consists of the logits for each class. Typically each $h_k$ will be an affine transformation followed by an activation function (e.g. ReLU or sigmoid). The affine transformation can be a convolution (with the inputs and outputs having an implied 2D structure) of a vector of activations at each point in a sequence; in what follows these activations will be concatenated along the sequence to form a vector $\mathbf {z} _k$. <<<Verification>>> Verification is the process of examining whether the output of a model satisfies a given specification. Formally, this means establishing whether the following holds true for a given normal model input $\mathbf {x} _0$: $\forall \mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0):~ \mathbf {z} _K \in \mathcal {X}_\mathrm {out}$, where $\mathcal {X}_\mathrm {out}$ characterizes a constraint on the outputs, and $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ defines a neighbourhood of $\mathbf {x} _0$ throughout which the constraint should be satisfied. In our concrete use case, we consider a specification of robustness against adversarial attacks which are defined by bounded input perturbations (synonym flips up to $\delta $ words, or character flips up to $\delta $ characters) of the original sentence $x$. The attack space $\mathcal {X}_\mathrm {in} (\mathbf {x} _0)$ is the set of vector representations (embeddings) of all such perturbed sentences. Denoting by $z_{K,y}$ the logit of label $y$, we formulate the output constraint that for all classes $y: z_{K,y_\textrm {true}} \ge z_{K,y}$. This specification establishes that the prediction of all perturbed sentences $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ should correspond to the correct label $y_\textrm {true}$. This specification may equivalently be formulated as a set of half-space constraints on the logits: for each class $y$ where $\mathbf {e}_{i}$ is a one-hot vector with 1 in the $i$-th position. In other words, the true class logit should be greater or equal than those for all other classes $y$, which means the prediction remains constant. <<</Verification>>> <<<Verification as Optimisation>>> Verifying the specification in Eq. (DISPLAY_FORM10) can be done by solving the following constrained optimisation problem to find the input that would most strongly violate it: where $\mathbf {c} $ is a vector with entries $c_y = 1$, $c_{y_\textrm {true}} = -1$ and 0 everywhere else. If the optimal value of the above optimisation problem is smaller than 0, then the specification in Eq. (DISPLAY_FORM10) is satisfied, otherwise a counter-example has been found. In our case, this corresponds to a successful adversarial attack. 
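To make the optimisation objective concrete, the following minimal Python/NumPy sketch (function names are illustrative, not taken from the paper) evaluates the worst-case logit difference encoded by $\mathbf {c}$ for a single input, and applies it as a brute-force check over an explicitly enumerated set of perturbed inputs; the bound-propagation approach introduced in the following subsections replaces this enumeration.

import numpy as np

def specification_margin(logits, y_true):
    # Worst case of c^T z_K over the choices of class y != y_true,
    # i.e. max_y (z_{K,y} - z_{K,y_true}).
    # A negative value means the prediction cannot change for this input.
    diffs = logits - logits[y_true]
    diffs[y_true] = -np.inf
    return diffs.max()

def has_counter_example(perturbed_logits_list, y_true):
    # Brute-force version of the optimisation problem: the specification is
    # violated as soon as one perturbed input achieves a non-negative margin.
    return any(specification_margin(z, y_true) >= 0 for z in perturbed_logits_list)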
<<</Verification as Optimisation>>> <<<Modeling Input Perturbations using Simplices>>> In the interests of computational feasibility, we will actually attempt to verify the specification on a larger, but more tractable input perturbation space $\bar{\mathcal {X}}_\mathrm {in} \supseteq \mathcal {X}_\mathrm {in}$. Any data point that is verifiable on this larger input perturbation space is necessarily verifiable with respect to the original specification. In the domain of image classification, $\mathcal {X}_\mathrm {in}$ is often modeled as an $L_\infty $-ball, corresponding to input perturbations in which each pixel may be independently varied within a small interval. However, using such interval bounds is unsuitable for our situation of perturbations consisting of a small number $\delta $ of symbol substitutions. Although we could construct an axis-aligned bounding box $\bar{\mathcal {X}}_\mathrm {in}$ in embedding space that encompasses all of $\mathcal {X}_\mathrm {in}$, it would over-approximate the perturbation space to such an extent that it would contain perturbations where all symbols in the sentence have been substituted simultaneously. To remedy this, we propose a tighter over-approximation in the form of a `simplex' in embedding space. We first define this for the special case $\delta =1$, in which $\mathcal {X}_\mathrm {in} = \lbrace \mathbf {x} _0\rbrace \cup \lbrace \mathbf {p} ^{(m)}_0 : 1\le m\le M\rbrace $ consists of the representations of all $M$ sentences $p^{(m)}$ derived from $x$ by performing a single synonym (or character) substitution, together with the unperturbed sentence $x$ itself. In this case we define $\bar{\mathcal {X}}_\mathrm {in}$ to be the convex hull $\mathcal {S}_1$ of $\mathcal {X}_\mathrm {in}$. Note we are not considering contextual embeddings BIBREF33 here. Each `vertex' $\mathbf {p} ^{(m)}_0$ is a sequence of embedding vectors that differs from $\mathbf {x} _0$ at only one word (or character) position. For a larger perturbation radius $\delta >1$, the cardinality of $\mathcal {X}_\mathrm {in}$ grows exponentially, so manipulating its convex hull becomes infeasible. However, dilating $\mathcal {S}_1$ centered at $\mathbf {x} _0$, scaling it up by a factor of $\delta $, yields a simplex $\mathcal {S}_\delta $ with $M+1$ vertices that contains $\mathcal {X}_\mathrm {in}$. More formally, we define a region in the input embedding space based on the $M$ `elementary' perturbations $\lbrace \mathbf {p} ^{(m)}_0: m = 1 \ldots M\rbrace $ of $\mathbf {x} _0$ defined earlier for the $\delta =1$ case. For perturbations of up to $\delta $ substitutions, we define $\bar{\mathcal {X}}_\mathrm {in}(\mathbf {x} _0)$ as the convex hull of $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $, where $\mathbf {z} ^{(0)}_0=\mathbf {x} _0$ denotes the original (unperturbed) sentence representation and, for $m\ge 1$, $\mathbf {z} ^{(m)}_0 = \mathbf {x} _0+\delta \cdot (\mathbf {p} ^{(m)}_0-\mathbf {x} _0)$. The convex hull is an over-approximation of $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$: it contains the representations of all sentences derived from $x$ by performing up to $\delta $ substitutions at distinct word (or character) positions. <<</Modeling Input Perturbations using Simplices>>> <<<Interval Bound Propagation>>> To estimate the optimal value of the problem (DISPLAY_FORM12), given an input $\mathbf {z} _0$, we can propagate the upper/lower bounds on the activations $\mathbf {z} _k$ of each layer using interval arithmetic BIBREF17. 
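Before the layer-wise bounds are derived below, the following minimal NumPy sketch illustrates the interval-arithmetic step being referred to, for a fully-connected layer followed by a monotonically increasing activation. Function names are illustrative, and the initial bounds are assumed to come from passing the simplex vertices through the first layer, as detailed next.

import numpy as np

def first_layer_bounds(vertices, W1, b1, act=np.tanh):
    # Elementwise bounds on the first layer's activations, obtained by passing
    # every simplex vertex z_0^(m) through the layer (no convex hull needed).
    outs = np.stack([act(W1 @ v + b1) for v in vertices])   # (M+1, hidden)
    return outs.min(axis=0), outs.max(axis=0)

def interval_affine(lower, upper, W, b):
    # Interval arithmetic for z' = W z + b: propagate the centre and radius of
    # the box [lower, upper]; |W| gives the worst-case expansion of the radius.
    mu, r = (upper + lower) / 2.0, (upper - lower) / 2.0
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r
    return mu_out - r_out, mu_out + r_out

def interval_activation(lower, upper, act=np.tanh):
    # A monotonically increasing activation maps interval bounds elementwise.
    return act(lower), act(upper)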
We begin by computing interval bounds on the first layer's activations. Recall that any input $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}$ will lie within the convex hull of certain vertices $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $. Then, assuming that the first layer $h_1$ is an affine transformation (e.g. linear or convolutional) followed by a monotonic activation function, the lower and upper bounds on the components $z_{1,i}$ of the first layer's activations $\mathbf {z} _1$ are as follows: Note that these bounds are efficient to compute (by passing each perturbation $\mathbf {z} ^{(m)}_0$ through the first layer); in particular there is no need to compute the convex hull polytope. For subsequent layers $k>1$, the bounds on the components $z_{k,i}$ of $\mathbf {z} _k$ are: The above optimisation problems can be solved in closed form quickly for affine layers and monotonic activation functions, as illustrated in IBP. Finally, the lower and upper bounds of the output logits $\mathbf {z} _K$ can be used to construct an upper bound on the solution of (DISPLAY_FORM12): <<<Verifiable Training.>>> The upper bound in (DISPLAY_FORM17) is fast to compute (only requires two forward passes for upper and lower bounds through the network). Hence, we can define a loss to optimise models such that the models are trained to be verifiable. Solving (DISPLAY_FORM17) is equivalent to finding the worst-case logit difference, and this is achieved when the logit of the true class is equal to its lower bound, and all other logits equal to their upper bounds. Concretely, for each class $y \ne y_\textrm {true} $: $\hat{\mathbf {z}}_{K,y}(\delta ) = \overline{\mathbf {z}}_{K,y} (\delta ) $, and $\hat{\mathbf {z}}_{K,y_\textrm {true}}(\delta ) = \underline{\mathbf {z}}_{K,y_\textrm {true}} (\delta ) $. The training loss can then be formulated as where $\ell $ is the cross-entropy loss, $\kappa $ a hyperparameter that controls the relative weights between the classification loss $L_\textrm {normal}$ and specification loss $L_\textrm {spec}$. If $\delta = 0$ then $\mathbf {z} _K = \hat{\mathbf {z}}_K(\delta )$, and thus $L$ reduces to a standard classification loss. Empirically, we found that a curriculum-based training, starting with $\kappa $=1 and linearly decreasing to 0.25, is effective for verifiable training. <<</Verifiable Training.>>> <<</Interval Bound Propagation>>> <<</Methodology>>> <<<Experiments>>> We conduct verification experiments on two text classification datasets, Stanford Sentiment Treebank (SST) BIBREF15 and AG News corpus, processed in BIBREF16. We focus on word-level and character-level experiments on SST and character-level experiments on AG News. Our specification is that models should preserve their prediction against up to $\delta $ synonym substitutions or character typos, respectively. <<<A Motivating Example>>> We provide an example from Table TABREF29 to highlight different evaluation metrics and training methods. Given a sentence, “you ' ve seen them a million times .”, that is predicted correctly (called Nominal Accuracy) by a classification model, we want to further examine whether the model is robust against character typos (e.g., up to $\delta =3$ typos) to this example. One way is to use some heuristic to search for a valid example with up to 3 typos that can change the prediction the most (called adversarial example). We evaluate the model using this adversarial example and report the performance (called Adversarial Accuracy). 
However, even if the adversarial example is predicted correctly, one can still ask: is the model truly robust against any typos (up to 3) to this example? In order to have a certificate that the prediction will not change under any $\delta =3$ character typos (called verifiably robust), we could in theory exhaustively search over all possible cases and check whether any of the predictions is changed (called Oracle Accuracy). If we only allow a character to be replaced by another character nearby on the keyboard, already for this short sentence we need to exhaustively search over 2,951 possible perturbations. To avoid this combinatorial growth, we can instead model all possible perturbations using the proposed simplex bounds and propagate the bounds through IBP at the cost of two forward passes. Following Eq. (DISPLAY_FORM12), we can check whether this example can be verified to be robust against all perturbations (called IBP-Verified Accuracy). There are also a number of ways in which the training procedure can be enhanced to improve the verifiable robustness of a model against typos to the sentence. The baseline is to train the model with the original/normal sentence directly (called Normal Training). Another way is to randomly sample typo sentences among the 2,951 possible perturbations and add these sentences to the training data (called Data Augmentation Training). Yet another way is to find, at each training iteration, the adversarial example among the (subset of) 2,951 possible perturbations that can change the prediction the most; we then use the adversarial example alongside the training example (called Adversarial Training). Finally, as simplex bounds with IBP is efficient to run, we can train a model to be verifiable by minimising Eq. (DISPLAY_FORM19) (called Verifiable Training). <<</A Motivating Example>>> <<<Baselines>>> In this section we detail our baseline models. <<<Adversarial Training.>>> In adversarial training BIBREF34, BIBREF20, the goal is to optimise the following saddle point problem: where the inner maximisation problem is to find an adversarial perturbation $\mathbf {z} _0\in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ that can maximise the loss. In the inner maximisation problem, we use HotFlip BIBREF5 with perturbation budget $\delta $ to find the adversarial example. The outer minimisation problem aims to update model parameters such that the adversarial risk of (DISPLAY_FORM24) is minimised. To balance between the adversarial robustness and nominal accuracy, we use an interpolation weight of 0.5 between the original cross-entropy loss and the adversarial risk. <<</Adversarial Training.>>> <<<Data Augmentation Training.>>> In the data augmentation setup, we randomly sample a valid perturbation $z$ with perturbation budget $\delta $ from a normal input $x$, and minimise the cross-entropy loss given the perturbed sample $z$ (denoted as data augmentation loss). We also set the interpolation weight between the data augmentation loss and the original normal cross-entropy loss to 0.5. <<</Data Augmentation Training.>>> <<<Normal Training.>>> In normal training, we use the likelihood-based training using the normal training input $x$. <<</Normal Training.>>> <<</Baselines>>> <<<Setup>>> We use a shallow convolutional network with a small number of fully-connected layers for SST and AG News experiments. The detailed model architectures and hyperparameter details are introduced in the supplementary material. 
Although we use shallow models for ease of verifiable training, our nominal accuracy is on par with previous work such as BIBREF15 (85.4%) and BIBREF35 (84.3%) in SST and BIBREF16 (87.18%) in AG News. During training, we set the maximum number of perturbations to $\delta =3$, and evaluate performance with the maximum number of perturbations from $\delta =1$ to 6 at test time. For word-level experiments, we construct the synonym pairs using the PPDB database BIBREF36 and filter the synonyms with fine-grained part-of-speech tags using Spacy BIBREF37. For character-level experiments, we use synthetic keyboard typos from BIBREF3, and allow one possible alteration per character that is adjacent to it on an American keyboard. The allowable input perturbation space is much larger than for word-level synonym substitutions, as shown in Table TABREF48. <<</Setup>>> <<<Evaluation Metrics>>> We use the following four metrics to evaluate our models: i) test set accuracy (called Acc.), ii) adversarial test accuracy (called Adv. Acc.), which uses samples generated by HotFlip attacks on the original test examples, iii) verifiable accuracy under IBP verification (called IBP-verified), that is, the ratio of test samples for which IBP can verify that the specification is not violated, and iv) exhaustively verified accuracy (called Oracle), computed by enumerating all possible perturbations given the perturbation budget $\delta $, where a sample is verifiably robust if the prediction is unchanged under all valid perturbations. <<</Evaluation Metrics>>> <<<Results>>> Table TABREF28 shows the results of IBP training and baseline models under $\delta =3$ and $\delta =2$ perturbations on SST and AG News, respectively. Figures FIGREF31 and FIGREF36 show the character- and word-level results with $\delta $ between 1 and 6 under four metrics on the SST test set; similar figures for SST word-level (adversarial training, data augmentation) models and AG News dataset can be found in the supplementary material. <<<Oracle Accuracy and Adversarial Accuracy.>>> In Table TABREF28, comparing adversarial accuracy with exhaustive verification accuracy (oracle), we observe that although adversarial training is effective at defending against HotFlip attacks (74.9 / 76.8 / 85.5%), the oracle adversarial accuracy under exhaustive testing (25.8 / 74.6 / 81.6%) is much lower in SST-character / SST-word / AG-character level, respectively. For illustration, we show some concrete adversarial examples from the HotFlip attack in Table TABREF29. For some samples, even though the model is robust with respect to HotFlip attacks, its predictions are incorrect for stronger adversarial examples obtained using the exhaustive verification oracle. This underscores the need for verification, as robustness with respect to suboptimal adversarial attacks alone might give a false sense of security. <<</Oracle Accuracy and Adversarial Accuracy.>>> <<<Effectiveness of Simplex Bounds with IBP.>>> Rather than sampling individual points from the perturbation space, IBP training covers the full space at once. The resulting models achieve the highest exhaustively verified accuracy at the cost of only moderate deterioration in nominal accuracy (Table TABREF28). At test time, IBP allows for constant-time verification with arbitrary $\delta $, whereas exhaustive verification requires evaluation over an exponentially growing search space. 
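As an illustration of the gap in cost discussed above, the sketch below contrasts the two checks: the oracle enumerates every combination of up to $\delta $ substitutions at distinct positions, whereas the IBP check only needs the logit bounds obtained from the two bounding forward passes. The model callable and helper names are hypothetical.

import numpy as np
from itertools import combinations, product

def oracle_verified(model, tokens, substitutions, delta, y_true):
    # Exhaustive (oracle) verification: enumerate every way of applying up to
    # `delta` substitutions at distinct positions and test the prediction.
    # `substitutions[i]` lists the allowed replacements at position i; the
    # unperturbed input is assumed to be checked separately.
    positions = [i for i, subs in enumerate(substitutions) if subs]
    for k in range(1, delta + 1):
        for chosen in combinations(positions, k):
            for repl in product(*(substitutions[i] for i in chosen)):
                perturbed = list(tokens)
                for i, tok in zip(chosen, repl):
                    perturbed[i] = tok
                if np.argmax(model(perturbed)) != y_true:
                    return False          # counter-example found
    return True                           # robust under all valid perturbations

def ibp_verified(lower_logits, upper_logits, y_true):
    # IBP verification from two forward passes: verified iff every other
    # class's upper bound stays below the true class's lower bound.
    others = np.delete(upper_logits, y_true)
    return bool(np.all(others < lower_logits[y_true]))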
<<</Effectiveness of Simplex Bounds with IBP.>>> <<<Perturbation Space Size.>>> In Table TABREF28, when the perturbation space is larger (SST character-level vs. SST word-level), (a) across models, there is a larger gap in adversarial accuracy and true robustness (oracle); (b) the difference in oracle robustness between IBP and adversarial training is even larger (73.1% vs. 25.8% and 76.5% vs. 74.6%). <<</Perturbation Space Size.>>> <<<Perturbation Budget.>>> In Figures FIGREF31 and FIGREF36, we compare normal training, adversarial training, data augmentation, and verifiable training models with four metrics under various perturbation budgets on the SST dataset. Overall, as the perturbation budget increases, the adversarial accuracy, oracle accuracy, and IBP-verified accuracy decrease. We can observe that even for large perturbation budgets, verifiably trained models are still able to verify a sizable number of samples. Again, although adversarial accuracy flattens for larger perturbation budgets in the word level experiments, oracle verification can further find counterexamples to change the prediction. Note that exhaustive verification becomes intractable with large perturbation sizes. <<</Perturbation Budget.>>> <<<Computational Cost of Exhaustive Verification.>>> The perturbation space in NLP problems is discrete and finite, and a valid option to verify the specification is to exhaustively generate predictions for all $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in} (\mathbf {x} _0)$, and then check if at least one does not match the correct label. Conversely, such an exhaustive (oracle) approach can also identify the strongest possible attack. But the size of $\mathcal {X}_\mathrm {in}$ grows exponentially with $\delta $, and exhaustive verification quickly becomes prohibitively expensive. In Table TABREF48, we show the maximum perturbation space size in the SST and AG News test set for different perturbation radii $\delta $. This number grows exponentially as $\delta $ increases. To further illustrate this, Figure FIGREF49 shows the number of forward passes required to verify a given proportion of the SST test set for an IBP-trained model using exhaustive verification and IBP verification. IBP reaches verification levels comparable to an exhaustive verification oracle, but requires only two forward passes to verify any sample – one pass for computing the upper, and one for the lower bounds. Exhaustive verification, on the other hand, requires several orders of magnitude more forward passes, and there is a tail of samples with extremely large attack spaces. <<</Computational Cost of Exhaustive Verification.>>> <<</Results>>> <<<Counter-Fitted Embeddings>>> As shown in Figures FIGREF31 and FIGREF36, although IBP can verify arbitrary networks in theory, the verification bound is very loose except for models trained to be IBP-verifiable. One possible reason is the potentially large volume of the perturbation simplex. Since representations of substitution words/characters are not necessarily close to those of synonyms/typos in embedding space, the vertices of the simplex could be far apart, and thus cover a large area in representation space. Therefore, when propagating the interval bounds through the network, the interval bounds become too loose and fail to verify most of the examples if the models are not specifically trained. 
To test this hypothesis, we follow BIBREF38 and use fine-tuned GloVe embeddings trained to respect linguistic constraints; these representations (called counter-fitted embeddings) force synonyms to be closer and antonyms to be farther apart using word pairs from the PPDB database BIBREF36 and WordNet BIBREF39. We repeat the word level experiments with these counter-fitted embeddings, Figures FIGREF36 and FIGREF36 show the experimental results. We observe that IBP verified accuracy is now substantially higher across models, especially for $\delta =1, 2, 3$. The examples which IBP can verify increase by up to 33.2% when using the counter-fitted embeddings (normal training, $\delta =1$). Moreover, adversarial and exhaustively verified accuracy are also improved, at the cost of a mild deterioration in nominal test accuracy. The IBP-trained model also further improves both its oracle accuracy and IBP verified accuracy. These results validate our hypothesis that reducing the simplex volume via soft linguistic constraints can provide even tighter bounds for IBP, resulting in larger proportions of verifiable samples. <<</Counter-Fitted Embeddings>>> <<</Experiments>>> <<<Discussion>>> Our experiments indicate that adversarial attacks are not always the worst adversarial inputs, which can only be revealed via verification. On the other hand, exhaustive verification is computationally very expensive. Our results show that using the proposed simplex bounds with IBP can verify a sizable amount of test samples, and can be considered a potent verification method in an NLP context. We note however two limitations within the scope of this work: i) limited model depth: we only investigated models with few layers. IBP bounds are likely to become looser as the number of layers increases. ii) limited model types: we only studied models with CNN and fully connected layers. We focused on the HotFlip attack to showcase specification verification in the NLP context, with the goal of understanding factors that impact its effectiveness (e.g. the perturbation space volume, see Section SECREF50). It is worth noting that symbol substitution is general enough to encompass other threat models such as lexical entailment perturbations BIBREF40, and could potentially be extended to the addition of pre/postfixes BIBREF2, BIBREF41. Interesting directions of future work include: tightening IBP bounds to allow applicability to deeper models, investigating bound propagation in other types of neural architectures (e.g. those based on recurrent networks or self-attention), and exploring other forms of specifications in NLP. <<</Discussion>>> <<<Conclusion>>> We introduced formal verification of text classification models against synonym and character flip perturbations. Through experiments, we demonstrated the effectiveness of the proposed simplex bounds with IBP both during training and testing, and found weaknesses of adversarial training compared with exhaustive verification. Verifiably trained models achieve the highest exhaustive verification accuracy on SST and AG News. IBP verifies models in constant time, which is exponentially more efficient than naive verification via exhaustive search. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related Work, Abstract" ], "type": "disordered_section" }
1908.06006
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding <<<Abstract>>> The Hierarchical Attention Network (HAN) has made great strides, but it suffers a major limitation: at level 1, each sentence is encoded in complete isolation. In this work, we propose and compare several modifications of HAN in which the sentence encoder is able to make context-aware attentional decisions (CAHAN). Furthermore, we propose a bidirectional document encoder that processes the document forwards and backwards, using the preceding and following sentences as context. Experiments on three large-scale sentiment and topic classification datasets show that the bidirectional version of CAHAN outperforms HAN everywhere, with only a modest increase in computation time. While results are promising, we expect the superiority of CAHAN to be even more evident on tasks requiring a deeper understanding of the input documents, such as abstractive summarization. Code is publicly available. <<</Abstract>>> <<<Introduction>>> Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$. One of the earliest and most influential examples is the Hierarchical Attention Network (HAN) of BIBREF5 (see Fig. FIGREF6 and section SECREF2). It is a two-level architecture, where at level 1, each sentence in the document is separately encoded by the same sentence encoder, resulting in a sequence of sentence vectors. That sequence is then processed at level 2 by the document encoder which returns a single vector representing the entire document. The sentence and document encoders are both self-attentional bidirectional Recurrent Neural Networks (RNNs), with different parameters. <<<Observed problem>>> HAN was highly successful and established new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. 
On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that is has already been covered. CAHAN is then able to cover the other topics in the document (“seafood”,“scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence). As another example, consider the edge case of a document containing the same sentence repeated several times, as shown in Fig. FIGREF3. With HAN, the exact same embedding is produced for each instantiation of the sentence, as a result of the context-blind self-attention mechanism always making the same alignment decisions. However, the context-aware sentence encoder of CAHAN allows it to extract complementary, rather than redundant information, from each instantiation of the sentence. This results in better coverage (“reasonably priced”, “arrived late”), in a richer document representation, and ultimately in a more accurate prediction (positive instead of very positive). One may argue that in basic HAN, the document encoder at level 2 already does capture some notion of context, by assigning importance scores to sentences. However, at level 2, the sentence vectors have already been formed, and it is too late to modify them. Since the document encoder can only rank the sentence representations, it cannot address issues like high redundancy. In that case, important subtopics or details in the document will not be covered, no matter sentence scores. <<</Observed problem>>> <<<Context-aware HAN>>> In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context. The remainder of this paper is structured as follows. We start by formally introducing basic HAN (section SECREF2), we then explain our contributions (section SECREF3), and detail our experimental setup (section SECREF4). Finally, we interpret our results and list areas of future development (sections SECREF5 and SECREF7). Related work is reviewed in section SECREF6. <<</Context-aware HAN>>> <<</Introduction>>> <<<HAN>>> The baseline HAN model as introduced by BIBREF5 is shown in Fig. FIGREF6 along with our modifications (disregard the bold lines for the baseline). The sentence and document encoders, used respectively at level 1 and level 2, have different parameters but share the exact same architecture. Thus, in what follows, we only describe the sentence encoder in detail. <<<Notation>>> Next, we use boldface upper case for tensors, upper case for matrices, boldface lower case for vectors, and lower case for scalars. We define a document $\mathbf {X} \in \mathbb {R}^{N \times T_i \times d}$ as a sequence of $N$ sentences $(S_1, \dots , S_N)$. Each sentence $S_i$ is a sequence of $T_i$ $d$-dimensional word vectors $(\mathbf {x}_{i1}, \dots , \mathbf {x}_{iT_i}) \in \mathbb {R}^{T_i \times d}$. 
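As a concrete reading of this notation, a document can be materialised as a single zero-padded array (zero-padding of sentences and documents is also how batches are built later in the training details); a minimal NumPy sketch with an illustrative helper name:

import numpy as np

def pad_document(sentences, d, T_max):
    # Stack N variable-length sentences (lists of d-dimensional word vectors)
    # into one (N, T_max, d) array, zero-padding sentences shorter than T_max.
    X = np.zeros((len(sentences), T_max, d), dtype=np.float32)
    for i, sent in enumerate(sentences):
        T_i = min(len(sent), T_max)
        X[i, :T_i] = np.asarray(sent[:T_i], dtype=np.float32)
    return X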
<<</Notation>>> <<<Sentence encoder>>> First, the sentence-level bidirectional RNN $f_s$ processes the input sentence $S_i$ and returns a sequence of $T_i$ $2d_s$-dimensional hidden states $(\mathbf {h}_{i1},\dots , \mathbf {h}_{iT_i}) \in \mathbb {R}^{T_i \times 2d_s}$. $f_s$ is composed of two non-stacking RNNs $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ with Gated Recurrent Units BIBREF8, respectively parsing $S_i$ from left to right and right to left: $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ have the same hidden layer dimensionality $d_s$, but different parameters. At each time step $t$, the word annotations they return are concatenated, producing $2d_s$-dimensional annotations that summarize the immediate context surrounding each word: Then, a self-attention mechanism computes the representation $\mathbf {s}_i$ of sentence $S_i$ as a weighted sum of its word annotations: Where the vector of attentional coefficients $\mathbf {\alpha }$ is a softmax-normalized version of the alignment vector $\mathbf {e}$, which itself is obtained by passing the word annotations through a dense layer (parameterized by $W_s \in \mathbb {R}^{2d_s\times 2d_s}$) and comparing the output with a trainable vector $\mathbf {u}_s \in \mathbb {R}^{2d_s}$: $\mathbf {u}_s$ is initialized randomly. It can be interpreted as a “super-word” whose vector contains the ideal combination of latent topics, on average. The closest the annotation of a word is to this ideal representation, the more attention that word will be given. The sentence encoder is applied to all sentences in document $\mathbf {X}$, producing a sequence of $N$ sentence vectors $(\mathbf {s_1},\dots ,\mathbf {s_N}) \in \mathbb {R}^{N\times 2d_s}$. <<</Sentence encoder>>> <<<Document encoder>>> The document encoder is a self-attentional bidirectional GRU-RNN, like the sentence encoder, but it has different parameters. The dimensionality of its hidden states is $2d_d$. The document encoder is applied only once, to the sequence of sentence vectors, to produce the sequence of sentence annotations $(\mathbf {h}_{1}, \dots , \mathbf {h}_{N})$. Then, a self-attention layer outputs the final document vector. <<</Document encoder>>> <<</HAN>>> <<<Proposed architecture: CAHAN>>> As was previously explained, each sentence is encoded independently by HAN, without considering any kind of contextual information. To solve this issue, we inject a context vector $\mathbf {c_i}$ into the self-attention mechanism, to guide the model during the computation of the word alignment coefficients. In effect, Eq. DISPLAY_FORM12 becomes: We propose two approaches for computing $\mathbf {c_i}$, namely CAHAN-SUM and CAHAN-RNN, shown as the two bolded connections in Fig. FIGREF6. <<<Summed context (CAHAN-SUM)>>> We introduce two settings, (1) left-to-right and bidirectional. Whenever there is no preceding/following sentence, i.e., at the beginning/end of a document, the context vector is initialized with zeroes. <<<Left-to-right (LR)>>> In the LR case, the context vector is computed as the sum of the preceding sentence representations: <<</Left-to-right (LR)>>> <<<Bidirectional (BI)>>> In the BI case, we compute two context vectors, respectively by summing the representations of the sentences preceding and following the current sentence $S_i$. These two vectors are passed to two identical context-aware self-attention mechanisms (Eq. DISPLAY_FORM14) with different parameters. 
The resulting forward and backward sentence representations are then processed respectively by the forward and backward RNNs of the document encoder at level 2, and the resulting annotations are concatenated to produce the final sentence annotations. CAHAN-SUM was inspired by the coverage vectors of seq2seq architectures, which have been shown very effective in addressing under(over)-translation in NMT BIBREF9, and repetition in summarization BIBREF10. Such coverage vectors are typically computed as the sum, over all previous decoder steps, of the attention distribution over the source tokens. However, in our case, we cannot keep track of the attention distribution history, since sentences are unique and cannot be aligned. This is why we work with sentence representations instead. <<</Bidirectional (BI)>>> <<<Centroid version (@!START@$\mu $@!END@)>>> $\overrightarrow{\mathbf {c}_i}$, as defined by Eq. DISPLAY_FORM17, grows larger in magnitude as $i$ increases (the sum has more and more terms), which can blur the alignment decisions for the sentences at the end of a document (LR case), or both at the end and beginning of a document, when reading forwards and backwards (BI case). Therefore, we also experiment with a centroid, rather than sum, context vector: <<</Centroid version (@!START@$\mu $@!END@)>>> <<</Summed context (CAHAN-SUM)>>> <<<Recurrent Context (CAHAN-RNN)>>> Here, we capitalize on the capability of RNNs, especially when equipped with LSTM or GRU units, to keep track of information over long time periods. We simply use as context vector the document encoder annotation at the preceding/following time step. That is, we have, in the LR case: By design, $\mathbf {h}_{i-1}$ summarizes the entire history $(\mathbf {s_1},\dots ,\mathbf {s_{i-1}})$ of sentence vectors, with a preference for the most recent time steps. If the sequence is very long though, even a GRU-RNN will eventually forget about the first elements. However, for the relatively short documents we experiment with (see Table TABREF29), we can assume the annotations of the document encoder to faithfully represent the entire sequence. <<</Recurrent Context (CAHAN-RNN)>>> <<<Gated context>>> In NMT, BIBREF11 introduced a gating mechanism to allow the decoder to balance the contribution of the source and target information in generating the next word. The same idea can be found in numerous other NMT studies, e.g., BIBREF2, BIBREF12, BIBREF13. Inspired by this line of research, we propose a modification of Eq. DISPLAY_FORM14 to let our model explicitly decide how much contextual information it should take into account in making its alignment decisions: $\mathbf {\lambda }$ is produced by a trainable mechanism taking as input the word annotations and the context vector: The sigmoid activation ensures that $\mathbf {\lambda }$ plays a filtering role, by squashing all its entries to $[0,1]$. The gate gives more expressiveness to the attention mechanism. Indeed, contextual information should not always be given the same importance, depending on the situation. E.g., when most of the document has been processed, context is likely to be very important, in order to limit redundancy and increase coverage. However, at the beginning of a document, or in the case of a very short or focused sentence, context might not be useful as only one single topic might be extractable from the sentence anyways. 
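Since the exact forms of Eq. DISPLAY_FORM14 and of the gate are not reproduced in this extract, the sketch below gives one plausible reading in plain NumPy: the alignment score of each word combines its annotation with an (optionally gated) linear transformation of the context vector, and the context itself is either the sum or the centroid of the neighbouring sentence vectors (CAHAN-SUM); for CAHAN-RNN the adjacent document-encoder annotation would be passed in instead. All names are illustrative.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_aware_attention(H, c, W_s, W_c, u_s, W_l1=None, W_l2=None, gated=False):
    # H: (T, 2*d_s) word annotations of one sentence; c: context vector c_i.
    ctx = W_c @ c                                   # contribution of the context
    if gated:
        lam = sigmoid(H @ W_l1.T + W_l2 @ c)        # (T, 2*d_s) filter in [0, 1]
        scores = np.tanh(H @ W_s.T + lam * ctx) @ u_s
    else:
        scores = np.tanh(H @ W_s.T + ctx) @ u_s     # (T,) alignment scores
    alpha = softmax(scores)                         # attention coefficients
    return alpha @ H                                # sentence vector s_i

def summed_context(neighbour_vectors, dim, centroid=False):
    # CAHAN-SUM: sum (or centroid, the mu variant) of the representations of
    # the preceding -- or, in the BI case, following -- sentences; zeros when
    # there is no such sentence (start/end of the document).
    if len(neighbour_vectors) == 0:
        return np.zeros(dim)
    c = np.sum(neighbour_vectors, axis=0)
    return c / len(neighbour_vectors) if centroid else c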
From an optimization perspective, $\mathbf {\lambda }$ also has the desirable effect of regulating the magnitude of the context vector, preventing it from pushing the tanh to regions of very small gradient. This is especially useful with CAHAN-SUM, as in that case, $\mathbf {c}_i$ gets large towards the end/beginning of documents (forwards/backwards reading). <<</Gated context>>> <<<Complexity and sequentiality>>> Assuming that $d \sim 2d_s$ and that $d_s \sim d_d$, which holds in practice under reasonable settings, all matrix multiplications in the network have similar complexity, of order of magnitude $\mathcal {O}(d^2)$. Moreover, since we use GRU-RNNs, there are 6 matrix multiplication per encoder. This number is doubled, as we use bidirectional RNNs. Finally, the two self-attention mechanisms, one at each level, add two multiplications. Therefore, in the HAN baseline architecture, there are a total of 26 matrix multiplications (13 at each level). To that, CAHAN-SUM and CAHAN-RNN simply add one matrix multiplication ($W_c\mathbf {c}_i$ in Eq. DISPLAY_FORM14) in the LR case and two in the BI case. This corresponds to negligible 4% and 8% increases in total computational cost. On top of that, gating adds two multiplications in the LR case ($W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ in Eq. DISPLAY_FORM25) and four in the BI case. All in all, this represents three and six extra multiplications compared to basic HAN, resp. in the LR and BI cases. Again, this corresponds to small increases in computational cost, of 11.5% and 23%, respectively. However, with CAHAN-SUM, the representations of the preceding/following sentences are now required before computing the current sentence representation. With CAHAN-RNN, one even has to wait until the level 2 RNN has processed the preceding/following sentence vectors before being able to encode the current sentence. Therefore, the sentence encoding process, which was parallelizable with basic HAN due to independence, has now become a sequential process. This is why in practice, we observe slightly greater runtime increases, in the range 5-22% (see Table TABREF43). <<</Complexity and sequentiality>>> <<</Proposed architecture: CAHAN>>> <<<Experimental setup>>> <<<Datasets>>> We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced, for all datasets. <<</Datasets>>> <<<Model configuration>>> This subsection describes the preprocessing and hyperparameter setting we used. <<<Preprocessing and word embeddings>>> For preprocessing (and the HAN baseline), we used the publicly available implementation of BIBREF15, which closely follows the description and details given in the original HAN paper BIBREF5. More precisely, on each dataset, we randomly split the training set into training (90%) and validation (10%). Documents are then tokenized into sentences and sentences are tokenized into tokens. The tokens appearing less than 5 times in the corpus are replaced with a special UNK token. Finally, we pre-train our own word vectors with word2vec BIBREF16 on the training and validation splits. 
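A minimal sketch of this preprocessing step is given below; the paper does not name a specific word2vec implementation, so the use of gensim (with gensim 4.x parameter names) is an assumption, and sentence/word tokenization itself is left out.

from collections import Counter
from gensim.models import Word2Vec   # assumed implementation; API of gensim 4.x

def replace_rare_tokens(tokenized_sentences, min_count=5, unk="UNK"):
    # Tokens seen fewer than `min_count` times in the corpus become UNK.
    counts = Counter(tok for sent in tokenized_sentences for tok in sent)
    return [[tok if counts[tok] >= min_count else unk for tok in sent]
            for sent in tokenized_sentences]

def pretrain_embeddings(sentences, d=200):
    # Pre-train d-dimensional word vectors on the training + validation splits
    # (d = 200 on Amazon/Yahoo!, 100 on Yelp, per the hyperparameter section).
    model = Word2Vec(sentences, vector_size=d, min_count=1, workers=4)
    return model.wv   # token -> vector lookup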
<<</Preprocessing and word embeddings>>> <<<Hyperparameters>>> We do not tune any hyperparameter except the learning rate (see subsection SECREF35). We set the hidden layer dimensionality of the two RNN encoders to $d_s=50$ and $d_d=50$. Thus, the word annotations, sentence vectors, sentence annotations and document vector all have size 100. With regularization in mind, we set the dimensionality of the word embeddings to $d=200$ on the very large datasets (Amazon and Yahoo!) and to $d=100$ on Yelp, as shown in Table TABREF29. We also use a greater batch size of 128 on the large datasets, versus 64 on Yelp. <<</Hyperparameters>>> <<</Model configuration>>> <<<Training details>>> We zero-pad sentences and documents. Like in BIBREF5, to make the most out of each batch, we ensure they are as dense as possible by using a bucketing strategy. More precisely, we build each batch so that it contains documents of approximately the same size, in number of sentences. For regularization, we use dropout BIBREF17 with a rate of 0.5 at each layer. For classification, the document vectors are passed to a dense layer with softmax activation, whose dimensionality is equal to the number of categories to be predicted. Initialization has a significant impact on performance. To make sure the differences we measure are due to differences in the models and not in initial condition, we use the same initialization weights for each model. <<<SGD with cyclical learning rate>>> To minimize the categorical cross-entropy loss, we use the stochastic gradient descent optimizer with a triangular cyclical learning rate schedule and opposite triangular momentum schedule BIBREF18, BIBREF19. Following the authors' recommendations, we use a fixed $[0.85,0.95]$ momentum range, while for the learning rate, we perform a range test on the validation set, for each model, searching the $[0.001,3]$ range. With a triangular schedule, the learning rate linearly increases for a certain number of iterations (half-cycle), and then linearly decreases back to its initial value during the second half of the cycle. Cycles are repeated until training ends. High learning rate values make training faster, by allowing large updates and the use of greater batch sizes while keeping the amount of regularization constant. Also, the cyclical schedule injects beneficial stochastic noise to the gradient updates, which improves generalization BIBREF20. We use cycles of 12 epochs, and an early stopping strategy, monitoring the test loss, with a patience of slightly more than one cycle. We set the maximum number of epochs for all models to 50. <<</SGD with cyclical learning rate>>> <<</Training details>>> <<</Experimental setup>>> <<<Results>>> As can be seen in Table TABREF37, the best version of CAHAN (SUM-BI-$\Sigma $) consistently outperforms the HAN baseline, which shows that taking contextual information into account helps producing better document representations. Also, the two unidirectional variants (LR) slightly underperform the baseline and are clearly inferior to BI, which illustrates the value added by processing the document forwards and backwards, using preceding and following sentences as context. <<<Summing vs. averaging>>> In the unidirectional case, it is surprising to note that CAHAN-SUM-LR-$\mu $ is slightly better than CAHAN-SUM-LR-$\Sigma $, i.e., the centroid-based context vector (Eq. DISPLAY_FORM20) is better than the sum-based one (Eq. DISPLAY_FORM17). 
Indeed, from an information theory standpoint, it should be the opposite, as summing keeps track of all information whereas averaging is lossy. We hypothesize that towards the end of a document, the sum-based context vector grows large in magnitude, which perturbs the alignment decisions and deteriorates the quality of the sentence vectors. On the other hand, the centroid-based vector, which has constant magnitude, does not suffer from this issue. We further hypothesize that this issue is attenuated in the bidirectional case (CAHAN-SUM-BI-$\mu $ and CAHAN-SUM-BI-$\Sigma $ are on par) due to a counterbalancing phenomenon. Indeed, the last sentences processed by the left-to-right encoder are the first ones processed by the right-to-left encoder. Therefore, through concatenation, the overall quality of the sentence embeddings stays constant. <<</Summing vs. averaging>>> <<<Gating>>> As expected, gating improves performance, especially for the $\Sigma $ variants of CAHAN-SUM (and especially the LR ones). To be noted are significant boosts of 0.45 and 0.24 in accuracy respectively for CAHAN-SUM-LR-$\Sigma $ and CAHAN-SUM-BI-$\Sigma $ on Yelp. On Amazon, gating also offers CAHAN-SUM-LR-$\Sigma $ a nice 0.27 improvement. These positive results give a clue that regulating the magnitude of the context vector $\mathbf {c}_i$ is indeed beneficial. Nevertheless, gating also improves the performance of the $\mu $ variants of CAHAN, which do not suffer from the context vector magnitude issue. This shows that gating is also helpful via giving more expressiveness to the model. For instance, on Amazon, gating boosts the performance of CAHAN-SUM-BI-$\mu $ by 0.12. It is interesting to note that overall, gating is mostly effective on Yelp and Amazon. We attribute this to the difference in task. Sentiment analysis may rely more on contextual information than topic classification. <<</Gating>>> <<<CAHAN-RNN-BI>>> The consistently bad performance of CAHAN-RNN-BI is to be noted. This was unexpected, as an equivalent approach was used by BIBREF6 for dialogue act classification, with significant improvements. We hypothesize that in our case, CAHAN-RNN-BI is not effective because, unlike utterances in a speech transcription, sentences in a document are not ordered in a temporal fashion. In other words, sentences far away from the current sentence are not necessarily less relevant than closer sentences. Thus, considering each sentence equally is better than imposing an implicit time-decay via a RNN. <<</CAHAN-RNN-BI>>> <<<Runtimes>>> We compare the average runtime per iteration of some variants of CAHAN to that of basic HAN in Table TABREF43. For CAHAN-SUM-$\Sigma $, we observe that the unidirectional variant (LR) is 5.7% slower than basic HAN (37 vs. 35ms per iteration), whereas the bidirectional variant (BI) is 23% slower (43 vs. 35 ms). When gating, these number increase to 14.3% and 37% (40 and 48ms vs. 35ms). These differences are not far from our theoretical expectations (see subsection SECREF26), especially for LR. Indeed, recall that based on matrix multiplication counts, we had forecasted increases of 4% and 8% (11.5% and 23% when using gating), respectively for LR and BI. The gap for BI can be explained by a probable bottleneck in the implementation. CAHAN-RNN adds the same number of matrix multiplications as CAHAN-SUM, so we should in principle observe the same increases. 
However, as was explained in subsection SECREF26, with CAHAN-RNN we have to wait until the level 2 RNN has processed the preceding or preceding/following sentence vectors (LR or BI case) before being able to encode the current sentence. This explains the extra-time needed (40 vs. 37ms and 49 vs. 43ms). <<</Runtimes>>> <<</Results>>> <<<Related work>>> In what follows, we provide a review of the relevant literature. One should note that by context, in this paper, we do not refer to the intra-sentence or internal context vector of seq2seq encoders BIBREF21, BIBREF11, BIBREF13. Rather, we refer to the cross-sentence, external, or document-level context. A few studies only have focused on developing models that take that type of context into account. Most of these studies originate from NMT. We briefly describe them next. BIBREF2 obtain a global context vector by feeding a fixed number of the previous source sentences to HAN. They then compare two ways of injecting it into the encoder-decoder model. First, they propose a warm-start approach, in which the encoder and/or decoder hidden states are initialized with the context vector. Second, they experiment with an auxiliary strategy in which the intra-sentence context vector of the encoder is concatenated with the global context vector and passed either (i) directly to the decoder, or (ii) after going through a filtering gate. However, unlike our mechanism and that of BIBREF11, BIBREF12, BIBREF13, which all feature two coupled gates, the mechanism of BIBREF2 has only one gate. All strategies proposed by BIBREF2 significantly improve performance, but first place is reached by a combination of the warm-start and gated techniques. BIBREF22 use an approach similar to the auxiliary approach of BIBREF2, but they compute the context vector only from the sentence immediately preceding the current source sentence. They then pass it to a dedicated encoder featuring a customized attention mechanism. BIBREF12 and BIBREF23 both extend the Transformer architecture BIBREF24 with a context encoder featuring self-attentional and feed-forward layers. Then, BIBREF12 combine the context representation with the source representation produced by the basic Transformer encoder via a gating mechanism. They do not modify the decoder part of the Transformer. BIBREF23 go one step further by passing the contextual information both to the encoder and the decoder. In both cases, they add a self-attention mechanism over the context representation. For the decoder though, they also replace the residual connection after the context self-attention with a gating mechanism, to limit the influence of the context information on the source information. One piece of work closely related to our study is BIBREF3. The authors also use a hierarchical attention architecture, where at level 1, each paragraph of a document is encoded by a dedicated encoder. All encoders share the same stacking bi-RNN architecture. Moreover, they communicate at each layer to produce context-aware annotations of the words in their paragraphs. More precisely, at a given layer of the stacking RNN, a given encoder is passed the average of the representations learned by the other encoders at the corresponding layer (like with CAHAN-SUM-$\mu $). This context vector is then combined with the hidden states and passed as input to the upper RNN layer. At level 2, the top RNN layer annotations are passed to a word attention mechanism followed by a paragraph attention mechanism. 
A major difference with our work is that the authors combine the encoder with a decoder, to perform abstractive summarization of long documents, whereas we only focus on the encoding part. The word and paragraph attentional decisions at level 2 are thus made by the decoder. Another significant difference is that the authors use reinforcement learning for training, instead of SGD. Context-aware models have also been proposed in other NLP domains. E.g., for spoken language understanding, BIBREF7 prepend and append the current utterance with two special word vectors respectively summarizing the $C$ preceding and following utterances (respectively), where $C$ is a hyperparameter. This indirectly initializes the hidden states of the left-to-right and right-to-left components of a bidirectional RNN, like with the warm-start approach of BIBREF2. On the other hand, BIBREF6 rely on a mechanism equivalent to LR-CAHAN-RNN. They find that it significantly boosts dialogue act classification accuracy. As discussed in section SECREF5, we hypothesize that CAHAN-RNN is not effective in our application because sentences in a document are not ordered in a temporal manner. <<</Related work>>> <<<Discussion and next steps>>> While bidirectional CAHAN-SUM systematically outperforms HAN, margins are modest. We attribute this to the fact that the datasets used in our experiments contain short documents (see Table TABREF29) featuring simple sentences. Thus, the superior expressiveness of CAHAN is not able to show. To address this issue, we plan in future work to experiment on datasets featuring long documents containing complex sentences. Moreover, the tasks of sentiment and topic classification do not require a deep understanding of the input documents. Even if a given document contains some complex sentences with multiple clauses and subtopics, capturing the polarity of only one simple, unambiguous sentence or pattern may be enough to accurately predict the category of the entire document (e.g., “hated”, “loved”, “definitely recommends”, “will never come back”, etc.). Thus, we hypothesize that when trained to solve such tasks, CAHAN does not learn to use its context-aware capabilities to the fullest extent. One solution, and promising area of future work, would consist in explicitly giving CAHAN knowledge about coverage, diversity, and redundancy. This could be done by modifying the sentence attention mechanism and/or by adding a term to the loss. Another natural next step is to experiment on tasks requiring a deeper understanding of text, such as end-to-end abstractive summarization. Some other ideas for improvement include combining CAHAN-SUM with CAHAN-RNN, and/or the mean and centroid vectors; for CAHAN-SUM, obtaining the centroid vector through a trainable mechanism rather than via pooling; and experimenting with a trainable matrix (instead of vector) in the self-attention at both level 1 and level 2, like in BIBREF25. Finally, the context vector could be seen as an external, general summary of the document, and be pre-computed offline by a dedicated encoder. <<</Discussion and next steps>>> <<<Conclusion>>> In this paper, we proposed several modifications of the HAN architecture that make the sentence encoder context-aware (CAHAN). Results show that taking context into account is beneficial. 
Specifically, the bidirectional version of the document encoder, which processes the documents forwards and backwards, using the preceding and following sentences as context, outperforms the HAN baseline on all datasets and is superior to the unidirectional variant. Moreover, the computational overhead is small. Experiments on tasks requiring a deeper understanding of the input documents should better highlight the superiority of CAHAN. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Experimental setup, Abstract" ], "type": "disordered_section" }
1908.06006
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding <<<Abstract>>> The Hierarchical Attention Network (HAN) has made great strides, but it suffers a major limitation: at level 1, each sentence is encoded in complete isolation. In this work, we propose and compare several modifications of HAN in which the sentence encoder is able to make context-aware attentional decisions (CAHAN). Furthermore, we propose a bidirectional document encoder that processes the document forwards and backwards, using the preceding and following sentences as context. Experiments on three large-scale sentiment and topic classification datasets show that the bidirectional version of CAHAN outperforms HAN everywhere, with only a modest increase in computation time. While results are promising, we expect the superiority of CAHAN to be even more evident on tasks requiring a deeper understanding of the input documents, such as abstractive summarization. Code is publicly available. <<</Abstract>>> <<<Introduction>>> Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$. One of the earliest and most influential examples is the Hierarchical Attention Network (HAN) of BIBREF5 (see Fig. FIGREF6 and section SECREF2). It is a two-level architecture, where at level 1, each sentence in the document is separately encoded by the same sentence encoder, resulting in a sequence of sentence vectors. That sequence is then processed at level 2 by the document encoder which returns a single vector representing the entire document. The sentence and document encoders are both self-attentional bidirectional Recurrent Neural Networks (RNNs), with different parameters. <<<Observed problem>>> HAN was highly successful and established new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. 
On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that it has already been covered. CAHAN is then able to cover the other topics in the document (“seafood”, “scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence). As another example, consider the edge case of a document containing the same sentence repeated several times, as shown in Fig. FIGREF3. With HAN, the exact same embedding is produced for each instantiation of the sentence, as a result of the context-blind self-attention mechanism always making the same alignment decisions. However, the context-aware sentence encoder of CAHAN allows it to extract complementary, rather than redundant, information from each instantiation of the sentence. This results in better coverage (“reasonably priced”, “arrived late”), in a richer document representation, and ultimately in a more accurate prediction (positive instead of very positive). One may argue that in basic HAN, the document encoder at level 2 already does capture some notion of context, by assigning importance scores to sentences. However, at level 2, the sentence vectors have already been formed, and it is too late to modify them. Since the document encoder can only rank the sentence representations, it cannot address issues like high redundancy. In that case, important subtopics or details in the document will not be covered, no matter the sentence scores. <<</Observed problem>>> <<<Context-aware HAN>>> In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context. The remainder of this paper is structured as follows. We start by formally introducing basic HAN (section SECREF2), we then explain our contributions (section SECREF3), and detail our experimental setup (section SECREF4). Finally, we interpret our results and list areas of future development (sections SECREF5 and SECREF7). Related work is reviewed in section SECREF6. <<</Context-aware HAN>>> <<</Introduction>>> <<<HAN>>> The baseline HAN model as introduced by BIBREF5 is shown in Fig. FIGREF6 along with our modifications (disregard the bold lines for the baseline). The sentence and document encoders, used respectively at level 1 and level 2, have different parameters but share the exact same architecture. Thus, in what follows, we only describe the sentence encoder in detail. <<<Notation>>> Next, we use boldface upper case for tensors, upper case for matrices, boldface lower case for vectors, and lower case for scalars. We define a document $\mathbf {X} \in \mathbb {R}^{N \times T_i \times d}$ as a sequence of $N$ sentences $(S_1, \dots , S_N)$. Each sentence $S_i$ is a sequence of $T_i$ $d$-dimensional word vectors $(\mathbf {x}_{i1}, \dots , \mathbf {x}_{iT_i}) \in \mathbb {R}^{T_i \times d}$.
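For illustration only (this sketch is not part of the quoted paper; the toy dimensions and values are assumptions), the notation above maps onto a zero-padded document tensor as follows in Python with PyTorch:

import torch

N, T, d = 4, 12, 100                     # sentences per document, max tokens per sentence, embedding size
sentence_lengths = [5, 12, 7, 9]         # the true lengths T_i before zero-padding

X = torch.zeros(N, T, d)                 # document tensor X, zero-padded along the token axis
for i, T_i in enumerate(sentence_lengths):
    X[i, :T_i] = torch.randn(T_i, d)     # stand-in for the word vectors x_i1, ..., x_iT_i

print(X.shape)                           # torch.Size([4, 12, 100])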
<<</Notation>>> <<<Sentence encoder>>> First, the sentence-level bidirectional RNN $f_s$ processes the input sentence $S_i$ and returns a sequence of $T_i$ $2d_s$-dimensional hidden states $(\mathbf {h}_{i1},\dots , \mathbf {h}_{iT_i}) \in \mathbb {R}^{T_i \times 2d_s}$. $f_s$ is composed of two non-stacking RNNs $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ with Gated Recurrent Units BIBREF8, respectively parsing $S_i$ from left to right and right to left: $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ have the same hidden layer dimensionality $d_s$, but different parameters. At each time step $t$, the word annotations they return are concatenated, producing $2d_s$-dimensional annotations that summarize the immediate context surrounding each word: Then, a self-attention mechanism computes the representation $\mathbf {s}_i$ of sentence $S_i$ as a weighted sum of its word annotations: Where the vector of attentional coefficients $\mathbf {\alpha }$ is a softmax-normalized version of the alignment vector $\mathbf {e}$, which itself is obtained by passing the word annotations through a dense layer (parameterized by $W_s \in \mathbb {R}^{2d_s\times 2d_s}$) and comparing the output with a trainable vector $\mathbf {u}_s \in \mathbb {R}^{2d_s}$: $\mathbf {u}_s$ is initialized randomly. It can be interpreted as a “super-word” whose vector contains the ideal combination of latent topics, on average. The closer the annotation of a word is to this ideal representation, the more attention that word will be given. The sentence encoder is applied to all sentences in document $\mathbf {X}$, producing a sequence of $N$ sentence vectors $(\mathbf {s_1},\dots ,\mathbf {s_N}) \in \mathbb {R}^{N\times 2d_s}$. <<</Sentence encoder>>> <<<Document encoder>>> The document encoder is a self-attentional bidirectional GRU-RNN, like the sentence encoder, but it has different parameters. The dimensionality of its hidden states is $2d_d$. The document encoder is applied only once, to the sequence of sentence vectors, to produce the sequence of sentence annotations $(\mathbf {h}_{1}, \dots , \mathbf {h}_{N})$. Then, a self-attention layer outputs the final document vector. <<</Document encoder>>> <<</HAN>>> <<<Proposed architecture: CAHAN>>> As was previously explained, each sentence is encoded independently by HAN, without considering any kind of contextual information. To solve this issue, we inject a context vector $\mathbf {c_i}$ into the self-attention mechanism, to guide the model during the computation of the word alignment coefficients. In effect, Eq. DISPLAY_FORM12 becomes: We propose two approaches for computing $\mathbf {c_i}$, namely CAHAN-SUM and CAHAN-RNN, shown as the two bolded connections in Fig. FIGREF6. <<<Summed context (CAHAN-SUM)>>> We introduce two settings: (1) left-to-right and (2) bidirectional. Whenever there is no preceding/following sentence, i.e., at the beginning/end of a document, the context vector is initialized with zeroes. <<<Left-to-right (LR)>>> In the LR case, the context vector is computed as the sum of the preceding sentence representations: <<</Left-to-right (LR)>>> <<<Bidirectional (BI)>>> In the BI case, we compute two context vectors, respectively by summing the representations of the sentences preceding and following the current sentence $S_i$. These two vectors are passed to two identical context-aware self-attention mechanisms (Eq. DISPLAY_FORM14) with different parameters.
The resulting forward and backward sentence representations are then processed respectively by the forward and backward RNNs of the document encoder at level 2, and the resulting annotations are concatenated to produce the final sentence annotations. CAHAN-SUM was inspired by the coverage vectors of seq2seq architectures, which have been shown very effective in addressing under(over)-translation in NMT BIBREF9, and repetition in summarization BIBREF10. Such coverage vectors are typically computed as the sum, over all previous decoder steps, of the attention distribution over the source tokens. However, in our case, we cannot keep track of the attention distribution history, since sentences are unique and cannot be aligned. This is why we work with sentence representations instead. <<</Bidirectional (BI)>>> <<<Centroid version (@!START@$\mu $@!END@)>>> $\overrightarrow{\mathbf {c}_i}$, as defined by Eq. DISPLAY_FORM17, grows larger in magnitude as $i$ increases (the sum has more and more terms), which can blur the alignment decisions for the sentences at the end of a document (LR case), or both at the end and beginning of a document, when reading forwards and backwards (BI case). Therefore, we also experiment with a centroid, rather than sum, context vector: <<</Centroid version (@!START@$\mu $@!END@)>>> <<</Summed context (CAHAN-SUM)>>> <<<Recurrent Context (CAHAN-RNN)>>> Here, we capitalize on the capability of RNNs, especially when equipped with LSTM or GRU units, to keep track of information over long time periods. We simply use as context vector the document encoder annotation at the preceding/following time step. That is, we have, in the LR case: By design, $\mathbf {h}_{i-1}$ summarizes the entire history $(\mathbf {s_1},\dots ,\mathbf {s_{i-1}})$ of sentence vectors, with a preference for the most recent time steps. If the sequence is very long though, even a GRU-RNN will eventually forget about the first elements. However, for the relatively short documents we experiment with (see Table TABREF29), we can assume the annotations of the document encoder to faithfully represent the entire sequence. <<</Recurrent Context (CAHAN-RNN)>>> <<<Gated context>>> In NMT, BIBREF11 introduced a gating mechanism to allow the decoder to balance the contribution of the source and target information in generating the next word. The same idea can be found in numerous other NMT studies, e.g., BIBREF2, BIBREF12, BIBREF13. Inspired by this line of research, we propose a modification of Eq. DISPLAY_FORM14 to let our model explicitly decide how much contextual information it should take into account in making its alignment decisions: $\mathbf {\lambda }$ is produced by a trainable mechanism taking as input the word annotations and the context vector: The sigmoid activation ensures that $\mathbf {\lambda }$ plays a filtering role, by squashing all its entries to $[0,1]$. The gate gives more expressiveness to the attention mechanism. Indeed, contextual information should not always be given the same importance, depending on the situation. E.g., when most of the document has been processed, context is likely to be very important, in order to limit redundancy and increase coverage. However, at the beginning of a document, or in the case of a very short or focused sentence, context might not be useful as only one single topic might be extractable from the sentence anyways. 
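For illustration only (not part of the quoted paper), here is a minimal PyTorch sketch of a gated, context-aware word attention consistent with the description above. The layer names are invented, and the exact placement of the coupled gate (lambda on the context term, 1 - lambda on the word term) is an assumption, since the excerpt does not spell out Eq. DISPLAY_FORM14 and Eq. DISPLAY_FORM25 in full:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAwareWordAttention(nn.Module):
    # Sketch of a gated, context-aware word attention for one sentence.
    # h: (T, dim) word annotations, c: (dim,) context vector, with dim = 2 * d_s.
    def __init__(self, dim):
        super().__init__()
        self.W_s = nn.Linear(dim, dim, bias=False)    # word-annotation term
        self.W_c = nn.Linear(dim, dim, bias=False)    # context term (the extra multiplication of CAHAN)
        self.W_l1 = nn.Linear(dim, dim, bias=False)   # gate input: word annotations
        self.W_l2 = nn.Linear(dim, dim, bias=False)   # gate input: context vector
        self.u_s = nn.Parameter(torch.randn(dim))     # trainable comparison vector ("super-word")

    def forward(self, h, c):
        lam = torch.sigmoid(self.W_l1(h) + self.W_l2(c))                  # gate in [0, 1], shape (T, dim)
        z = torch.tanh((1.0 - lam) * self.W_s(h) + lam * self.W_c(c))     # coupled-gate combination (assumed form)
        alpha = F.softmax(z @ self.u_s, dim=0)                            # attention weights over the T words
        return (alpha.unsqueeze(-1) * h).sum(dim=0)                       # sentence vector s_i

# Toy usage: d_s = 50, so annotations have dimension 100, as in the quoted setup.
att = ContextAwareWordAttention(100)
h = torch.randn(7, 100)      # 7 word annotations
c = torch.zeros(100)         # empty context (e.g., first sentence of a document)
s_i = att(h, c)              # (100,)

A CAHAN-SUM-style context vector c would then simply be the sum (or, in the centroid variant, the average) of the sentence vectors computed so far.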
From an optimization perspective, $\mathbf {\lambda }$ also has the desirable effect of regulating the magnitude of the context vector, preventing it from pushing the tanh to regions of very small gradient. This is especially useful with CAHAN-SUM, as in that case, $\mathbf {c}_i$ gets large towards the end/beginning of documents (forwards/backwards reading). <<</Gated context>>> <<<Complexity and sequentiality>>> Assuming that $d \sim 2d_s$ and that $d_s \sim d_d$, which holds in practice under reasonable settings, all matrix multiplications in the network have similar complexity, of order of magnitude $\mathcal {O}(d^2)$. Moreover, since we use GRU-RNNs, there are 6 matrix multiplication per encoder. This number is doubled, as we use bidirectional RNNs. Finally, the two self-attention mechanisms, one at each level, add two multiplications. Therefore, in the HAN baseline architecture, there are a total of 26 matrix multiplications (13 at each level). To that, CAHAN-SUM and CAHAN-RNN simply add one matrix multiplication ($W_c\mathbf {c}_i$ in Eq. DISPLAY_FORM14) in the LR case and two in the BI case. This corresponds to negligible 4% and 8% increases in total computational cost. On top of that, gating adds two multiplications in the LR case ($W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ in Eq. DISPLAY_FORM25) and four in the BI case. All in all, this represents three and six extra multiplications compared to basic HAN, resp. in the LR and BI cases. Again, this corresponds to small increases in computational cost, of 11.5% and 23%, respectively. However, with CAHAN-SUM, the representations of the preceding/following sentences are now required before computing the current sentence representation. With CAHAN-RNN, one even has to wait until the level 2 RNN has processed the preceding/following sentence vectors before being able to encode the current sentence. Therefore, the sentence encoding process, which was parallelizable with basic HAN due to independence, has now become a sequential process. This is why in practice, we observe slightly greater runtime increases, in the range 5-22% (see Table TABREF43). <<</Complexity and sequentiality>>> <<</Proposed architecture: CAHAN>>> <<<Experimental setup>>> <<<Datasets>>> We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced, for all datasets. <<</Datasets>>> <<<Model configuration>>> This subsection describes the preprocessing and hyperparameter setting we used. <<<Preprocessing and word embeddings>>> For preprocessing (and the HAN baseline), we used the publicly available implementation of BIBREF15, which closely follows the description and details given in the original HAN paper BIBREF5. More precisely, on each dataset, we randomly split the training set into training (90%) and validation (10%). Documents are then tokenized into sentences and sentences are tokenized into tokens. The tokens appearing less than 5 times in the corpus are replaced with a special UNK token. Finally, we pre-train our own word vectors with word2vec BIBREF16 on the training and validation splits. 
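For illustration only (not part of the quoted paper), a minimal sketch of the word-vector pre-training step, assuming gensim >= 4; the whitespace tokenizer and toy corpus are placeholders rather than the referenced implementation:

from gensim.models import Word2Vec

# Placeholder corpus: each document is a list of sentence strings.
docs = [
    ["the seafood was great", "the scallops arrived late"],
    ["terrible value", "will never come back"],
]
corpus = [s.lower().split() for doc in docs for s in doc]   # crude whitespace tokenizer (placeholder)

w2v = Word2Vec(
    sentences=corpus,
    vector_size=100,   # d = 100 on Yelp; d = 200 on Amazon and Yahoo!, per the text
    min_count=1,       # the described pipeline uses a threshold of 5; 1 here only so the toy corpus is not emptied
    workers=2,
)
w2v.save("word2vec_pretrained.model")
print(w2v.wv["seafood"].shape)   # (100,)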
<<</Preprocessing and word embeddings>>> <<<Hyperparameters>>> We do not tune any hyperparameter except the learning rate (see subsection SECREF35). We set the hidden layer dimensionality of the two RNN encoders to $d_s=50$ and $d_d=50$. Thus, the word annotations, sentence vectors, sentence annotations and document vector all have size 100. With regularization in mind, we set the dimensionality of the word embeddings to $d=200$ on the very large datasets (Amazon and Yahoo!) and to $d=100$ on Yelp, as shown in Table TABREF29. We also use a greater batch size of 128 on the large datasets, versus 64 on Yelp. <<</Hyperparameters>>> <<</Model configuration>>> <<<Training details>>> We zero-pad sentences and documents. Like in BIBREF5, to make the most out of each batch, we ensure they are as dense as possible by using a bucketing strategy. More precisely, we build each batch so that it contains documents of approximately the same size, in number of sentences. For regularization, we use dropout BIBREF17 with a rate of 0.5 at each layer. For classification, the document vectors are passed to a dense layer with softmax activation, whose dimensionality is equal to the number of categories to be predicted. Initialization has a significant impact on performance. To make sure the differences we measure are due to differences in the models and not in initial condition, we use the same initialization weights for each model. <<<SGD with cyclical learning rate>>> To minimize the categorical cross-entropy loss, we use the stochastic gradient descent optimizer with a triangular cyclical learning rate schedule and opposite triangular momentum schedule BIBREF18, BIBREF19. Following the authors' recommendations, we use a fixed $[0.85,0.95]$ momentum range, while for the learning rate, we perform a range test on the validation set, for each model, searching the $[0.001,3]$ range. With a triangular schedule, the learning rate linearly increases for a certain number of iterations (half-cycle), and then linearly decreases back to its initial value during the second half of the cycle. Cycles are repeated until training ends. High learning rate values make training faster, by allowing large updates and the use of greater batch sizes while keeping the amount of regularization constant. Also, the cyclical schedule injects beneficial stochastic noise to the gradient updates, which improves generalization BIBREF20. We use cycles of 12 epochs, and an early stopping strategy, monitoring the test loss, with a patience of slightly more than one cycle. We set the maximum number of epochs for all models to 50. <<</SGD with cyclical learning rate>>> <<</Training details>>> <<</Experimental setup>>> <<<Results>>> As can be seen in Table TABREF37, the best version of CAHAN (SUM-BI-$\Sigma $) consistently outperforms the HAN baseline, which shows that taking contextual information into account helps producing better document representations. Also, the two unidirectional variants (LR) slightly underperform the baseline and are clearly inferior to BI, which illustrates the value added by processing the document forwards and backwards, using preceding and following sentences as context. <<<Summing vs. averaging>>> In the unidirectional case, it is surprising to note that CAHAN-SUM-LR-$\mu $ is slightly better than CAHAN-SUM-LR-$\Sigma $, i.e., the centroid-based context vector (Eq. DISPLAY_FORM20) is better than the sum-based one (Eq. DISPLAY_FORM17). 
Indeed, from an information theory standpoint, it should be the opposite, as summing keeps track of all information whereas averaging is lossy. We hypothesize that towards the end of a document, the sum-based context vector grows large in magnitude, which perturbs the alignment decisions and deteriorates the quality of the sentence vectors. On the other hand, the centroid-based vector, which has constant magnitude, does not suffer from this issue. We further hypothesize that this issue is attenuated in the bidirectional case (CAHAN-SUM-BI-$\mu $ and CAHAN-SUM-BI-$\Sigma $ are on par) due to a counterbalancing phenomenon. Indeed, the last sentences processed by the left-to-right encoder are the first ones processed by the right-to-left encoder. Therefore, through concatenation, the overall quality of the sentence embeddings stays constant. <<</Summing vs. averaging>>> <<<Gating>>> As expected, gating improves performance, especially for the $\Sigma $ variants of CAHAN-SUM (and especially the LR ones). To be noted are significant boosts of 0.45 and 0.24 in accuracy respectively for CAHAN-SUM-LR-$\Sigma $ and CAHAN-SUM-BI-$\Sigma $ on Yelp. On Amazon, gating also offers CAHAN-SUM-LR-$\Sigma $ a nice 0.27 improvement. These positive results give a clue that regulating the magnitude of the context vector $\mathbf {c}_i$ is indeed beneficial. Nevertheless, gating also improves the performance of the $\mu $ variants of CAHAN, which do not suffer from the context vector magnitude issue. This shows that gating is also helpful via giving more expressiveness to the model. For instance, on Amazon, gating boosts the performance of CAHAN-SUM-BI-$\mu $ by 0.12. It is interesting to note that overall, gating is mostly effective on Yelp and Amazon. We attribute this to the difference in task. Sentiment analysis may rely more on contextual information than topic classification. <<</Gating>>> <<<CAHAN-RNN-BI>>> The consistently bad performance of CAHAN-RNN-BI is to be noted. This was unexpected, as an equivalent approach was used by BIBREF6 for dialogue act classification, with significant improvements. We hypothesize that in our case, CAHAN-RNN-BI is not effective because, unlike utterances in a speech transcription, sentences in a document are not ordered in a temporal fashion. In other words, sentences far away from the current sentence are not necessarily less relevant than closer sentences. Thus, considering each sentence equally is better than imposing an implicit time-decay via a RNN. <<</CAHAN-RNN-BI>>> <<<Runtimes>>> We compare the average runtime per iteration of some variants of CAHAN to that of basic HAN in Table TABREF43. For CAHAN-SUM-$\Sigma $, we observe that the unidirectional variant (LR) is 5.7% slower than basic HAN (37 vs. 35ms per iteration), whereas the bidirectional variant (BI) is 23% slower (43 vs. 35 ms). When gating, these number increase to 14.3% and 37% (40 and 48ms vs. 35ms). These differences are not far from our theoretical expectations (see subsection SECREF26), especially for LR. Indeed, recall that based on matrix multiplication counts, we had forecasted increases of 4% and 8% (11.5% and 23% when using gating), respectively for LR and BI. The gap for BI can be explained by a probable bottleneck in the implementation. CAHAN-RNN adds the same number of matrix multiplications as CAHAN-SUM, so we should in principle observe the same increases. 
However, as was explained in subsection SECREF26, with CAHAN-RNN we have to wait until the level 2 RNN has processed the preceding or preceding/following sentence vectors (LR or BI case) before being able to encode the current sentence. This explains the extra-time needed (40 vs. 37ms and 49 vs. 43ms). <<</Runtimes>>> <<</Results>>> <<<Related work>>> In what follows, we provide a review of the relevant literature. One should note that by context, in this paper, we do not refer to the intra-sentence or internal context vector of seq2seq encoders BIBREF21, BIBREF11, BIBREF13. Rather, we refer to the cross-sentence, external, or document-level context. A few studies only have focused on developing models that take that type of context into account. Most of these studies originate from NMT. We briefly describe them next. BIBREF2 obtain a global context vector by feeding a fixed number of the previous source sentences to HAN. They then compare two ways of injecting it into the encoder-decoder model. First, they propose a warm-start approach, in which the encoder and/or decoder hidden states are initialized with the context vector. Second, they experiment with an auxiliary strategy in which the intra-sentence context vector of the encoder is concatenated with the global context vector and passed either (i) directly to the decoder, or (ii) after going through a filtering gate. However, unlike our mechanism and that of BIBREF11, BIBREF12, BIBREF13, which all feature two coupled gates, the mechanism of BIBREF2 has only one gate. All strategies proposed by BIBREF2 significantly improve performance, but first place is reached by a combination of the warm-start and gated techniques. BIBREF22 use an approach similar to the auxiliary approach of BIBREF2, but they compute the context vector only from the sentence immediately preceding the current source sentence. They then pass it to a dedicated encoder featuring a customized attention mechanism. BIBREF12 and BIBREF23 both extend the Transformer architecture BIBREF24 with a context encoder featuring self-attentional and feed-forward layers. Then, BIBREF12 combine the context representation with the source representation produced by the basic Transformer encoder via a gating mechanism. They do not modify the decoder part of the Transformer. BIBREF23 go one step further by passing the contextual information both to the encoder and the decoder. In both cases, they add a self-attention mechanism over the context representation. For the decoder though, they also replace the residual connection after the context self-attention with a gating mechanism, to limit the influence of the context information on the source information. One piece of work closely related to our study is BIBREF3. The authors also use a hierarchical attention architecture, where at level 1, each paragraph of a document is encoded by a dedicated encoder. All encoders share the same stacking bi-RNN architecture. Moreover, they communicate at each layer to produce context-aware annotations of the words in their paragraphs. More precisely, at a given layer of the stacking RNN, a given encoder is passed the average of the representations learned by the other encoders at the corresponding layer (like with CAHAN-SUM-$\mu $). This context vector is then combined with the hidden states and passed as input to the upper RNN layer. At level 2, the top RNN layer annotations are passed to a word attention mechanism followed by a paragraph attention mechanism. 
A major difference with our work is that the authors combine the encoder with a decoder, to perform abstractive summarization of long documents, whereas we only focus on the encoding part. The word and paragraph attentional decisions at level 2 are thus made by the decoder. Another significant difference is that the authors use reinforcement learning for training, instead of SGD. Context-aware models have also been proposed in other NLP domains. E.g., for spoken language understanding, BIBREF7 prepend and append the current utterance with two special word vectors respectively summarizing the $C$ preceding and following utterances (respectively), where $C$ is a hyperparameter. This indirectly initializes the hidden states of the left-to-right and right-to-left components of a bidirectional RNN, like with the warm-start approach of BIBREF2. On the other hand, BIBREF6 rely on a mechanism equivalent to LR-CAHAN-RNN. They find that it significantly boosts dialogue act classification accuracy. As discussed in section SECREF5, we hypothesize that CAHAN-RNN is not effective in our application because sentences in a document are not ordered in a temporal manner. <<</Related work>>> <<<Discussion and next steps>>> While bidirectional CAHAN-SUM systematically outperforms HAN, margins are modest. We attribute this to the fact that the datasets used in our experiments contain short documents (see Table TABREF29) featuring simple sentences. Thus, the superior expressiveness of CAHAN is not able to show. To address this issue, we plan in future work to experiment on datasets featuring long documents containing complex sentences. Moreover, the tasks of sentiment and topic classification do not require a deep understanding of the input documents. Even if a given document contains some complex sentences with multiple clauses and subtopics, capturing the polarity of only one simple, unambiguous sentence or pattern may be enough to accurately predict the category of the entire document (e.g., “hated”, “loved”, “definitely recommends”, “will never come back”, etc.). Thus, we hypothesize that when trained to solve such tasks, CAHAN does not learn to use its context-aware capabilities to the fullest extent. One solution, and promising area of future work, would consist in explicitly giving CAHAN knowledge about coverage, diversity, and redundancy. This could be done by modifying the sentence attention mechanism and/or by adding a term to the loss. Another natural next step is to experiment on tasks requiring a deeper understanding of text, such as end-to-end abstractive summarization. Some other ideas for improvement include combining CAHAN-SUM with CAHAN-RNN, and/or the mean and centroid vectors; for CAHAN-SUM, obtaining the centroid vector through a trainable mechanism rather than via pooling; and experimenting with a trainable matrix (instead of vector) in the self-attention at both level 1 and level 2, like in BIBREF25. Finally, the context vector could be seen as an external, general summary of the document, and be pre-computed offline by a dedicated encoder. <<</Discussion and next steps>>> <<<Conclusion>>> In this paper, we proposed several modifications of the HAN architecture that make the sentence encoder context-aware (CAHAN). Results show that taking context into account is beneficial. 
Specifically, the bidirectional version of the document encoder, which processes the documents forwards and backwards, using the preceding and following sentences as context, outperforms the HAN baseline on all datasets and is superior to the unidirectional variant. Moreover, the computational overhead is small. Experiments on tasks requiring a deeper understanding of the input documents should better highlight the superiority of CAHAN. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related work, Conclusion" ], "type": "disordered_section" }
1908.06006
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding <<<Abstract>>> The Hierarchical Attention Network (HAN) has made great strides, but it suffers a major limitation: at level 1, each sentence is encoded in complete isolation. In this work, we propose and compare several modifications of HAN in which the sentence encoder is able to make context-aware attentional decisions (CAHAN). Furthermore, we propose a bidirectional document encoder that processes the document forwards and backwards, using the preceding and following sentences as context. Experiments on three large-scale sentiment and topic classification datasets show that the bidirectional version of CAHAN outperforms HAN everywhere, with only a modest increase in computation time. While results are promising, we expect the superiority of CAHAN to be even more evident on tasks requiring a deeper understanding of the input documents, such as abstractive summarization. Code is publicly available. <<</Abstract>>> <<<Introduction>>> Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$. One of the earliest and most influential examples is the Hierarchical Attention Network (HAN) of BIBREF5 (see Fig. FIGREF6 and section SECREF2). It is a two-level architecture, where at level 1, each sentence in the document is separately encoded by the same sentence encoder, resulting in a sequence of sentence vectors. That sequence is then processed at level 2 by the document encoder which returns a single vector representing the entire document. The sentence and document encoders are both self-attentional bidirectional Recurrent Neural Networks (RNNs), with different parameters. <<<Observed problem>>> HAN was highly successful and established new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. 
On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that it has already been covered. CAHAN is then able to cover the other topics in the document (“seafood”, “scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence). As another example, consider the edge case of a document containing the same sentence repeated several times, as shown in Fig. FIGREF3. With HAN, the exact same embedding is produced for each instantiation of the sentence, as a result of the context-blind self-attention mechanism always making the same alignment decisions. However, the context-aware sentence encoder of CAHAN allows it to extract complementary, rather than redundant, information from each instantiation of the sentence. This results in better coverage (“reasonably priced”, “arrived late”), in a richer document representation, and ultimately in a more accurate prediction (positive instead of very positive). One may argue that in basic HAN, the document encoder at level 2 already does capture some notion of context, by assigning importance scores to sentences. However, at level 2, the sentence vectors have already been formed, and it is too late to modify them. Since the document encoder can only rank the sentence representations, it cannot address issues like high redundancy. In that case, important subtopics or details in the document will not be covered, no matter the sentence scores. <<</Observed problem>>> <<<Context-aware HAN>>> In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context. The remainder of this paper is structured as follows. We start by formally introducing basic HAN (section SECREF2), we then explain our contributions (section SECREF3), and detail our experimental setup (section SECREF4). Finally, we interpret our results and list areas of future development (sections SECREF5 and SECREF7). Related work is reviewed in section SECREF6. <<</Context-aware HAN>>> <<</Introduction>>> <<<HAN>>> The baseline HAN model as introduced by BIBREF5 is shown in Fig. FIGREF6 along with our modifications (disregard the bold lines for the baseline). The sentence and document encoders, used respectively at level 1 and level 2, have different parameters but share the exact same architecture. Thus, in what follows, we only describe the sentence encoder in detail. <<<Notation>>> Next, we use boldface upper case for tensors, upper case for matrices, boldface lower case for vectors, and lower case for scalars. We define a document $\mathbf {X} \in \mathbb {R}^{N \times T_i \times d}$ as a sequence of $N$ sentences $(S_1, \dots , S_N)$. Each sentence $S_i$ is a sequence of $T_i$ $d$-dimensional word vectors $(\mathbf {x}_{i1}, \dots , \mathbf {x}_{iT_i}) \in \mathbb {R}^{T_i \times d}$.
<<</Notation>>> <<<Sentence encoder>>> First, the sentence-level bidirectional RNN $f_s$ processes the input sentence $S_i$ and returns a sequence of $T_i$ $2d_s$-dimensional hidden states $(\mathbf {h}_{i1},\dots , \mathbf {h}_{iT_i}) \in \mathbb {R}^{T_i \times 2d_s}$. $f_s$ is composed of two non-stacking RNNs $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ with Gated Recurrent Units BIBREF8, respectively parsing $S_i$ from left to right and right to left: $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ have the same hidden layer dimensionality $d_s$, but different parameters. At each time step $t$, the word annotations they return are concatenated, producing $2d_s$-dimensional annotations that summarize the immediate context surrounding each word: Then, a self-attention mechanism computes the representation $\mathbf {s}_i$ of sentence $S_i$ as a weighted sum of its word annotations: Where the vector of attentional coefficients $\mathbf {\alpha }$ is a softmax-normalized version of the alignment vector $\mathbf {e}$, which itself is obtained by passing the word annotations through a dense layer (parameterized by $W_s \in \mathbb {R}^{2d_s\times 2d_s}$) and comparing the output with a trainable vector $\mathbf {u}_s \in \mathbb {R}^{2d_s}$: $\mathbf {u}_s$ is initialized randomly. It can be interpreted as a “super-word” whose vector contains the ideal combination of latent topics, on average. The closer the annotation of a word is to this ideal representation, the more attention that word will be given. The sentence encoder is applied to all sentences in document $\mathbf {X}$, producing a sequence of $N$ sentence vectors $(\mathbf {s_1},\dots ,\mathbf {s_N}) \in \mathbb {R}^{N\times 2d_s}$. <<</Sentence encoder>>> <<<Document encoder>>> The document encoder is a self-attentional bidirectional GRU-RNN, like the sentence encoder, but it has different parameters. The dimensionality of its hidden states is $2d_d$. The document encoder is applied only once, to the sequence of sentence vectors, to produce the sequence of sentence annotations $(\mathbf {h}_{1}, \dots , \mathbf {h}_{N})$. Then, a self-attention layer outputs the final document vector. <<</Document encoder>>> <<</HAN>>> <<<Proposed architecture: CAHAN>>> As was previously explained, each sentence is encoded independently by HAN, without considering any kind of contextual information. To solve this issue, we inject a context vector $\mathbf {c_i}$ into the self-attention mechanism, to guide the model during the computation of the word alignment coefficients. In effect, Eq. DISPLAY_FORM12 becomes: We propose two approaches for computing $\mathbf {c_i}$, namely CAHAN-SUM and CAHAN-RNN, shown as the two bolded connections in Fig. FIGREF6. <<<Summed context (CAHAN-SUM)>>> We introduce two settings: (1) left-to-right and (2) bidirectional. Whenever there is no preceding/following sentence, i.e., at the beginning/end of a document, the context vector is initialized with zeroes. <<<Left-to-right (LR)>>> In the LR case, the context vector is computed as the sum of the preceding sentence representations: <<</Left-to-right (LR)>>> <<<Bidirectional (BI)>>> In the BI case, we compute two context vectors, respectively by summing the representations of the sentences preceding and following the current sentence $S_i$. These two vectors are passed to two identical context-aware self-attention mechanisms (Eq. DISPLAY_FORM14) with different parameters.
The resulting forward and backward sentence representations are then processed respectively by the forward and backward RNNs of the document encoder at level 2, and the resulting annotations are concatenated to produce the final sentence annotations. CAHAN-SUM was inspired by the coverage vectors of seq2seq architectures, which have been shown very effective in addressing under(over)-translation in NMT BIBREF9, and repetition in summarization BIBREF10. Such coverage vectors are typically computed as the sum, over all previous decoder steps, of the attention distribution over the source tokens. However, in our case, we cannot keep track of the attention distribution history, since sentences are unique and cannot be aligned. This is why we work with sentence representations instead. <<</Bidirectional (BI)>>> <<<Centroid version (@!START@$\mu $@!END@)>>> $\overrightarrow{\mathbf {c}_i}$, as defined by Eq. DISPLAY_FORM17, grows larger in magnitude as $i$ increases (the sum has more and more terms), which can blur the alignment decisions for the sentences at the end of a document (LR case), or both at the end and beginning of a document, when reading forwards and backwards (BI case). Therefore, we also experiment with a centroid, rather than sum, context vector: <<</Centroid version (@!START@$\mu $@!END@)>>> <<</Summed context (CAHAN-SUM)>>> <<<Recurrent Context (CAHAN-RNN)>>> Here, we capitalize on the capability of RNNs, especially when equipped with LSTM or GRU units, to keep track of information over long time periods. We simply use as context vector the document encoder annotation at the preceding/following time step. That is, we have, in the LR case: By design, $\mathbf {h}_{i-1}$ summarizes the entire history $(\mathbf {s_1},\dots ,\mathbf {s_{i-1}})$ of sentence vectors, with a preference for the most recent time steps. If the sequence is very long though, even a GRU-RNN will eventually forget about the first elements. However, for the relatively short documents we experiment with (see Table TABREF29), we can assume the annotations of the document encoder to faithfully represent the entire sequence. <<</Recurrent Context (CAHAN-RNN)>>> <<<Gated context>>> In NMT, BIBREF11 introduced a gating mechanism to allow the decoder to balance the contribution of the source and target information in generating the next word. The same idea can be found in numerous other NMT studies, e.g., BIBREF2, BIBREF12, BIBREF13. Inspired by this line of research, we propose a modification of Eq. DISPLAY_FORM14 to let our model explicitly decide how much contextual information it should take into account in making its alignment decisions: $\mathbf {\lambda }$ is produced by a trainable mechanism taking as input the word annotations and the context vector: The sigmoid activation ensures that $\mathbf {\lambda }$ plays a filtering role, by squashing all its entries to $[0,1]$. The gate gives more expressiveness to the attention mechanism. Indeed, contextual information should not always be given the same importance, depending on the situation. E.g., when most of the document has been processed, context is likely to be very important, in order to limit redundancy and increase coverage. However, at the beginning of a document, or in the case of a very short or focused sentence, context might not be useful as only one single topic might be extractable from the sentence anyways. 
From an optimization perspective, $\mathbf {\lambda }$ also has the desirable effect of regulating the magnitude of the context vector, preventing it from pushing the tanh to regions of very small gradient. This is especially useful with CAHAN-SUM, as in that case, $\mathbf {c}_i$ gets large towards the end/beginning of documents (forwards/backwards reading). <<</Gated context>>> <<<Complexity and sequentiality>>> Assuming that $d \sim 2d_s$ and that $d_s \sim d_d$, which holds in practice under reasonable settings, all matrix multiplications in the network have similar complexity, of order of magnitude $\mathcal {O}(d^2)$. Moreover, since we use GRU-RNNs, there are 6 matrix multiplication per encoder. This number is doubled, as we use bidirectional RNNs. Finally, the two self-attention mechanisms, one at each level, add two multiplications. Therefore, in the HAN baseline architecture, there are a total of 26 matrix multiplications (13 at each level). To that, CAHAN-SUM and CAHAN-RNN simply add one matrix multiplication ($W_c\mathbf {c}_i$ in Eq. DISPLAY_FORM14) in the LR case and two in the BI case. This corresponds to negligible 4% and 8% increases in total computational cost. On top of that, gating adds two multiplications in the LR case ($W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ in Eq. DISPLAY_FORM25) and four in the BI case. All in all, this represents three and six extra multiplications compared to basic HAN, resp. in the LR and BI cases. Again, this corresponds to small increases in computational cost, of 11.5% and 23%, respectively. However, with CAHAN-SUM, the representations of the preceding/following sentences are now required before computing the current sentence representation. With CAHAN-RNN, one even has to wait until the level 2 RNN has processed the preceding/following sentence vectors before being able to encode the current sentence. Therefore, the sentence encoding process, which was parallelizable with basic HAN due to independence, has now become a sequential process. This is why in practice, we observe slightly greater runtime increases, in the range 5-22% (see Table TABREF43). <<</Complexity and sequentiality>>> <<</Proposed architecture: CAHAN>>> <<<Experimental setup>>> <<<Datasets>>> We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced, for all datasets. <<</Datasets>>> <<<Model configuration>>> This subsection describes the preprocessing and hyperparameter setting we used. <<<Preprocessing and word embeddings>>> For preprocessing (and the HAN baseline), we used the publicly available implementation of BIBREF15, which closely follows the description and details given in the original HAN paper BIBREF5. More precisely, on each dataset, we randomly split the training set into training (90%) and validation (10%). Documents are then tokenized into sentences and sentences are tokenized into tokens. The tokens appearing less than 5 times in the corpus are replaced with a special UNK token. Finally, we pre-train our own word vectors with word2vec BIBREF16 on the training and validation splits. 
<<</Preprocessing and word embeddings>>> <<<Hyperparameters>>> We do not tune any hyperparameter except the learning rate (see subsection SECREF35). We set the hidden layer dimensionality of the two RNN encoders to $d_s=50$ and $d_d=50$. Thus, the word annotations, sentence vectors, sentence annotations and document vector all have size 100. With regularization in mind, we set the dimensionality of the word embeddings to $d=200$ on the very large datasets (Amazon and Yahoo!) and to $d=100$ on Yelp, as shown in Table TABREF29. We also use a greater batch size of 128 on the large datasets, versus 64 on Yelp. <<</Hyperparameters>>> <<</Model configuration>>> <<<Training details>>> We zero-pad sentences and documents. Like in BIBREF5, to make the most out of each batch, we ensure they are as dense as possible by using a bucketing strategy. More precisely, we build each batch so that it contains documents of approximately the same size, in number of sentences. For regularization, we use dropout BIBREF17 with a rate of 0.5 at each layer. For classification, the document vectors are passed to a dense layer with softmax activation, whose dimensionality is equal to the number of categories to be predicted. Initialization has a significant impact on performance. To make sure the differences we measure are due to differences in the models and not in initial condition, we use the same initialization weights for each model. <<<SGD with cyclical learning rate>>> To minimize the categorical cross-entropy loss, we use the stochastic gradient descent optimizer with a triangular cyclical learning rate schedule and opposite triangular momentum schedule BIBREF18, BIBREF19. Following the authors' recommendations, we use a fixed $[0.85,0.95]$ momentum range, while for the learning rate, we perform a range test on the validation set, for each model, searching the $[0.001,3]$ range. With a triangular schedule, the learning rate linearly increases for a certain number of iterations (half-cycle), and then linearly decreases back to its initial value during the second half of the cycle. Cycles are repeated until training ends. High learning rate values make training faster, by allowing large updates and the use of greater batch sizes while keeping the amount of regularization constant. Also, the cyclical schedule injects beneficial stochastic noise to the gradient updates, which improves generalization BIBREF20. We use cycles of 12 epochs, and an early stopping strategy, monitoring the test loss, with a patience of slightly more than one cycle. We set the maximum number of epochs for all models to 50. <<</SGD with cyclical learning rate>>> <<</Training details>>> <<</Experimental setup>>> <<<Results>>> As can be seen in Table TABREF37, the best version of CAHAN (SUM-BI-$\Sigma $) consistently outperforms the HAN baseline, which shows that taking contextual information into account helps producing better document representations. Also, the two unidirectional variants (LR) slightly underperform the baseline and are clearly inferior to BI, which illustrates the value added by processing the document forwards and backwards, using preceding and following sentences as context. <<<Summing vs. averaging>>> In the unidirectional case, it is surprising to note that CAHAN-SUM-LR-$\mu $ is slightly better than CAHAN-SUM-LR-$\Sigma $, i.e., the centroid-based context vector (Eq. DISPLAY_FORM20) is better than the sum-based one (Eq. DISPLAY_FORM17). 
Indeed, from an information theory standpoint, it should be the opposite, as summing keeps track of all information whereas averaging is lossy. We hypothesize that towards the end of a document, the sum-based context vector grows large in magnitude, which perturbs the alignment decisions and deteriorates the quality of the sentence vectors. On the other hand, the centroid-based vector, which has constant magnitude, does not suffer from this issue. We further hypothesize that this issue is attenuated in the bidirectional case (CAHAN-SUM-BI-$\mu $ and CAHAN-SUM-BI-$\Sigma $ are on par) due to a counterbalancing phenomenon. Indeed, the last sentences processed by the left-to-right encoder are the first ones processed by the right-to-left encoder. Therefore, through concatenation, the overall quality of the sentence embeddings stays constant. <<</Summing vs. averaging>>> <<<Gating>>> As expected, gating improves performance, especially for the $\Sigma $ variants of CAHAN-SUM (and especially the LR ones). To be noted are significant boosts of 0.45 and 0.24 in accuracy respectively for CAHAN-SUM-LR-$\Sigma $ and CAHAN-SUM-BI-$\Sigma $ on Yelp. On Amazon, gating also offers CAHAN-SUM-LR-$\Sigma $ a nice 0.27 improvement. These positive results give a clue that regulating the magnitude of the context vector $\mathbf {c}_i$ is indeed beneficial. Nevertheless, gating also improves the performance of the $\mu $ variants of CAHAN, which do not suffer from the context vector magnitude issue. This shows that gating is also helpful via giving more expressiveness to the model. For instance, on Amazon, gating boosts the performance of CAHAN-SUM-BI-$\mu $ by 0.12. It is interesting to note that overall, gating is mostly effective on Yelp and Amazon. We attribute this to the difference in task. Sentiment analysis may rely more on contextual information than topic classification. <<</Gating>>> <<<CAHAN-RNN-BI>>> The consistently bad performance of CAHAN-RNN-BI is to be noted. This was unexpected, as an equivalent approach was used by BIBREF6 for dialogue act classification, with significant improvements. We hypothesize that in our case, CAHAN-RNN-BI is not effective because, unlike utterances in a speech transcription, sentences in a document are not ordered in a temporal fashion. In other words, sentences far away from the current sentence are not necessarily less relevant than closer sentences. Thus, considering each sentence equally is better than imposing an implicit time-decay via a RNN. <<</CAHAN-RNN-BI>>> <<<Runtimes>>> We compare the average runtime per iteration of some variants of CAHAN to that of basic HAN in Table TABREF43. For CAHAN-SUM-$\Sigma $, we observe that the unidirectional variant (LR) is 5.7% slower than basic HAN (37 vs. 35ms per iteration), whereas the bidirectional variant (BI) is 23% slower (43 vs. 35 ms). When gating, these number increase to 14.3% and 37% (40 and 48ms vs. 35ms). These differences are not far from our theoretical expectations (see subsection SECREF26), especially for LR. Indeed, recall that based on matrix multiplication counts, we had forecasted increases of 4% and 8% (11.5% and 23% when using gating), respectively for LR and BI. The gap for BI can be explained by a probable bottleneck in the implementation. CAHAN-RNN adds the same number of matrix multiplications as CAHAN-SUM, so we should in principle observe the same increases. 
However, as was explained in subsection SECREF26, with CAHAN-RNN we have to wait until the level 2 RNN has processed the preceding or preceding/following sentence vectors (LR or BI case) before being able to encode the current sentence. This explains the extra-time needed (40 vs. 37ms and 49 vs. 43ms). <<</Runtimes>>> <<</Results>>> <<<Related work>>> In what follows, we provide a review of the relevant literature. One should note that by context, in this paper, we do not refer to the intra-sentence or internal context vector of seq2seq encoders BIBREF21, BIBREF11, BIBREF13. Rather, we refer to the cross-sentence, external, or document-level context. A few studies only have focused on developing models that take that type of context into account. Most of these studies originate from NMT. We briefly describe them next. BIBREF2 obtain a global context vector by feeding a fixed number of the previous source sentences to HAN. They then compare two ways of injecting it into the encoder-decoder model. First, they propose a warm-start approach, in which the encoder and/or decoder hidden states are initialized with the context vector. Second, they experiment with an auxiliary strategy in which the intra-sentence context vector of the encoder is concatenated with the global context vector and passed either (i) directly to the decoder, or (ii) after going through a filtering gate. However, unlike our mechanism and that of BIBREF11, BIBREF12, BIBREF13, which all feature two coupled gates, the mechanism of BIBREF2 has only one gate. All strategies proposed by BIBREF2 significantly improve performance, but first place is reached by a combination of the warm-start and gated techniques. BIBREF22 use an approach similar to the auxiliary approach of BIBREF2, but they compute the context vector only from the sentence immediately preceding the current source sentence. They then pass it to a dedicated encoder featuring a customized attention mechanism. BIBREF12 and BIBREF23 both extend the Transformer architecture BIBREF24 with a context encoder featuring self-attentional and feed-forward layers. Then, BIBREF12 combine the context representation with the source representation produced by the basic Transformer encoder via a gating mechanism. They do not modify the decoder part of the Transformer. BIBREF23 go one step further by passing the contextual information both to the encoder and the decoder. In both cases, they add a self-attention mechanism over the context representation. For the decoder though, they also replace the residual connection after the context self-attention with a gating mechanism, to limit the influence of the context information on the source information. One piece of work closely related to our study is BIBREF3. The authors also use a hierarchical attention architecture, where at level 1, each paragraph of a document is encoded by a dedicated encoder. All encoders share the same stacking bi-RNN architecture. Moreover, they communicate at each layer to produce context-aware annotations of the words in their paragraphs. More precisely, at a given layer of the stacking RNN, a given encoder is passed the average of the representations learned by the other encoders at the corresponding layer (like with CAHAN-SUM-$\mu $). This context vector is then combined with the hidden states and passed as input to the upper RNN layer. At level 2, the top RNN layer annotations are passed to a word attention mechanism followed by a paragraph attention mechanism. 
A major difference with our work is that the authors combine the encoder with a decoder, to perform abstractive summarization of long documents, whereas we only focus on the encoding part. The word and paragraph attentional decisions at level 2 are thus made by the decoder. Another significant difference is that the authors use reinforcement learning for training, instead of SGD. Context-aware models have also been proposed in other NLP domains. E.g., for spoken language understanding, BIBREF7 prepend and append the current utterance with two special word vectors respectively summarizing the $C$ preceding and following utterances (respectively), where $C$ is a hyperparameter. This indirectly initializes the hidden states of the left-to-right and right-to-left components of a bidirectional RNN, like with the warm-start approach of BIBREF2. On the other hand, BIBREF6 rely on a mechanism equivalent to LR-CAHAN-RNN. They find that it significantly boosts dialogue act classification accuracy. As discussed in section SECREF5, we hypothesize that CAHAN-RNN is not effective in our application because sentences in a document are not ordered in a temporal manner. <<</Related work>>> <<<Discussion and next steps>>> While bidirectional CAHAN-SUM systematically outperforms HAN, margins are modest. We attribute this to the fact that the datasets used in our experiments contain short documents (see Table TABREF29) featuring simple sentences. Thus, the superior expressiveness of CAHAN is not able to show. To address this issue, we plan in future work to experiment on datasets featuring long documents containing complex sentences. Moreover, the tasks of sentiment and topic classification do not require a deep understanding of the input documents. Even if a given document contains some complex sentences with multiple clauses and subtopics, capturing the polarity of only one simple, unambiguous sentence or pattern may be enough to accurately predict the category of the entire document (e.g., “hated”, “loved”, “definitely recommends”, “will never come back”, etc.). Thus, we hypothesize that when trained to solve such tasks, CAHAN does not learn to use its context-aware capabilities to the fullest extent. One solution, and promising area of future work, would consist in explicitly giving CAHAN knowledge about coverage, diversity, and redundancy. This could be done by modifying the sentence attention mechanism and/or by adding a term to the loss. Another natural next step is to experiment on tasks requiring a deeper understanding of text, such as end-to-end abstractive summarization. Some other ideas for improvement include combining CAHAN-SUM with CAHAN-RNN, and/or the mean and centroid vectors; for CAHAN-SUM, obtaining the centroid vector through a trainable mechanism rather than via pooling; and experimenting with a trainable matrix (instead of vector) in the self-attention at both level 1 and level 2, like in BIBREF25. Finally, the context vector could be seen as an external, general summary of the document, and be pre-computed offline by a dedicated encoder. <<</Discussion and next steps>>> <<<Conclusion>>> In this paper, we proposed several modifications of the HAN architecture that make the sentence encoder context-aware (CAHAN). Results show that taking context into account is beneficial. 
Specifically, the bidirectional version of the document encoder, which processes the documents forwards and backwards, using the preceding and following sentences as context, outperforms the HAN baseline on all datasets and is superior to the unidirectional variant. Moreover, the computational overhead is small. Experiments on tasks requiring a deeper understanding of the input documents should better highlight the superiority of CAHAN. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related work, Conclusion" ], "type": "disordered_section" }
1909.02776
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Features in Extractive Supervised Single-document Summarization: Case of Persian News <<<Abstract>>> Text summarization has been one of the most challenging areas of research in NLP. Much effort has been made to overcome this challenge by using either the abstractive or extractive methods. Extractive methods are more popular, due to their simplicity compared with the more elaborate abstractive methods. In extractive approaches, the system will not generate sentences. Instead, it learns how to score sentences within the text by using some textual features and subsequently selecting those with the highest-rank. Therefore, the core objective is ranking and it highly depends on the document. This dependency has been unnoticed by many state-of-the-art solutions. In this work, the features of the document are integrated into vectors of every sentence. In this way, the system becomes informed about the context, increases the precision of the learned model and consequently produces comprehensive and brief summaries. <<</Abstract>>> <<<Introduction>>> From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5. Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity. One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10. As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. 
As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost. We addressed this issue by taking certain features of documents into account, such as its length, topical category and so on in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also represent a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately. The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper. <<</Introduction>>> <<<Related works>>> Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18. Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8. Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8. A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, usually, the problem is reduced to a classification or regression problem, and machine learning techniques applied to the dataset of documents and their gold summaries represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are more popular sentence classification algorithms. The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, length of sentences BIBREF9, the ratio of nouns, the Ratio of Verbs, Ratio of Adjectives, Ratio of Adverbs BIBREF30, the Ratio of Numerical entities BIBREF31, BIBREF32 and Cue Words BIBREF28. 
Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33. However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies, dependent on the properties of context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.”. JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. However, their following statement implies that performance of weights is generally dependent to genre, that could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portion of text, for example, on the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section. All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and even sometimes is neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech based sentences features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e. they count the ratio of a syntactic unit e.g. verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document. Our work contributes to this line of research and includes document features in the learning and ranking processes. <<</Related works>>> <<<Incorporating Document Features>>> As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). 
Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation. Every supervised summarization has two phases. The first is the “Learning Phase”, a corpus of ideal summaries is used to train the system how to rank sentences. The second is the “Summarization Phase”, where the system applies its learning gained from the first phase, in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections. <<<Learning Phase>>> The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next. <<<Feature Extraction>>> Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. We entitle some as “document-aware” because they do implicitly represent some information about a document. However, other features have been used, that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into vectors of its sentences. The following sub-sections describe the features mentioned above in more detail. <<<Document-unaware Features>>> Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\frac{5}{5}$ for the first sentence, $\frac{4}{5}$ for the second, and so on to $\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\frac{1}{sentence\ number}$. With such a definition, we may have several sentences, for example, with position=$\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6). 
Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6). The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts. The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document. Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature. <<</Document-unaware Features>>> <<<Document-aware Features>>> Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware. Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. 
Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is: in which n is number of sentences in the document and $s_i$ is the i’th sentence of it. Values greater than 1 could be interpreted as long and vice versa. TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included details and formula which are in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware. POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of that occurring in a sentence. The formal definition of the new document-aware features are as follows: <<</Document-aware Features>>> <<<Explicit Document Features>>> In order to further investigate how effective are document specific features in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the result and discussion section (SECREF5): Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features such as cue words, maybe more weighted for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case even lower values of other features should be considered important. Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered. Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should be more or less, based on a document’s category. So it needs to be included. An overview of our feature set is represented by example in figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section. <<</Explicit Document Features>>> <<</Feature Extraction>>> <<<Target Assignment>>> Every feature vector needs a target value from which the system should learn how to rank sentences. The value of target is usually determined based on golden summaries. If a sentence is included in a majority of human-written extracts, its target is near to 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text. 
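To make the document-aware and explicit document features defined above concrete, here is a minimal sketch assuming NLTK for tokenization and part-of-speech tagging; the function name and feature keys are illustrative, and the cosine position and TF-ISF features are omitted for brevity.

from nltk import pos_tag, sent_tokenize, word_tokenize   # requires the NLTK punkt and tagger data

def document_aware_features(document_text):
    """Relative length, document-normalized noun ratio and explicit document features."""
    sentences = sent_tokenize(document_text)
    tokenized = [word_tokenize(s) for s in sentences]
    tagged = [pos_tag(sent) for sent in tokenized]
    n_doc_words = sum(len(sent) for sent in tokenized)
    avg_sent_len = n_doc_words / len(sentences)                  # average sentence length in this document
    doc_nouns = sum(1 for sent in tagged for _, tag in sent if tag.startswith("NN"))

    features = []
    for sent, tags in zip(tokenized, tagged):
        sent_nouns = sum(1 for _, tag in tags if tag.startswith("NN"))
        features.append({
            "relative_length": len(sent) / avg_sent_len,         # values > 1 mean long with respect to this document
            "doc_ratio_nouns": sent_nouns / doc_nouns if doc_nouns else 0.0,
            "document_sentences": len(sentences),                # explicit document features, repeated
            "document_words": n_doc_words,                       # in every sentence vector of the document
        })
    return features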
In such cases, where golden summaries are not strictly extractive, a measure of similarity is calculated between the sentence whose target we are looking for and each sentence of the ideal summaries. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment. <<</Target Assignment>>> <<<Training Model>>> Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for the target attribute, which is omitted in the test set. It might be required to perform scaling on certain columns, depending on the corresponding feature and its range of values. In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might bias the regression toward lower target values. To avoid this, dataset balancing is needed; that is, a portion of the not-included sentences is left aside and not fed to the learner model. Lastly, in this phase, the regression model should be fitted on the training set and evaluated on the test set as described in sections (SECREF4) and (SECREF5). <<</Training Model>>> <<</Learning Phase>>> <<<Summarization Phase>>> Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use the ranked sentences in order to create a summary. This summarization process can also be executed on the dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22). <<<Sentence Ranking>>> In comparison with the learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond to the sentences of the input text. If any scaling was performed on features during learning, it should be carried out here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one is predicted for each sentence. <<</Sentence Ranking>>> <<<Sentence Selection>>> By sorting sentences based on their ranks, the most appropriate sentences for inclusion in the summary are determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document. Another consideration is the cut-off length: how many of the top sentences should be selected for the summary? The answer could be as simple as a constant number or a percentage of the total sentences, or it could be determined by more advanced heuristics. We allowed the cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents with the same length as the golden summaries, which makes the comparison more equitable. <<</Sentence Selection>>> <<</Summarization Phase>>> <<<Evaluation Measures>>> In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and the summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2).
The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems. Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. Though, an exact zero for MSE is not desirable, because it is suspected to be due to over fitting. The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40. ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison. <<</Evaluation Measures>>> <<</Incorporating Document Features>>> <<<Experiments>>> Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25). A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research. <<<Dataset>>> We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences. <<</Dataset>>> <<<Extracting Features and Scaling>>> All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging is performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform into the same range. For the category feature which is nominal, the one-hot-encoding method applied and six flag features used instead. <<</Extracting Features and Scaling>>> <<</Experiments>>> <<<Results and Discussion>>> In section (SECREF22) MSE, R2 and ROUGE scores are remarked as evaluation measures. The results of our experiments are reported below in terms of these measures. 
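These measures can be computed with off-the-shelf tooling; a minimal sketch follows, assuming scikit-learn for the regression metrics and the rouge-score package for ROUGE (both are stand-ins, since the paper does not name its evaluation tooling, and ROUGE tokenization may need adaptation for Persian). The random baseline mirrors the random summarizer described above.

import random
from sklearn.metrics import mean_squared_error, r2_score
from rouge_score import rouge_scorer

def regression_metrics(y_true, y_pred):
    """MSE and coefficient of determination for the learned ranking model."""
    return mean_squared_error(y_true, y_pred), r2_score(y_true, y_pred)

def rouge_metrics(system_summary, reference_summary):
    """ROUGE-1, ROUGE-2 and ROUGE-L F-measures for one system/reference pair."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=False)
    scores = scorer.score(reference_summary, system_summary)
    return {name: s.fmeasure for name, s in scores.items()}

def random_baseline(sentences, k):
    """Random summarizer baseline: pick k sentences at random, keep original order."""
    idx = sorted(random.sample(range(len(sentences)), k))
    return " ".join(sentences[i] for i in idx)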
For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 shows and compares MSE and R2 reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the r2 score is increased. This means that using document-aware features leads to a more accurate learned model, proving our hypothesis about the relationship between document features and target ranks. ROUGE scores are displayed separately in terms of precision, recall and f-measure in Figures FIGREF29 to FIGREF31 respectively. F-measure scores are displayed in the figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2, confirm that document-aware features perform better than unaware features. These results are also interpretable from viewpoint of entropy-based decision tree methods. In learning phase, impurity of features within the whole dataset will be measured, and features having higher information gain will take place in upper levels of tree. But in summarization phase, within which decisions have to be made within a single document, impurity of those features may be low, causing less effective decisions and precision's. By incorporating document features, we help model to use different features (thus different trees) for different documents. Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer. <<</Results and Discussion>>> <<<Conclusion>>> This paper has discussed that in supervised extractive summarization, we cannot learn to rank by considering dataset sentences as independent educational examples. The rank of sentences is dependent on each other within a document. To overcome this issue, we suggested incorporating document features explicitly in the feature vector of sentences. We also suggested using features that take into account the properties of document. We named this kind of features as document-aware. Conducted experiments demonstrated the benefit of adding explicit document features, as well as document-aware features, both in model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if available. Another clue for study is measuring degree of entropy difference between dataset and single documents, in a standard dataset. Our source code is hosted on GitHub and is published for later reference, further experiments and reproducing results. A web interface and a Telegram bot is also implemented as demo. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Introduction" ], "type": "disordered_section" }
1909.09018
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Corporate IT-Support Help-Desk Process Hybrid-Automation Solution with Machine Learning Approach <<<Abstract>>> Comprehensive IT support teams in large scale organizations require more man power for handling engagement and requests of employees from different channels on a 24×7 basis. Automated email technical queries help desk is proposed to have instant real-time quick solutions and email categorisation. Email topic modelling with various machine learning, deep-learning approaches are compared with different features for a scalable, generalised solution along with sure-shot static rules. Email's title, body, attachment, OCR text, and some feature engineered custom features are given as input elements. XGBoost cascaded hierarchical models, Bi-LSTM model with word embeddings perform well showing 77.3 overall accuracy For the real world corporate email data set. By introducing the thresholding techniques, the overall automation system architecture provides 85.6 percentage of accuracy for real world corporate emails. Combination of quick fixes, static rules, ML categorization as a low cost inference solution reduces 81 percentage of the human effort in the process of automation and real time implementation. <<</Abstract>>> <<<Introduction>>> In an organization, the Information Technology (IT) support help desk operation is an important unit which handles the IT services of a business. Many large scale organizations would have a comprehensive IT support team to handle engagement and requests with employees on a 24$\times $7 basis. As any routinized tasks, most processes of the support help desk unit are considered repetitive in nature BIBREF0. Some may occur on a daily basis and others may occur more frequently. Many support engineers and agent would spend time on these repetitive task such as entering information to an application, resetting passwords, unlocking applications, creating credentials, activating services, preparing documentation, etc. The industry has now come realize that many repetitive business processes and tasks can be automated by using Robotic Process Automation (RPA) bots or robotic processes automotive software bots BIBREF1. The idea is to take the repetitive workload and hand it over to the RPA bots so that the employees could focus on more value adding tasks and decision making to the organization. The RPA bot would also help to reduce the human errors and make processes more efficient, which would finally intent results in cost saving and productivity increase. Our proposed automated approach is not only focused on automating repetitive tasks but also looking at historical data, enabling IT support desk process to identify unforeseen insights and patterns. Analyzing the data from various sources such as email communications, service request information generated from support ticketing applications and even conversational data from chats has helped us to identify the type of Service Requests (SR) raised and their respective solutions, as well as fixes done by the support agents. This approach has helped us create a classification model to identify the issue types and provide quick fixes and resolutions from the collected data. 
<<</Introduction>>> <<<Related Work>>> Wróblewska has conducted a project on RPA for unstructured data, focused on building an Artificial Intelligence (AI) system dedicated to tasks regarding the processing of formal documents used in different kinds of business procedures BIBREF2. The approach was introduced to automate the debt collection process, and possible applications of Machine Learning (ML) methods to improve the efficacy of these processes were described. In the case study done by Aguirre, it was concluded that companies should consider RPA to be more suitable for high-volume standardized tasks that are rule-driven, with no requirement for subjective judgement, creativity or interpretation skills BIBREF3. Back-office business processes such as accounts payable, accounts receivable, billing, travel and expenses, fixed assets and human resource administration are good candidates for RPA. Extreme multi-class and multi-label text classification problems are solved by the methodology named Hierarchical Label Set Expansion (HLSE) BIBREF4. This paper presents a deep learning architecture devoted to text classification, in which the data labels are regularized, the hierarchical label set is defined and different word embeddings are used BIBREF3, BIBREF5, BIBREF6. The traditional model performed better than the deep learning models for 8,841 emails collected over 3 years, because this particular classification task carried out by Haoran may not require the ordered sequence representation of tokens that deep learning models provide BIBREF7. This paper claims that a bagged voting model surpasses the performance of any individual model. In their survey, Kamran and other researchers analyzed text feature extraction BIBREF8, BIBREF9, dimensionality reduction methods, existing algorithms and techniques, evaluation methods and limitations BIBREF6, and advantages based on applications. Paramesh et al. and Seongwook et al. compare different classification algorithms such as multinomial naive Bayes, logistic regression, K-nearest neighbour and Support Vector Machines (SVM) on real-world IT infrastructure ticket classifier system data, using different evaluation metrics in their research BIBREF10, BIBREF11. They claimed that SVM performed well on all the data samples. Random forest (RF) or naive Bayes (NB) performed best in terms of correctly uncovering human intuitions. Hartmann et al. show in their study of 41 social media data sets covering major social media platforms that RF exhibits high performance in sentiment classification, while SVM never outperforms RF BIBREF12. Cognitive RPA can be undertaken efficiently as a low-cost solution with the Microsoft Azure Language Understanding Intelligent Service (LUIS) BIBREF8 and Azure Machine Learning Studio. Section III of this paper elaborates the process of automation. Section IV explains the email classification approach, and Section V illustrates the results and their respective analysis. Finally, Section VI contains the conclusion of the results. <<</Related Work>>> <<<Method>>> We propose a hybrid-process automation, in which the automation architecture is introduced while the manual process methodology is retained. Incoming emails that cannot be classified or understood by the knowledge base of the automation system are sent for manual classification; a high-level sketch of this routing is given below, and the individual stages are described in the following subsections.
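The sketch below, in Python, shows one way such routing can be expressed; the quick_fixer, static_rules, ml_classifier and threshold_for arguments are illustrative placeholders for the components described in the following subsections, not an exact transcription of the deployed system.

def route_email(email, quick_fixer, static_rules, ml_classifier, threshold_for):
    """Hybrid routing: try the automated stages in order, fall back to manual handling."""
    # Stage 1: LUIS-style quick fixes for known, high-priority intents.
    fix = quick_fixer(email)
    if fix is not None:
        return {"handled_by": "quick_fix", "resolution": fix}

    # Stage 2: sure-shot static rules and keywords.
    for rule in static_rules:
        if rule.matches(email):
            return {"handled_by": "static_rule", "categories": rule.categories}

    # Stage 3: ML classifier, accepted only above the per-category confidence threshold.
    category, probability = ml_classifier(email)
    if probability >= threshold_for(category):
        return {"handled_by": "ml_model", "categories": category}

    # Fallback: route to the manual classification queue.
    return {"handled_by": "manual"}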
<<<Manual Process>>> Providing technical support for large firms around the world has many challenges such as coordinating a vast amounts of mails and matching experts with employees who are in need of that expertise. When a technical issue is raised from a base level employee who works with applications, it is sent to the middle level and then to the higher level management of the respective regional branches throughout the hierarchical business architecture. Once it is approved by the branch manager, the issue email is forwarded to the technical coordinator to categorize the issue based on the priority level and technical requirements. Technical coordinator is responsible for the issues raised from the regional branches all over the world. Each regional branch is given a unique name such as New York, Sydney, London, Beijing and Toronto mentioned as Category1 (cat1). Category1 is identified by looking at the email address of the sender. Each regional branch has different plant applications that need different experts' consultation. Plant applications such as SAP, Darwin and infrastructure are mentioned as Category2 (cat2). The possible plot of the issue emails such as computer, manufacturing, userID, userunlock, financial, planning, purchasing issue generated by employees working in various plant applications across various regions are mentioned as Category3. Mapping table is created with the plants placed in the regional offices and the issues created by the plants. Category1, Category2, Category3 contains 84, 8 and 77 unique categories to be classified. Table I shows some examples for each categories. Once all three categories are finalized by the technical coordinator, email tickets will be created and assigned to the admin-groups. Respective technical people in the admin-groups will provide consultancy and solve the issues. Not only one technician can handle issues assigned to many different admin groups allocated to him, but also particular admin category can be handled by many technicians as a group as well. <<</Manual Process>>> <<<Proposed Automation System>>> In addition to replacing the technical coordinator role with AI bot to classify the raised email-issue tickets for respective admin groups, we propose instant quick fixes for some emails in an automated manner. High level workflow is described in Fig. 1. The AI bot has three main stages Quick fixes Static rules Email classifier All the incoming mails are preprocessed for better quality of inputs. Signatures, greetings, Uniform Resource Locators (URL) are removed. Key body is extracted from the forwarded mails by digging deep into the mail contents. If an email contains attachments, Optical Character Recognition (OCR) is used to extract the text contents from the attachments. <<<Quickfixes>>> Microsoft LUIS is used for instant quick fixes to provide solution based on prioritized emails. Fig. 2 shows the bot framework LUIS architecture that handles the quick fixes. Quick fixes are trained with most occurring samples that need quick solutions. LUIS is a model that artificial intelligence applications use to predict the intention of phrases spoke. There are 3 main key phases categorized as defining phase, training phase and publishing phase. Natural language is extremely flexible with LUIS. Intents are the type of defined words that are supported by utterances. An action the user wants to perform can be defined by an intent. Fig. 3 elaborates the intent matching breakdown mechanism. Entities are identified form the sentences. 
Suitable entity will be selected for generating tickets. If an incoming email is identified with the matched intent, cat1, cat2, cat3 will be allocated. Tickets will be created for admin-groups. The issue will be solved using automated messages through a chat bot solution. If the issue is solved, then the ticket will be closed by the quick fixes. If it is too complicated for the knowledge of the BOT then it creates ticket for adminGroup for the assistance of consultants. The emails identified by static rules and keywords are classified with the highest accuracy. The knowledge base of static rules and keywords are gathered using feature engineering and the insights from the technical coordinator. Remaining emails are sent to a complex ensemble machine learning model to be classified. Different types of emails are treated in a different way for efficient execution and to reduce the error. <<</Quickfixes>>> <<<First mail>>> Fig. 4 shows the flow of email categorization response for new incoming emails. If an incoming mail is a fresh new mail, it is initially subjected to cleaning. OCR will extract the texts from the attachment depending on the attachments' availability. Cat1 is assigned according to the knowledge of the database and sender details. According to the priority, emails are passed through LUIS. Thereafter if LUIS fails to solve the issue ML model will assign the cat2, cat3, Admin group for ticket creation. <<</First mail>>> <<<Forwarded mail>>> If incoming mail is a continuation of previous email, it is directly handled by LUIS question and answer self automated support. Then it follows the normal procedure of categorization. Fig. 5 clearly illustrates the flow. Fig. 6 explains the overall architecture. Static rules are mentioned as T-codes. Every categorized mails has to be assigned to respective consultant denoted as assignTo. <<</Forwarded mail>>> <<</Proposed Automation System>>> <<</Method>>> <<<Email classifier using machine learning>>> <<<Preprocessing>>> Preprocessing is necessary to increase the accuracy of a text classification model, because it avoids the classification model focusing attention on unwanted sentences and intents. Emails are fed into Microsoft-Bot services. It handles the headers and outputs the processed channel-data in JavaScript Object Notation (JSON) format. The channel data summarizes the information such like sender, receiver, body, subject and important metadata. Regular expression (regex) can be used for searching strings by defining a search pattern. Regex findings are created to remove unwanted words from the channel data queries for further processing of the emails. OCR has to be accurate in detecting text in an image. Microsoft-OCR is used for text recognition of this automation process. It extracts the recognized characters into a machine-usable character stream. Accuracy of the text recognition depends on the image quality such as blurry images, small text size, complex background, shadows and handwritten text. Since most of the image attachments are computer generated images and screen shots of error messages, Microsoft-OCR capabilities fits for the use case. 260000 emails are taken from past history. Extracted preprocessed data from Microsoft-Bot and OCR services are saved as Comma-separated Values (CSV) files. It is further processed before feeding to machine learning model. Unwanted words are removed from the context using nltk library stopwords and manually collected stopwords. URLs, punctuation marks are removed. 
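A minimal sketch of this cleaning step, together with the tokenization and lemmatization described next, assuming NLTK; the manual stop-word list and the regular expression are illustrative placeholders rather than the exact ones used in the system.

import re, string
from nltk.corpus import stopwords                  # requires the NLTK stopwords and wordnet data
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer

MANUAL_STOPWORDS = {"regards", "thanks", "hi", "dear"}    # illustrative, collected by inspection
STOPWORDS = set(stopwords.words("english")) | MANUAL_STOPWORDS
URL_RE = re.compile(r"https?://\S+|www\.\S+")
lemmatizer = WordNetLemmatizer()

def clean_text(text):
    """Remove URLs, punctuation and stop words, then lemmatize the remaining tokens."""
    text = URL_RE.sub(" ", text.lower())
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [lemmatizer.lemmatize(t) for t in word_tokenize(text) if t not in STOPWORDS]
    return " ".join(tokens)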
Every text field, i.e. title, body, OCR, from, to, CC, Cat1, Cat2, and Cat3, is tokenized, lemmatized and normalized. <<</Preprocessing>>> <<<Feature selection>>> The sender and receiver vary with time because of new employees' arrivals and old employees' resignations. In order to handle this fluctuating situation, the To, CC and From columns are dropped from the input data. Cat1 is known from the email address. The Cat2 and Cat3 values possible for a specific Cat1 are described in Table I. Cat2 and Cat3 are merged and defined as the target category for classification. Nearly 180 custom features are created based on plant availability and region mapping. They are embedded to capture the availability of the plant and the issue for the given region, denoted as Unique-Category. Based on the mapping table (an extension of Table I), the custom features indicate whether the plant application (Cat2) and the technical issue (Cat3) belong to the regional plant (Cat1). From the analysis of existing samples and the semantic knowledge of the technical coordinator, it is evident that the title of the email alone is not enough to predict the category; the attachment and body also play a major role. <<</Feature selection>>> <<<Machine learning approach>>> Even though a labelled data set was provided, the unsupervised K-Nearest Neighbor (KNN) clustering algorithm was initially applied to the data set to observe the possibility of clusters BIBREF13. Since the number of unique categories of the target field (Unique-Cat) is 77 and there are many common words between categories, the clustering was too ambiguous and did not show promising categories or accuracies. Supervised multi-class multi-label classification algorithms such as random forest and XGBoost are used as benchmarks. <<<Random forest>>> Random forest is a bagging algorithm, an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that receives the majority vote BIBREF14. <<</Random forest>>> <<<XGBoost>>> XGBoost is a decision-tree-based ensemble machine learning algorithm that uses a gradient boosting framework. It is commonly used in classification problems involving unstructured data BIBREF5. <<</XGBoost>>> <<<Hierarchical Model>>> Since the number of target labels is high, achieving high accuracy is difficult while keeping all the categories under the same feature selection method. Some categories perform well with a lower TF-IDF vectorizing range and higher n-gram features even though they showed lower accuracy in the overall single model. Therefore, hierarchical machine learning models are built: 31 categories are classified in the first model, and the remaining categories are grouped under a single low-accu label and predicted as one category. In the next model, the predicted low-accu samples are again classified into 47 categories. Comparatively, this hierarchical model works well since various feature selection methods are used for the various categories BIBREF5. <<</Hierarchical Model>>> <<</Machine learning approach>>> <<<Deep learning approach>>> <<<LSTM>>> Long short-term memory (LSTM) is an artificial neural network architecture which outperforms most classical machine learning algorithms. In the deep learning approach, feature selection is performed implicitly within the weight matrices of the neurons. A bidirectional LSTM is used with GloVe word embeddings to predict the categories BIBREF15.
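A minimal sketch of such a bidirectional LSTM classifier, assuming Keras/TensorFlow and a pre-computed GloVe embedding matrix; the layer sizes, sequence length and loss choice are illustrative rather than the exact values used here.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_bilstm(vocab_size, embedding_matrix, num_classes, embed_dim=100):
    """Bidirectional LSTM classifier over GloVe-initialised word embeddings."""
    model = models.Sequential([
        layers.Embedding(vocab_size, embed_dim,
                         embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
                         trainable=False),                       # frozen GloVe vectors
        layers.Bidirectional(layers.LSTM(128)),                  # reads the email forwards and backwards
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),         # one unit per merged Cat2+Cat3 target label
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",        # integer-encoded target labels
                  metrics=["accuracy"])
    return model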
<<</LSTM>>> <<<BERT>>> Even though BERT is the state-of-the-art model, for the considered data set it has not shown the expected gain in accuracy for the automationBIBREF16. When we consider the commercial model for inference, having a dedicated Kubernetes cluster with a high performance computer is costly. So complex models with high computation power are not considered as a better solution. <<</BERT>>> <<</Deep learning approach>>> <<<Threshold Selection>>> In order to classify only high-confidence emails, thresholds for each of the 73 categories are defined. For an incoming email, the probability of assigning each category will be calculated. The best category will be selected based on the maximum probability out of those 73 probabilities. By looking at the overall F-score, thresholding decisions are made. For the low accuracy categories (accuracy less than 75 percentage) a higher threshold level is set. For middle accuracy categories (accuracy less than 90 percentage) the minimum probability of correctly classified samples is taken. Higher accuracy categories (accuracy greater than 90 percentage) are left free with 0 threshold to classify all the incoming emails. The threshold technique acts as a bottleneck that decreases the number of samples classified by the autonomous process, but it increases the accuracy of the classified samples. The proposed thresholds satisfy the expected manual workload reduction as well as the accuracy percentage. In this paper Random Forest, XGBoost, LSTM, and Bidirectional LSTM with embeddings are analyzed with different input features. Complex deep-learning models such as transformers are not used in order to go for a low cost inference solution. The train set and test set are divided in an 80:20 ratio. Precision, recall, and F-score are taken as evaluation metrics. <<</Threshold Selection>>> <<</Email classifier using machine learning>>> <<<Results and Analysis>>> Automation of quick email replies for technical queries increases the overall efficiency of day-to-day processes by 3 percentage. Even though replacing the manual human email-assigner entirely with an AI bot is not possible, the automation ML model handles 61 percentage of incoming emails correctly. It is reducing massive human effort per day. For generalization purposes, the email's title, body, and attachments are considered in increasing accuracy, while ignoring sender, receiver, and carbon copy information. Table II shows the accuracy percentages for different models with different feature selection methods. An accuracy of 77.3 percentage was obtained without any thresholding techniques for the 73-class multi-class multi-label classification problem. With threshold adjustments for each category, it was increased to 85.6 percentage. Increasing threshold values results in reducing the number of mails classified by the ML-model. It is necessary for the ML-model to handle a limited number of high-confidence emails in order to ensure the promising accuracy levels. Feature engineering for custom feature selection and hierarchical cascade modelling increase the accuracy of the XGBoost machine learning model to reach the accuracy of the LSTM models. By cascading model1 (mod1) with 83.2 accuracy for 31 classes and model2 (mod2) with 71.1 accuracy for 47 low-accuracy classes, the overall hierarchical model exhibited 76.5 accuracy. All the accuracy terms refer to F-score. Selected keywords were used as static rules for accurate classification. Since accuracy is considerably satisfactory for the automation process, the system was deployed. 
The incorrectly classified mails are handled manually after the proper notification by the technical consultant. Fig. 7 Shows emails classified by the ML, static rules and manual process represented in daily basis. Incoming emails per day varies between 30 to 120. It clearly illustrates the effect of retraining. After 10-April, the percentages of emails classified per day was increased as well as accuracy. Fig. 8 shows average monthly analysis of incoming mails after each retraining. Average Monthly incoming mails are calculated as 1467 per month by considering a 4 months period. Initial training was done on august 2018 with 170,000 samples, model was able to classify nearly 50 percentage of incoming emails. After the second retraining on january 2019 with 200,000 sample, model classified 58 percentage of incoming mails per month. Third retraining was done on April 2019 with 260000 samples. Results stated that nearly 61 percentage of incoming mails were handled by ML model. Nearly 20 percentage of incoming emails were handled by static rules. Automation bot was proved to handle 81 percentage of the total incoming mails per month including ML and static rules, leading to efficient human-machine interaction, Instant problem solving and fast process. <<</Results and Analysis>>> <<<Conclusion>>> Quick fixes from Microsoft LUIS Bot framework provides instant solutions for the raised email queries. Input text features of emails such as title, body, attachment OCR text and the feature engineered custom features all together outperform for the considered real word email data set. Sure-shot Static rules and hierarchical machine learning model with statistically calculated threshold enhances the accuracy of the overall system to an acceptable percentage. Bidirectional LSTM with word embedding techniques are implemented finally with thresholding techniques. Less complex Machine learning models lead to low cost virtual machine solutions for serving. Robotic Process Automation Architecture reduces human effort of email support desk by 81 percentage while having a reasonable accuracy of 85.6 percentage. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Results and Analysis" ], "type": "disordered_section" }
1909.09018
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Corporate IT-Support Help-Desk Process Hybrid-Automation Solution with Machine Learning Approach <<<Abstract>>> Comprehensive IT support teams in large scale organizations require more man power for handling engagement and requests of employees from different channels on a 24×7 basis. Automated email technical queries help desk is proposed to have instant real-time quick solutions and email categorisation. Email topic modelling with various machine learning, deep-learning approaches are compared with different features for a scalable, generalised solution along with sure-shot static rules. Email's title, body, attachment, OCR text, and some feature engineered custom features are given as input elements. XGBoost cascaded hierarchical models, Bi-LSTM model with word embeddings perform well showing 77.3 overall accuracy For the real world corporate email data set. By introducing the thresholding techniques, the overall automation system architecture provides 85.6 percentage of accuracy for real world corporate emails. Combination of quick fixes, static rules, ML categorization as a low cost inference solution reduces 81 percentage of the human effort in the process of automation and real time implementation. <<</Abstract>>> <<<Introduction>>> In an organization, the Information Technology (IT) support help desk operation is an important unit which handles the IT services of a business. Many large scale organizations would have a comprehensive IT support team to handle engagement and requests with employees on a 24$\times $7 basis. As any routinized tasks, most processes of the support help desk unit are considered repetitive in nature BIBREF0. Some may occur on a daily basis and others may occur more frequently. Many support engineers and agent would spend time on these repetitive task such as entering information to an application, resetting passwords, unlocking applications, creating credentials, activating services, preparing documentation, etc. The industry has now come realize that many repetitive business processes and tasks can be automated by using Robotic Process Automation (RPA) bots or robotic processes automotive software bots BIBREF1. The idea is to take the repetitive workload and hand it over to the RPA bots so that the employees could focus on more value adding tasks and decision making to the organization. The RPA bot would also help to reduce the human errors and make processes more efficient, which would finally intent results in cost saving and productivity increase. Our proposed automated approach is not only focused on automating repetitive tasks but also looking at historical data, enabling IT support desk process to identify unforeseen insights and patterns. Analyzing the data from various sources such as email communications, service request information generated from support ticketing applications and even conversational data from chats has helped us to identify the type of Service Requests (SR) raised and their respective solutions, as well as fixes done by the support agents. This approach has helped us create a classification model to identify the issue types and provide quick fixes and resolutions from the collected data. 
<<</Introduction>>> <<<Related Work>>> Wróblewska has conducted a project on the topic of RPA of unstructured data which was focused on building an Artificial Intelligence (AI) system dedicated to tasks regarding the processing of formal documents used in different kinds of business procedures BIBREF2. His approach was introduced to automate the debt collecting process. Possible applications of Machine Learning (ML) methods to improve the efficacy of these processes were described. In the case study done by Aguirre, it was concluded that companies should consider RPA to be more suitable for high volume standardized tasks that are rule-driven, with no requirement for subjective judgement, creativity or interpretation skills BIBREF3. Back office business processes such as accounts payable, accounts receivable, billing, travel and expenses, fixed assets and human resource administration are good candidates for RPA. Extreme multi-class and multi-label text classification problems are solved by the methodology named Hierarchical Label Set Expansion (HLSE) BIBREF4. This paper presents the deep learning architecture devoted to text classification, in which the data labels are regularized, the hierarchical label set is defined and different word embeddings are used BIBREF3, BIBREF5, BIBREF6. The traditional model performed better than the deep learning models for 8,841 emails collected over 3 years, because this particular classification task carried out by Haoran may not require the ordered sequence representation of tokens that deep learning models provide BIBREF7. This paper claims that a bagged voting model surpasses the performance of any individual models. In their survey, Kamran and other researchers analyzed text feature extraction BIBREF8, BIBREF9, dimensionality reduction methods, existing algorithms and techniques, evaluation methods and limitations BIBREF6 and advantages based on applications. Paramesh et al. and Seongwook et al. compare the different classification algorithms such as multinomial naive bayes, logistic regression, K-Nearest neighbour and Support Vector Machines (SVM) on real-world IT infrastructure ticket classifier system data, using different evaluation metrics in their research BIBREF10, BIBREF11. They claimed that SVM performed well on all the data samples. Random forest (RF) or naive bayes (NB) performed best in terms of correctly uncovering human intuitions. Hartmann and his team present in their study that RF exhibits high performance in sentiment classification research done on 41 social media data sets covering major social media platforms, where the SVM never outperforms the RF BIBREF12. Cognitive RPA is efficiently undertaken as a low cost solution with Microsoft Azure Language Understanding Intelligent Service (LUIS) BIBREF8 and Azure machine learning studio. Section III of this paper elaborates the process of automation. Section IV explains the email classification approach, and Section V illustrates the results and their respective analysis. Finally, Section VI contains the conclusion of the results. <<</Related Work>>> <<<Method>>> We are proposing a hybrid-process automation, in which we are introducing the automation architecture while adopting the manual process methodology. Incoming emails that cannot be classified or understood by the knowledge base of the automation system will be sent for a manual classification solution. 
<<<Manual Process>>> Providing technical support for large firms around the world has many challenges such as coordinating a vast amounts of mails and matching experts with employees who are in need of that expertise. When a technical issue is raised from a base level employee who works with applications, it is sent to the middle level and then to the higher level management of the respective regional branches throughout the hierarchical business architecture. Once it is approved by the branch manager, the issue email is forwarded to the technical coordinator to categorize the issue based on the priority level and technical requirements. Technical coordinator is responsible for the issues raised from the regional branches all over the world. Each regional branch is given a unique name such as New York, Sydney, London, Beijing and Toronto mentioned as Category1 (cat1). Category1 is identified by looking at the email address of the sender. Each regional branch has different plant applications that need different experts' consultation. Plant applications such as SAP, Darwin and infrastructure are mentioned as Category2 (cat2). The possible plot of the issue emails such as computer, manufacturing, userID, userunlock, financial, planning, purchasing issue generated by employees working in various plant applications across various regions are mentioned as Category3. Mapping table is created with the plants placed in the regional offices and the issues created by the plants. Category1, Category2, Category3 contains 84, 8 and 77 unique categories to be classified. Table I shows some examples for each categories. Once all three categories are finalized by the technical coordinator, email tickets will be created and assigned to the admin-groups. Respective technical people in the admin-groups will provide consultancy and solve the issues. Not only one technician can handle issues assigned to many different admin groups allocated to him, but also particular admin category can be handled by many technicians as a group as well. <<</Manual Process>>> <<<Proposed Automation System>>> In addition to replacing the technical coordinator role with AI bot to classify the raised email-issue tickets for respective admin groups, we propose instant quick fixes for some emails in an automated manner. High level workflow is described in Fig. 1. The AI bot has three main stages Quick fixes Static rules Email classifier All the incoming mails are preprocessed for better quality of inputs. Signatures, greetings, Uniform Resource Locators (URL) are removed. Key body is extracted from the forwarded mails by digging deep into the mail contents. If an email contains attachments, Optical Character Recognition (OCR) is used to extract the text contents from the attachments. <<<Quickfixes>>> Microsoft LUIS is used for instant quick fixes to provide solution based on prioritized emails. Fig. 2 shows the bot framework LUIS architecture that handles the quick fixes. Quick fixes are trained with most occurring samples that need quick solutions. LUIS is a model that artificial intelligence applications use to predict the intention of phrases spoke. There are 3 main key phases categorized as defining phase, training phase and publishing phase. Natural language is extremely flexible with LUIS. Intents are the type of defined words that are supported by utterances. An action the user wants to perform can be defined by an intent. Fig. 3 elaborates the intent matching breakdown mechanism. Entities are identified form the sentences. 
Suitable entity will be selected for generating tickets. If an incoming email is identified with the matched intent, cat1, cat2, cat3 will be allocated. Tickets will be created for admin-groups. The issue will be solved using automated messages through a chat bot solution. If the issue is solved, then the ticket will be closed by the quick fixes. If it is too complicated for the knowledge of the BOT then it creates ticket for adminGroup for the assistance of consultants. The emails identified by static rules and keywords are classified with the highest accuracy. The knowledge base of static rules and keywords are gathered using feature engineering and the insights from the technical coordinator. Remaining emails are sent to a complex ensemble machine learning model to be classified. Different types of emails are treated in a different way for efficient execution and to reduce the error. <<</Quickfixes>>> <<<First mail>>> Fig. 4 shows the flow of email categorization response for new incoming emails. If an incoming mail is a fresh new mail, it is initially subjected to cleaning. OCR will extract the texts from the attachment depending on the attachments' availability. Cat1 is assigned according to the knowledge of the database and sender details. According to the priority, emails are passed through LUIS. Thereafter if LUIS fails to solve the issue ML model will assign the cat2, cat3, Admin group for ticket creation. <<</First mail>>> <<<Forwarded mail>>> If incoming mail is a continuation of previous email, it is directly handled by LUIS question and answer self automated support. Then it follows the normal procedure of categorization. Fig. 5 clearly illustrates the flow. Fig. 6 explains the overall architecture. Static rules are mentioned as T-codes. Every categorized mails has to be assigned to respective consultant denoted as assignTo. <<</Forwarded mail>>> <<</Proposed Automation System>>> <<</Method>>> <<<Email classifier using machine learning>>> <<<Preprocessing>>> Preprocessing is necessary to increase the accuracy of a text classification model, because it avoids the classification model focusing attention on unwanted sentences and intents. Emails are fed into Microsoft-Bot services. It handles the headers and outputs the processed channel-data in JavaScript Object Notation (JSON) format. The channel data summarizes the information such like sender, receiver, body, subject and important metadata. Regular expression (regex) can be used for searching strings by defining a search pattern. Regex findings are created to remove unwanted words from the channel data queries for further processing of the emails. OCR has to be accurate in detecting text in an image. Microsoft-OCR is used for text recognition of this automation process. It extracts the recognized characters into a machine-usable character stream. Accuracy of the text recognition depends on the image quality such as blurry images, small text size, complex background, shadows and handwritten text. Since most of the image attachments are computer generated images and screen shots of error messages, Microsoft-OCR capabilities fits for the use case. 260000 emails are taken from past history. Extracted preprocessed data from Microsoft-Bot and OCR services are saved as Comma-separated Values (CSV) files. It is further processed before feeding to machine learning model. Unwanted words are removed from the context using nltk library stopwords and manually collected stopwords. URLs, punctuation marks are removed. 
Every word is tokenized, lemmatized and normalized, i.e. title, body, OCR, from, to, CC, Cat1, Cat2, and Cat3. <<</Preprocessing>>> <<<Feature selection>>> Since the sender and receiver varies with time because of new employees' arrivals and old employees' resignations. In order to handle this fluctuating situation, To, CC, From columns are dropped from the input data. Cat1 is known from the email address. Cat2, Cat3 for specific cat1 is described in the table1. Cat2 and Cat3 are merged and defined as target category for classification. Nearly 180 custom features are created based on the plant's availability and region mapping. It is embedded to understand the availability of plant and the issue for the given region denoted as Unique-Category. Based on mapping table (extension of table1), custom features ensures that whether the plant application (cat2) and the technical issue (cat3) belongs to the regional plant (cat1). By the analysis made from the existing samples and from the human semantic knowledge of the technical coordinator, it is sensed that not only the title of the email is enough to predict the category, but also the attachment and body play a major role. <<</Feature selection>>> <<<Machine learning approach>>> Even though labelled data set was provided, initially unsupervised learning algorithm K-Nearest Neighbor (KNN) clustering was applied to the data set to observe the possibility of clusters BIBREF13. Since number of unique categories of the target field (Unique-Cat) is 77, there are many common words between categories. It is too confusing and not showing promising categories and accuracies. Multi class multi label classification supervised algorithms such as random forest, XGBoost are used as benchmarks. <<<Random forest>>> Random Forest is a bagging Algorithm, an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that has highest mean majority vote of the classesBIBREF14. <<</Random forest>>> <<<XGBoost>>> XGBoost is a decision-tree-based ensemble Machine Learning algorithm that uses a gradient boosting framework. It is used commonly in the classification problems involving unstructured dataBIBREF5. <<</XGBoost>>> <<<Hierarchical Model>>> Since the number of target labels are high, achieving the higher accuracy is difficult, while keeping all the categories under same feature selection method. Some categories performs well with lower TF-IDF vectorizing range and higher n grams features even though they showed lower accuracy in the overall single model. Therefore, hierarchical machine learning models are built to classify 31 categories in the first classification model and remaining categories are named as low-accu and predicted as one category. In the next model, predicted low-accu categories are again classified into 47 categories. Comparatively this hierarchical model works well since various feature selection methods are used for various categoriesBIBREF5. <<</Hierarchical Model>>> <<</Machine learning approach>>> <<<Deep learning approach>>> <<<LSTM>>> Long short term memory is an artificial neural network architecture which outperforms most of the machine learning algorithms. In the deep learning approach, feature selection is done in neurons weight matrix by itself. Bidirectional long short term memory (LSTM) is used with glove word embedding to predict the categoriesBIBREF15. 
<<</LSTM>>> <<<BERT>>> Even though BERT is the state-of-the-art model, for the considered data set it has not shown the expected gain in accuracy for the automationBIBREF16. When we consider the commercial model for inference, having a dedicated Kubernetes cluster with a high performance computer is costly. So complex models with high computation power are not considered as a better solution. <<</BERT>>> <<</Deep learning approach>>> <<<Threshold Selection>>> In order to classify only high-confidence emails, thresholds for each of the 73 categories are defined. For an incoming email, the probability of assigning each category will be calculated. The best category will be selected based on the maximum probability out of those 73 probabilities. By looking at the overall F-score, thresholding decisions are made. For the low accuracy categories (accuracy less than 75 percentage) a higher threshold level is set. For middle accuracy categories (accuracy less than 90 percentage) the minimum probability of correctly classified samples is taken. Higher accuracy categories (accuracy greater than 90 percentage) are left free with 0 threshold to classify all the incoming emails. The threshold technique acts as a bottleneck that decreases the number of samples classified by the autonomous process, but it increases the accuracy of the classified samples. The proposed thresholds satisfy the expected manual workload reduction as well as the accuracy percentage. In this paper Random Forest, XGBoost, LSTM, and Bidirectional LSTM with embeddings are analyzed with different input features. Complex deep-learning models such as transformers are not used in order to go for a low cost inference solution. The train set and test set are divided in an 80:20 ratio. Precision, recall, and F-score are taken as evaluation metrics. <<</Threshold Selection>>> <<</Email classifier using machine learning>>> <<<Results and Analysis>>> Automation of quick email replies for technical queries increases the overall efficiency of day-to-day processes by 3 percentage. Even though replacing the manual human email-assigner entirely with an AI bot is not possible, the automation ML model handles 61 percentage of incoming emails correctly. It is reducing massive human effort per day. For generalization purposes, the email's title, body, and attachments are considered in increasing accuracy, while ignoring sender, receiver, and carbon copy information. Table II shows the accuracy percentages for different models with different feature selection methods. An accuracy of 77.3 percentage was obtained without any thresholding techniques for the 73-class multi-class multi-label classification problem. With threshold adjustments for each category, it was increased to 85.6 percentage. Increasing threshold values results in reducing the number of mails classified by the ML-model. It is necessary for the ML-model to handle a limited number of high-confidence emails in order to ensure the promising accuracy levels. Feature engineering for custom feature selection and hierarchical cascade modelling increase the accuracy of the XGBoost machine learning model to reach the accuracy of the LSTM models. By cascading model1 (mod1) with 83.2 accuracy for 31 classes and model2 (mod2) with 71.1 accuracy for 47 low-accuracy classes, the overall hierarchical model exhibited 76.5 accuracy. All the accuracy terms refer to F-score. Selected keywords were used as static rules for accurate classification. Since accuracy is considerably satisfactory for the automation process, the system was deployed. 
The incorrectly classified mails are handled manually after the proper notification by the technical consultant. Fig. 7 Shows emails classified by the ML, static rules and manual process represented in daily basis. Incoming emails per day varies between 30 to 120. It clearly illustrates the effect of retraining. After 10-April, the percentages of emails classified per day was increased as well as accuracy. Fig. 8 shows average monthly analysis of incoming mails after each retraining. Average Monthly incoming mails are calculated as 1467 per month by considering a 4 months period. Initial training was done on august 2018 with 170,000 samples, model was able to classify nearly 50 percentage of incoming emails. After the second retraining on january 2019 with 200,000 sample, model classified 58 percentage of incoming mails per month. Third retraining was done on April 2019 with 260000 samples. Results stated that nearly 61 percentage of incoming mails were handled by ML model. Nearly 20 percentage of incoming emails were handled by static rules. Automation bot was proved to handle 81 percentage of the total incoming mails per month including ML and static rules, leading to efficient human-machine interaction, Instant problem solving and fast process. <<</Results and Analysis>>> <<<Conclusion>>> Quick fixes from Microsoft LUIS Bot framework provides instant solutions for the raised email queries. Input text features of emails such as title, body, attachment OCR text and the feature engineered custom features all together outperform for the considered real word email data set. Sure-shot Static rules and hierarchical machine learning model with statistically calculated threshold enhances the accuracy of the overall system to an acceptable percentage. Bidirectional LSTM with word embedding techniques are implemented finally with thresholding techniques. Less complex Machine learning models lead to low cost virtual machine solutions for serving. Robotic Process Automation Architecture reduces human effort of email support desk by 81 percentage while having a reasonable accuracy of 85.6 percentage. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Method" ], "type": "disordered_section" }
1909.09018
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Corporate IT-Support Help-Desk Process Hybrid-Automation Solution with Machine Learning Approach <<<Abstract>>> Comprehensive IT support teams in large scale organizations require more man power for handling engagement and requests of employees from different channels on a 24×7 basis. Automated email technical queries help desk is proposed to have instant real-time quick solutions and email categorisation. Email topic modelling with various machine learning, deep-learning approaches are compared with different features for a scalable, generalised solution along with sure-shot static rules. Email's title, body, attachment, OCR text, and some feature engineered custom features are given as input elements. XGBoost cascaded hierarchical models, Bi-LSTM model with word embeddings perform well showing 77.3 overall accuracy For the real world corporate email data set. By introducing the thresholding techniques, the overall automation system architecture provides 85.6 percentage of accuracy for real world corporate emails. Combination of quick fixes, static rules, ML categorization as a low cost inference solution reduces 81 percentage of the human effort in the process of automation and real time implementation. <<</Abstract>>> <<<Introduction>>> In an organization, the Information Technology (IT) support help desk operation is an important unit which handles the IT services of a business. Many large scale organizations would have a comprehensive IT support team to handle engagement and requests with employees on a 24$\times $7 basis. As any routinized tasks, most processes of the support help desk unit are considered repetitive in nature BIBREF0. Some may occur on a daily basis and others may occur more frequently. Many support engineers and agent would spend time on these repetitive task such as entering information to an application, resetting passwords, unlocking applications, creating credentials, activating services, preparing documentation, etc. The industry has now come realize that many repetitive business processes and tasks can be automated by using Robotic Process Automation (RPA) bots or robotic processes automotive software bots BIBREF1. The idea is to take the repetitive workload and hand it over to the RPA bots so that the employees could focus on more value adding tasks and decision making to the organization. The RPA bot would also help to reduce the human errors and make processes more efficient, which would finally intent results in cost saving and productivity increase. Our proposed automated approach is not only focused on automating repetitive tasks but also looking at historical data, enabling IT support desk process to identify unforeseen insights and patterns. Analyzing the data from various sources such as email communications, service request information generated from support ticketing applications and even conversational data from chats has helped us to identify the type of Service Requests (SR) raised and their respective solutions, as well as fixes done by the support agents. This approach has helped us create a classification model to identify the issue types and provide quick fixes and resolutions from the collected data. 
<<</Introduction>>> <<<Related Work>>> Wróblewska has conducted a project on the topic of RPA of unstructured data which was focused on building an Artificial Intelligence (AI) system dedicated to tasks regarding the processing of formal documents used in different kinds of business procedures BIBREF2. His approach was introduced to automate the debt collecting process. Possible applications of Machine Learning (ML) methods to improve the efficacy of these processes were described. In the case study done by Aguirre, it was concluded that companies should consider RPA to be more suitable for high volume standardized tasks that are rule-driven, with no requirement for subjective judgement, creativity or interpretation skills BIBREF3. Back office business processes such as accounts payable, accounts receivable, billing, travel and expenses, fixed assets and human resource administration are good candidates for RPA. Extreme multi-class and multi-label text classification problems are solved by the methodology named Hierarchical Label Set Expansion (HLSE) BIBREF4. This paper presents the deep learning architecture devoted to text classification, in which the data labels are regularized, the hierarchical label set is defined and different word embeddings are used BIBREF3, BIBREF5, BIBREF6. The traditional model performed better than the deep learning models for 8,841 emails collected over 3 years, because this particular classification task carried out by Haoran may not require the ordered sequence representation of tokens that deep learning models provide BIBREF7. This paper claims that a bagged voting model surpasses the performance of any individual models. In their survey, Kamran and other researchers analyzed text feature extraction BIBREF8, BIBREF9, dimensionality reduction methods, existing algorithms and techniques, evaluation methods and limitations BIBREF6 and advantages based on applications. Paramesh et al. and Seongwook et al. compare the different classification algorithms such as multinomial naive bayes, logistic regression, K-Nearest neighbour and Support Vector Machines (SVM) on real-world IT infrastructure ticket classifier system data, using different evaluation metrics in their research BIBREF10, BIBREF11. They claimed that SVM performed well on all the data samples. Random forest (RF) or naive bayes (NB) performed best in terms of correctly uncovering human intuitions. Hartmann and his team present in their study that RF exhibits high performance in sentiment classification research done on 41 social media data sets covering major social media platforms, where the SVM never outperforms the RF BIBREF12. Cognitive RPA is efficiently undertaken as a low cost solution with Microsoft Azure Language Understanding Intelligent Service (LUIS) BIBREF8 and Azure machine learning studio. Section III of this paper elaborates the process of automation. Section IV explains the email classification approach, and Section V illustrates the results and their respective analysis. Finally, Section VI contains the conclusion of the results. <<</Related Work>>> <<<Method>>> We are proposing a hybrid-process automation, in which we are introducing the automation architecture while adopting the manual process methodology. Incoming emails that cannot be classified or understood by the knowledge base of the automation system will be sent for a manual classification solution. 
<<<Manual Process>>> Providing technical support for large firms around the world has many challenges such as coordinating a vast amounts of mails and matching experts with employees who are in need of that expertise. When a technical issue is raised from a base level employee who works with applications, it is sent to the middle level and then to the higher level management of the respective regional branches throughout the hierarchical business architecture. Once it is approved by the branch manager, the issue email is forwarded to the technical coordinator to categorize the issue based on the priority level and technical requirements. Technical coordinator is responsible for the issues raised from the regional branches all over the world. Each regional branch is given a unique name such as New York, Sydney, London, Beijing and Toronto mentioned as Category1 (cat1). Category1 is identified by looking at the email address of the sender. Each regional branch has different plant applications that need different experts' consultation. Plant applications such as SAP, Darwin and infrastructure are mentioned as Category2 (cat2). The possible plot of the issue emails such as computer, manufacturing, userID, userunlock, financial, planning, purchasing issue generated by employees working in various plant applications across various regions are mentioned as Category3. Mapping table is created with the plants placed in the regional offices and the issues created by the plants. Category1, Category2, Category3 contains 84, 8 and 77 unique categories to be classified. Table I shows some examples for each categories. Once all three categories are finalized by the technical coordinator, email tickets will be created and assigned to the admin-groups. Respective technical people in the admin-groups will provide consultancy and solve the issues. Not only one technician can handle issues assigned to many different admin groups allocated to him, but also particular admin category can be handled by many technicians as a group as well. <<</Manual Process>>> <<<Proposed Automation System>>> In addition to replacing the technical coordinator role with AI bot to classify the raised email-issue tickets for respective admin groups, we propose instant quick fixes for some emails in an automated manner. High level workflow is described in Fig. 1. The AI bot has three main stages Quick fixes Static rules Email classifier All the incoming mails are preprocessed for better quality of inputs. Signatures, greetings, Uniform Resource Locators (URL) are removed. Key body is extracted from the forwarded mails by digging deep into the mail contents. If an email contains attachments, Optical Character Recognition (OCR) is used to extract the text contents from the attachments. <<<Quickfixes>>> Microsoft LUIS is used for instant quick fixes to provide solution based on prioritized emails. Fig. 2 shows the bot framework LUIS architecture that handles the quick fixes. Quick fixes are trained with most occurring samples that need quick solutions. LUIS is a model that artificial intelligence applications use to predict the intention of phrases spoke. There are 3 main key phases categorized as defining phase, training phase and publishing phase. Natural language is extremely flexible with LUIS. Intents are the type of defined words that are supported by utterances. An action the user wants to perform can be defined by an intent. Fig. 3 elaborates the intent matching breakdown mechanism. Entities are identified form the sentences. 
Suitable entity will be selected for generating tickets. If an incoming email is identified with the matched intent, cat1, cat2, cat3 will be allocated. Tickets will be created for admin-groups. The issue will be solved using automated messages through a chat bot solution. If the issue is solved, then the ticket will be closed by the quick fixes. If it is too complicated for the knowledge of the BOT then it creates ticket for adminGroup for the assistance of consultants. The emails identified by static rules and keywords are classified with the highest accuracy. The knowledge base of static rules and keywords are gathered using feature engineering and the insights from the technical coordinator. Remaining emails are sent to a complex ensemble machine learning model to be classified. Different types of emails are treated in a different way for efficient execution and to reduce the error. <<</Quickfixes>>> <<<First mail>>> Fig. 4 shows the flow of email categorization response for new incoming emails. If an incoming mail is a fresh new mail, it is initially subjected to cleaning. OCR will extract the texts from the attachment depending on the attachments' availability. Cat1 is assigned according to the knowledge of the database and sender details. According to the priority, emails are passed through LUIS. Thereafter if LUIS fails to solve the issue ML model will assign the cat2, cat3, Admin group for ticket creation. <<</First mail>>> <<<Forwarded mail>>> If incoming mail is a continuation of previous email, it is directly handled by LUIS question and answer self automated support. Then it follows the normal procedure of categorization. Fig. 5 clearly illustrates the flow. Fig. 6 explains the overall architecture. Static rules are mentioned as T-codes. Every categorized mails has to be assigned to respective consultant denoted as assignTo. <<</Forwarded mail>>> <<</Proposed Automation System>>> <<</Method>>> <<<Email classifier using machine learning>>> <<<Preprocessing>>> Preprocessing is necessary to increase the accuracy of a text classification model, because it avoids the classification model focusing attention on unwanted sentences and intents. Emails are fed into Microsoft-Bot services. It handles the headers and outputs the processed channel-data in JavaScript Object Notation (JSON) format. The channel data summarizes the information such like sender, receiver, body, subject and important metadata. Regular expression (regex) can be used for searching strings by defining a search pattern. Regex findings are created to remove unwanted words from the channel data queries for further processing of the emails. OCR has to be accurate in detecting text in an image. Microsoft-OCR is used for text recognition of this automation process. It extracts the recognized characters into a machine-usable character stream. Accuracy of the text recognition depends on the image quality such as blurry images, small text size, complex background, shadows and handwritten text. Since most of the image attachments are computer generated images and screen shots of error messages, Microsoft-OCR capabilities fits for the use case. 260000 emails are taken from past history. Extracted preprocessed data from Microsoft-Bot and OCR services are saved as Comma-separated Values (CSV) files. It is further processed before feeding to machine learning model. Unwanted words are removed from the context using nltk library stopwords and manually collected stopwords. URLs, punctuation marks are removed. 
Every word is tokenized, lemmatized and normalized, i.e. title, body, OCR, from, to, CC, Cat1, Cat2, and Cat3. <<</Preprocessing>>> <<<Feature selection>>> Since the sender and receiver varies with time because of new employees' arrivals and old employees' resignations. In order to handle this fluctuating situation, To, CC, From columns are dropped from the input data. Cat1 is known from the email address. Cat2, Cat3 for specific cat1 is described in the table1. Cat2 and Cat3 are merged and defined as target category for classification. Nearly 180 custom features are created based on the plant's availability and region mapping. It is embedded to understand the availability of plant and the issue for the given region denoted as Unique-Category. Based on mapping table (extension of table1), custom features ensures that whether the plant application (cat2) and the technical issue (cat3) belongs to the regional plant (cat1). By the analysis made from the existing samples and from the human semantic knowledge of the technical coordinator, it is sensed that not only the title of the email is enough to predict the category, but also the attachment and body play a major role. <<</Feature selection>>> <<<Machine learning approach>>> Even though labelled data set was provided, initially unsupervised learning algorithm K-Nearest Neighbor (KNN) clustering was applied to the data set to observe the possibility of clusters BIBREF13. Since number of unique categories of the target field (Unique-Cat) is 77, there are many common words between categories. It is too confusing and not showing promising categories and accuracies. Multi class multi label classification supervised algorithms such as random forest, XGBoost are used as benchmarks. <<<Random forest>>> Random Forest is a bagging Algorithm, an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that has highest mean majority vote of the classesBIBREF14. <<</Random forest>>> <<<XGBoost>>> XGBoost is a decision-tree-based ensemble Machine Learning algorithm that uses a gradient boosting framework. It is used commonly in the classification problems involving unstructured dataBIBREF5. <<</XGBoost>>> <<<Hierarchical Model>>> Since the number of target labels are high, achieving the higher accuracy is difficult, while keeping all the categories under same feature selection method. Some categories performs well with lower TF-IDF vectorizing range and higher n grams features even though they showed lower accuracy in the overall single model. Therefore, hierarchical machine learning models are built to classify 31 categories in the first classification model and remaining categories are named as low-accu and predicted as one category. In the next model, predicted low-accu categories are again classified into 47 categories. Comparatively this hierarchical model works well since various feature selection methods are used for various categoriesBIBREF5. <<</Hierarchical Model>>> <<</Machine learning approach>>> <<<Deep learning approach>>> <<<LSTM>>> Long short term memory is an artificial neural network architecture which outperforms most of the machine learning algorithms. In the deep learning approach, feature selection is done in neurons weight matrix by itself. Bidirectional long short term memory (LSTM) is used with glove word embedding to predict the categoriesBIBREF15. 
<<</LSTM>>> <<<BERT>>> Even though BERT is the state-of-the-art model, for the considered data set it has not shown the expected gain in accuracy for the automationBIBREF16. When we consider the commercial model for inference, having a dedicated Kubernetes cluster with a high performance computer is costly. So complex models with high computation power are not considered as a better solution. <<</BERT>>> <<</Deep learning approach>>> <<<Threshold Selection>>> In order to classify only high-confidence emails, thresholds for each of the 73 categories are defined. For an incoming email, the probability of assigning each category will be calculated. The best category will be selected based on the maximum probability out of those 73 probabilities. By looking at the overall F-score, thresholding decisions are made. For the low accuracy categories (accuracy less than 75 percentage) a higher threshold level is set. For middle accuracy categories (accuracy less than 90 percentage) the minimum probability of correctly classified samples is taken. Higher accuracy categories (accuracy greater than 90 percentage) are left free with 0 threshold to classify all the incoming emails. The threshold technique acts as a bottleneck that decreases the number of samples classified by the autonomous process, but it increases the accuracy of the classified samples. The proposed thresholds satisfy the expected manual workload reduction as well as the accuracy percentage. In this paper Random Forest, XGBoost, LSTM, and Bidirectional LSTM with embeddings are analyzed with different input features. Complex deep-learning models such as transformers are not used in order to go for a low cost inference solution. The train set and test set are divided in an 80:20 ratio. Precision, recall, and F-score are taken as evaluation metrics. <<</Threshold Selection>>> <<</Email classifier using machine learning>>> <<<Results and Analysis>>> Automation of quick email replies for technical queries increases the overall efficiency of day-to-day processes by 3 percentage. Even though replacing the manual human email-assigner entirely with an AI bot is not possible, the automation ML model handles 61 percentage of incoming emails correctly. It is reducing massive human effort per day. For generalization purposes, the email's title, body, and attachments are considered in increasing accuracy, while ignoring sender, receiver, and carbon copy information. Table II shows the accuracy percentages for different models with different feature selection methods. An accuracy of 77.3 percentage was obtained without any thresholding techniques for the 73-class multi-class multi-label classification problem. With threshold adjustments for each category, it was increased to 85.6 percentage. Increasing threshold values results in reducing the number of mails classified by the ML-model. It is necessary for the ML-model to handle a limited number of high-confidence emails in order to ensure the promising accuracy levels. Feature engineering for custom feature selection and hierarchical cascade modelling increase the accuracy of the XGBoost machine learning model to reach the accuracy of the LSTM models. By cascading model1 (mod1) with 83.2 accuracy for 31 classes and model2 (mod2) with 71.1 accuracy for 47 low-accuracy classes, the overall hierarchical model exhibited 76.5 accuracy. All the accuracy terms refer to F-score. Selected keywords were used as static rules for accurate classification. Since accuracy is considerably satisfactory for the automation process, the system was deployed. 
The incorrectly classified mails are handled manually after the proper notification by the technical consultant. Fig. 7 Shows emails classified by the ML, static rules and manual process represented in daily basis. Incoming emails per day varies between 30 to 120. It clearly illustrates the effect of retraining. After 10-April, the percentages of emails classified per day was increased as well as accuracy. Fig. 8 shows average monthly analysis of incoming mails after each retraining. Average Monthly incoming mails are calculated as 1467 per month by considering a 4 months period. Initial training was done on august 2018 with 170,000 samples, model was able to classify nearly 50 percentage of incoming emails. After the second retraining on january 2019 with 200,000 sample, model classified 58 percentage of incoming mails per month. Third retraining was done on April 2019 with 260000 samples. Results stated that nearly 61 percentage of incoming mails were handled by ML model. Nearly 20 percentage of incoming emails were handled by static rules. Automation bot was proved to handle 81 percentage of the total incoming mails per month including ML and static rules, leading to efficient human-machine interaction, Instant problem solving and fast process. <<</Results and Analysis>>> <<<Conclusion>>> Quick fixes from Microsoft LUIS Bot framework provides instant solutions for the raised email queries. Input text features of emails such as title, body, attachment OCR text and the feature engineered custom features all together outperform for the considered real word email data set. Sure-shot Static rules and hierarchical machine learning model with statistically calculated threshold enhances the accuracy of the overall system to an acceptable percentage. Bidirectional LSTM with word embedding techniques are implemented finally with thresholding techniques. Less complex Machine learning models lead to low cost virtual machine solutions for serving. Robotic Process Automation Architecture reduces human effort of email support desk by 81 percentage while having a reasonable accuracy of 85.6 percentage. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Email classifier using machine learning" ], "type": "disordered_section" }
1911.03154
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> How to Do Simultaneous Translation Better with Consecutive Neural Machine Translation? <<<Abstract>>> Despite the success of neural machine translation (NMT), simultaneous neural machine translation (SNMT), the task of translating in real time before a full sentence has been observed, remains challenging due to the syntactic structure difference and simultaneity requirements. In this paper, we propose a general framework to improve simultaneous translation with a pretrained consecutive neural machine translation (CNMT) model. Our framework contains two parts: prefix translation that utilizes a pretrained CNMT model to better translate source prefixes and a stopping criterion that determines when to stop the prefix translation. Experiments on three translation corpora and two language pairs show the efficacy of the proposed framework on balancing the quality and latency in simultaneous translation. <<</Abstract>>> <<<Introduction>>> Simultaneous translation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, the task of producing a partial translation of a sentence before the whole input sentence ends, is useful in many scenarios including outbound tourism, international summit and multilateral negotiations. Different from the consecutive translation in which translation quality alone matters, simultaneous translation trades off between translation quality and latency. The syntactic structure difference between the source and target language makes simultaneous translation more challenging. For example, when translating from a verb-final (SOV) language (e.g., Japanese) to a verb-media (SVO) language (e.g., English), the verb appears much later in the source sequence than in the target language. Some premature translations can lead to significant loss in quality BIBREF5. Recently, a number of researchers have endeavored to explore methods for simultaneous translation in the context of NMT BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some of them propose sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5, BIBREF10. These approaches are either memory inefficient during training BIBREF5 or hard to implement BIBREF10. Others utilize a full-sentence base model to perform simultaneous translation by modifications to the encoder and the decoding process. To match the incremental source context, they replace the bidirectional encoder with a left-to-right encoder BIBREF3, BIBREF11, BIBREF4, BIBREF12 or recompute the encoder hidden states BIBREF13. On top of that, heuristic algorithms BIBREF3, BIBREF14 or a READ/WRITE model trained with reinforcement learning BIBREF11, BIBREF4, BIBREF12 or supervised learning BIBREF13 are used to decide, at every step, whether to wait for the next source token or output a target token. However, these models either cannot directly use a pretrained vanilla CNMT model with bidirectional encoder as the base model or work in a sub-optimal way in the decoding stage. In this paper, we study the problem of how to do simultaneous translation better with a pretrained vanilla CNMT model. 
We formulate simultaneous translation as two nested loops: an outer loop that updates input buffer with newly observed source tokens and an inner loop that translates source tokens in the buffer updated at each outer step. For the outer loop, the input buffer can be updated by an ASR system with an arbitrary update schedule. For the inner loop, we perform prefix translation using the pretrained CNMT model with dynamically built encoder and decoder hidden states. We also design two novel stopping criteria for the inner loop: Length and EOS (LE) controller that stops with heuristics, and Trainable (TN) controller that learns to stop with a better quality and latency balance. We evaluate our method on IWSLT16 German-English (DE-EN) translation in both directions, WMT15 English-German (EN-DE) translation in both directions, and NIST Chinese-to-English (ZH$\rightarrow $EN) translation. The result shows our method consistently improves over the de-facto baselines, and achieves low latency and reasonable BLEU scores. <<</Introduction>>> <<<Background>>> Given a set of source–target sentence pairs $\left\langle \mathbf {x}_m,\mathbf {y}^*_m\right\rangle _{m=1}^M$, a consecutive NMT model can be trained by maximizing the log-likelihood of the target sentence from its entire source side context: where $\phi $ is a set of model parameters. At inference time, the NMT model first encodes a source language sentence $\mathbf {x}=\lbrace x_1,...,x_{T_\eta }\rbrace $ with its encoder and passes the encoded representations $\mathbf {h}=\lbrace h_1,...,h_{T_\eta }\rbrace $ to a greedy decoder. Then the greedy decoder generates a translated sentence in the target language by sequentially choosing the most likely token at each step $t$: The distribution of next target word is defined as: where $z_{t}$ is the decoder hidden state at position $t$. In consecutive NMT, once obtained, the encoder hidden states $\mathbf {h}$ and the decoder hidden state $z_t$ are not updated anymore and will be reused during the entire decoding process. <<</Background>>> <<<Simultaneous NMT>>> In SNMT, we receive streaming input tokens, and learn to translate them in real-time. We formulate simultaneous translation as two nested loops: the outer loop that updates an input buffer with newly observed source tokens and the inner loop that translates source tokens in the buffer updated at each outer step. More precisely, suppose at the end of an outer step $s-1$, the input buffer is $\mathbf {x}^{s-1} = \lbrace x_1, ..., x_{\eta \left[ s-1\right]}\rbrace $, and the output buffer is $\mathbf {y}^{s-1} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Then at outer step $s$, the system translates with the following steps: The system observes $c_s > 0$ new source tokens and updates the input buffer to be $\mathbf {x}^{s} = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ where $\eta \left[ s\right]=\eta \left[ s-1\right]+c_s$. Then, the system starts inner loop translation and writes $w_s>=0$ target tokens to the output buffer. The output buffer is updated to be $\mathbf {y}^{s} = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $ where $\tau \left[ s\right]=\tau \left[ s-1\right]+w_s$. The simultaneous decoding process continues until no more source tokens are added in the outer loop. We define the last outer step as the terminal outer step $S$, and other outer steps as non-terminal outer steps. For the outer loop, we make no assumption about the value of $c_s$, while all previous work assumes $c_s=1$. 
This setting is more realistic because a) increasing $c_s$ can reduce the number of outer steps, thus reducing computation cost; b) in a real speech translation application, an ASR system may generate multiple tokens at a time. For the inner loop, we adapt a pretrained vanilla CNMT model to perform partial translation with two important concerns: Prefix translation: given a source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and a target prefix $\mathbf {y}^s_{\tau \left[ s-1\right]} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, how to predict the remaining target tokens? Stopping criterion: since the NMT model is trained with full sentences, how to design the stopping criterion for it when translating partial source sentences? <<<Prefix Translation>>> At an outer step $s$, given encoder hidden states $\mathbf {h}^s$ for source prefix $\mathbf {x}^s= \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ for target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s= \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, we perform prefix translation sequentially with a greedy decoder: where $t$ starts from $t=\tau \left[ s-1\right]+1$. The prefix translation terminates when a stopping criterion is met, yielding a translation $\mathbf {y}^s = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $. However, a major problem comes from the above translation method: how can we obtain the encoder hidden states $\mathbf {h}^s$ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ at the beginning of prefix translation? In CNMT, the encoder hidden states and previous decoder hidden states are reused at each decoding time step. Different from CNMT, SNMT is fed with an incremental source side context. On the encoder side, we can address this by either reusing previous encoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF12: or dynamically re-building all encoder hidden states BIBREF5: On the decoder side, since the encoder hidden states have been updated from $\mathbf {h}^{s-1}$ to $\mathbf {h}^s$, we can choose to reuse previous decoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF5: or rebuild all previous decoder hidden states from current encoder hidden states $\mathbf {h}^s$ with force decoding: To better predict the remaining target tokens, we rebuild all encoder and decoder hidden states following Eq. DISPLAY_FORM11 and DISPLAY_FORM13 at the beginning of prefix translation. This strategy ensures that all encoder and decoder hidden states are obtained by attending to the same source tokens, which is consistent with how encoder and decoder hidden states are computed at training time. Besides, these attainable source tokens are all the available source context at the current time. Compared with using Eq. DISPLAY_FORM10 or DISPLAY_FORM12, our method can potentially better utilize the available source context.
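For illustration only, the prefix-translation step described above can be sketched in a few lines of Python. This is not the authors' implementation: `model.encode`, `model.force_decode`, and `model.greedy_step` stand for an assumed wrapper around a pretrained CNMT model (fairseq-py exposes no interface under these names), and the stopping rule is passed in as a callback.

```python
def prefix_translate(model, src_prefix, tgt_prefix, should_stop, eos_id):
    """One inner-loop pass of prefix translation at a single outer step (sketch)."""
    # Rebuild ALL encoder hidden states from the current source prefix,
    # rather than reusing states computed for a shorter prefix.
    enc_states = model.encode(src_prefix)
    # Rebuild ALL decoder hidden states by force-decoding the existing
    # target prefix against the fresh encoder states.
    dec_states = model.force_decode(enc_states, tgt_prefix)

    output = list(tgt_prefix)
    while True:
        # Greedily pick the most likely next token given the current states.
        tok, dec_states = model.greedy_step(enc_states, dec_states, output)
        # Prefix translation stops on EOS or when the stopping criterion fires.
        if tok == eos_id or should_stop(len(src_prefix), len(output)):
            return output
        output.append(tok)
```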
In simultaneous decoding, since we use an NMT model pretrained on full sentences to translate partial source sentences, it tends to predict EOS when the source context has been fully translated. However, such a strategy could be too aggressive for simultaneous translation. Fig. FIGREF18 shows such an example. At outer step 2, the translator predicts “you EOS", emitting target token “you". However, “you" is not the expected translation for “你" in the context of “你好。". The right decision is that prefix translation at outer step 2 should stop without emitting any words. To alleviate such problems and do better simultaneous translation with a pretrained CNMT model, we propose two novel stopping criteria for prefix translation. <<<Length and EOS Control>>> In consecutive translation, the decoding process stops mainly when predicting EOS. In contrast, for prefix translation at a non-terminal outer step, we use both length and EOS to stop the prefix translation process. We achieve this by setting the hyper-parameters in Eq. DISPLAY_FORM15 as $\text{maxlen}=+\infty $, $u=1$ and $v=-d$, where $d$ is a non-negative integer. The hyper-parameter $d$ determines the translation latency of the system. More specifically, before prefix translation at outer step $s$, we have source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Prefix translation terminates at inner step $w_s$ when predicting an EOS token or satisfying: We call this stopping criterion the Length and EOS (LE) stopping controller. <<</Length and EOS Control>>> <<<Learning When to Stop>>> Although simple and easy to implement, the LE controller lacks the capability to learn the optimal timing with which to stop prefix translation. Therefore, we design a small trainable network called the Trainable (TN) stopping controller to learn when to stop prefix translation for non-terminal outer steps. Fig. FIGREF22 shows the illustration. At each inner decoding step $k$ for non-terminal outer step $s$, the TN controller utilizes a stochastic policy $\pi _\theta $ parameterized by a neural network to make the binary decision on whether to stop translation at the current stage: where $z_{\tau \left[ s-1\right]+k}^s$ is the current decoder hidden state. The prefix translation stops if the TN controller predicts $a_{\tau \left[ s-1\right]+k}=1$. The controller function $f_\theta $ can take on a variety of forms, and for simplicity we implement it with a feedforward network with two hidden layers, followed by a softmax layer. To train the TN controller, we freeze the NMT model with pretrained parameters, and optimize the TN network with policy gradient for reward maximization $\mathcal {J}= \mathbb {E}_{\pi _{\theta }}(\sum _{t=1}^{T_\tau } r_t )$. With a trained TN controller, prefix translation stops at inner decoding step $w_s$ when predicting an EOS token or satisfying: In the following, we describe the details of the reward function and the training details with policy gradient. <<<Reward>>> To trade off between translation quality and latency, we define the reward function at inner decoding step $k$ of outer step $s$ as: where $t=\tau \left[ s-1\right]+k$, and $r_t^Q$ and $r_t^D$ are rewards related to quality and delay, respectively. $\alpha \ge 0$ is a hyper-parameter that we adjust to balance the trade-off between translation quality and delay.
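As a quick illustration of the LE rule above: with $\text{maxlen}=+\infty $, $u=1$ and $v=-d$, the length condition amounts to stopping once the hypothesis is within $d$ tokens of the currently observed source length. The helper below is our reading of that hyper-parameter choice (the displayed inequality itself is not reproduced here), written so it could be plugged in as the `should_stop` callback of the earlier sketch; EOS is handled separately by the decoding loop.

```python
def make_le_stop(d):
    """Build the Length-and-EOS (LE) length check for non-terminal outer steps (sketch)."""
    def should_stop(num_src_tokens, hypo_len):
        # Stop once |y| >= 1 * |x| - d, i.e. the hypothesis has caught up to
        # within d tokens of the observed source prefix.
        return hypo_len >= num_src_tokens - d
    return should_stop
```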
Similar to BIBREF4, we utilize sentence-level BLEU BIBREF15, BIBREF16 with reward shaping BIBREF17 as the reward for quality: where is the intermediate reward. Note that the higher the values of BLEU are, the more rewards the TN controller receives. Following BIBREF4, BIBREF5, we use average lagging (AL) as the reward for latency: where $l(t)$ is the number of observed source tokens when generating the $t$-th target token, $t_e= \mathop {\rm argmin}_{t}{(l(t)=|\mathbf {x}|)}$ denotes the earliest point when the system observes the full source sentence, $\lambda =\frac{|\mathbf {y}|}{|\mathbf {x}|}$ represents the target-to-source length ratio and $d^* \ge 0$ is a hyper-parameter called target delay that indicates the desired system latency. Note that the lower the values of AL are, the more rewards the TN controller receives. <<</Reward>>> <<<Policy Gradient>>> We train the TN controller with policy gradient BIBREF18, and the gradients are: where $R_t=\sum _{i=t}^{T_\tau } r_i$ is the cumulative future rewards for the current decision. We can adopt any sampling approach to estimate the expected gradient. In our experiments, we randomly sample multiple action trajectories from the current policy $\pi _{\theta }$ and estimate the gradient with the collected accumulated reward. We try the variance reduction techniques by subtracting a baseline average reward estimated by a linear regression model from $R_t$ and find that it does not help to improve the performance. Therefore, we just normalize the reward in each mini batch without using a baseline reward for simplicity. <<</Policy Gradient>>> <<</Learning When to Stop>>> <<</Stopping Criterion>>> <<</Simultaneous NMT>>> <<<Experiments>>> <<<Settings>>> <<<Dataset>>> We compare our approach with the baselines on WMT15 German-English (DE-EN) translation in both directions. This is also the most widely used dataset to evaluate SNMT's performance BIBREF3, BIBREF4, BIBREF5, BIBREF10, BIBREF13. To further evaluate our approach's efficacy in trading off translation quality and latency on other language pairs and spoken language, we also conduct experiments with the proposed LE and TN method on NIST Chinese-to-English (ZH$\rightarrow $EN) translation and IWSLT16 German-English (DE-EN) translation in both directions. For WMT15, we use newstest2014 for validation and newstest2015 for test. For NIST, we use MT02 for validation, and MT05, MT06, MT08 for test. For IWSLT16, we use tst13 for validation and tst14 for test. Table TABREF32 shows the details. All the data is tokenized and segmented into subword symbols using byte-pair encoding BIBREF19 to restrict the size of the vocabulary. We use 40,000 joint merge operations on WMT15, and 24,000 on IWSLT16. For NIST, we use 30,000 merge operations for source and target side separately. Unless explicitly mentioned, we simulate the simultaneous translation scenario at inference time with these datasets by assuming that the system observes one new source token at each outer step, i.e., $c_s=1$. <<</Dataset>>> <<<Pretrained NMT Model>>> We use Transformer BIBREF8 trained with maximum likelihood estimation as the pretrained CNMT model and implement our method based on fairseq-py. We follow the setting in transformer_iwslt_de_en for the IWSLT16 dataset, and transformer_wmt_en_de for the WMT15 and NIST datasets. Fairseq-py adds an EOS token for all source sentences during training and inference.
Therefore, to be consistent with the CNMT model implemented with fairseq-py, we also add an EOS token at the end of the source prefix for prefix translation. <<</Pretrained NMT Model>>> <<<TN Controller>>> To train the TN controller, we use a mini-batch size of 8,16,16 and sample 5,10,10 trajectories for each sentence pair in a batch for IWSLT16, WMT15 and NIST, respectively. We set the number of newly observed source tokens at each outer step to be 1 during the training for simplicity. We set $\alpha $ to be $0.04$, and $d^*$ to be $2,5,8$. All our TN controllers are trained with policy gradient using Adam optimizer BIBREF20 with 30,000 updates. We select the last model as our final TN controller. <<</TN Controller>>> <<<Baseline>>> We compare our model against three baselines that utilize a pretrained CNMT model to perform simultaneous translation: test_time_waitk: the test-time waitk simultaneous decoding algorithm proposed in BIBREF5, i.e., using a full-sentence model but decoding it with a waitk policy. We report the results when $k=1,3,5,7,9$. SL: the SL model proposed in BIBREF13, which learns an adaptive READ/WRITE policy from oracle READ/WRITE sequences generated with heuristics. We report the results $\rho =0.65,0.6,0.55,0.5,0.45,0.4$. BIBREF4: the adaptation of BIBREF4's two-staged full-sentence model + reinforcement learning on Transformer by BIBREF5. We report the results when using $CW=2,5,8$ as the target delay. We report the result with $d=0,2,4,6,8$ for our proposed LE method and $d^*=2,5,8$ for our proposed TN method. For all baselines, we cite the results reported in BIBREF13. Since they did not mention the details of data preprocessing, we cannot compare the BLEU and AL scores directly with theirs. Therefore, we normalize the BLEU and AL scores with its corresponding upper bound, i.e. the BLEU and AL scores obtained when the pretrained Transformer performs standard greedy decoding (Greedy). <<</Baseline>>> <<</Settings>>> <<<Results>>> We compare our method with the baselines on the test set of WMT15 EN$\rightarrow $DE and DE$\rightarrow $EN translation tasks. Fig. FIGREF40 shows the result. The points closer to the upper left corner indicate better overall performance, namely low latency and high quality. In all these figures, we observe that, as latency increases, all methods improve in quality. The TN stopping controller significantly outperforms all the baseline systems in both translation tasks, demonstrating that it indeed learns the appropriate timing to stop prefix translation. The LE controller outperforms the baselines on WMT15 EN$\rightarrow $DE translation at high latency region and performs similarly or worse on other cases. We show the model's efficacy in trading off quality and latency on other language pair and spoken language in Fig. FIGREF41. The TN controller obtains better performance on all translation tasks, especially at the low latency region. For example, on IWSLT16 EN$\rightarrow $ DE translation, it is +$2.5$ to +$3.3$ BLEU ahead of the LE method. TN also obtains promising translation quality with acceptable latency: with a lag of $<7$ tokens, TN obtains 96.95%, 97.20% and 94.03% BLEU with respect to consecutive greedy decoding for IWSLT16 EN$\rightarrow $DE, IWSLT16 DE$\rightarrow $EN and NIST ZH$\rightarrow $EN translation, respectively. <<</Results>>> <<<Analyze>>> We analyze the effect of different ways (Eq. 
DISPLAY_FORM10-DISPLAY_FORM13) to obtain the encoder and decoder hidden states at the beginning of prefix translation with the LE controller. Fig. FIGREF42 shows the result. We try three variants: a) dynamically rebuild all encoder/decoder hidden states (none); b) reuse decoder hidden states and rebuild all encoder hidden states (decoder); c) reuse previous encoder hidden states and rebuild all decoder hidden states (encoder). The left Y axis and X axis show the BLEU-vs-AL curve. We observe that if reusing previous encoder hidden states (encoder), the translation fails. We ascribe this to the discrepancy between training and decoding for the encoder. We also observe that when $d=0,2$, reusing decoder hidden states (decoder) obtains negative AL. To analyze this, we plot the translation to reference length ratio versus AL curve with the right Y axis and X axis. It shows that with decoder, the decoding process stops too early and generates too short translations. Therefore, to avoid such problems and to be consistent with the training process of the CNMT model, it is important to dynamically rebuild all encoder/decoder hidden states for prefix translation. Since we make no assumption about the $c_s$, i.e., the number of newly observed source tokens at each outer step, we test the effect of different $c_s$ in this section. Fig. FIGREF43 shows the result with the LE and TN controllers on the test set of WMT15 EN$\rightarrow $DE translation. We observe that as $c_s$ increases, both LE and TN tend to improve in quality and worsen in latency. When $c_s=1$, the LE controller obtains the best balance between quality and latency. In contrast, the TN controller obtains a similar quality and latency balance with different $c_s$, demonstrating that the TN controller successfully learns the right timing to stop regardless of the input update schedule. We also analyze the TN controller's adaptability by monitoring the initial delay, i.e., the number of observed source tokens before emitting the first target token, on the test set of WMT15 EN$\rightarrow $DE translation, as shown in Fig. FIGREF52. $d^*$ is the target delay measured with AL (used in Eq. DISPLAY_FORM29). It demonstrates that the TN controller has a lot of variance in its initial delay. The distribution of initial delay changes with different target delay: with higher target delay, the average initial delay is larger. For most sentences, the initial delay is within $1-7$. In speech translation, listeners are also concerned with long silences during which no translation occurs. Following BIBREF4, BIBREF5, we use Consecutive Wait (CW) to measure this: Fig. FIGREF54 shows the BLEU-vs-CW plots for our proposed two algorithms. The TN controller has higher CW than the LE controller. This is because the TN controller prefers consecutively updating the output buffer (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 3\ 0\ 0\ 0\ 0\ 0\ 5\ 0\ 0\ 0\ 0\ 4\ ...$) while the LE controller often updates its output buffer following the input buffer (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 1\ 1\ 1\ 1\ 1\ 1\ ...$ when $d=4$). Although larger than LE, the CW for TN ($< 6$) is acceptable for most speech translation scenarios.
For example, in example 2, our method outputs the translation “wu bangguo attended the signing ceremony” when observing “吴邦国 出席 签字 仪式 并”, instead of a more radical translation “wu bangguo attended the signing ceremony and”. Such a strategy helps to alleviate the problem of premature translation, i.e., translating before observing enough future context. <<</Translation Examples>>> <<</Experiments>>> <<<Related Work>>> A number of works in simultaneous translation divide the translation process into two stages. A segmentation component first divides the incoming text into segments, and then each segment is translated by a translator independently or with previous context. The segmentation boundaries can be predicted by prosodic pauses detected in speech BIBREF0, BIBREF21, linguistic cues BIBREF22, BIBREF23, or a classifier based on alignment information BIBREF24, BIBREF25 and translation accuracy BIBREF1, BIBREF2, BIBREF26. Some authors have recently endeavored to perform simultaneous translation in the context of NMT. BIBREF3, BIBREF14, BIBREF5 introduce a manually designed criterion to control when to translate. BIBREF11, BIBREF4, BIBREF12 extend the criterion into a trainable agent in a reinforcement learning framework. However, these works either develop sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5 or fail to use a pretrained consecutive NMT model in an optimal way BIBREF3, BIBREF14, BIBREF11, BIBREF4, BIBREF12, BIBREF13. In contrast, our work is significantly different from theirs in the way of using a pretrained consecutive NMT model to perform simultaneous translation and the design of the two stopping criteria. <<</Related Work>>> <<<Conclusion>>> We have presented a novel framework for improving simultaneous translation with a pretrained consecutive NMT model. The basic idea is to translate a partial source sentence with the pretrained consecutive NMT model and stop the translation with two novel stopping criteria. Extensive experiments demonstrate that our method outperforms the state-of-the-art baselines in balancing between translation quality and latency. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related Work, Abstract" ], "type": "disordered_section" }
1909.05360
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction <<<Abstract>>> We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction. The proposed method has two advantages over existing work. First, it improves event representation by allowing the event and relation modules to share the same contextualized embeddings and neural representation learner. Second, it avoids error propagation in the conventional pipeline systems by leveraging structured inference and learning methods to assign both the event labels and the temporal relation labels jointly. Experiments show that the proposed method can improve both event extraction and temporal relation extraction over state-of-the-art systems, with the end-to-end F1 improved by 10% and 6.8% on two benchmark datasets respectively. <<</Abstract>>> <<<Introduction>>> The extraction of temporal relations among events is an important natural language understanding (NLU) task that can benefit many downstream tasks such as question answering, information retrieval, and narrative generation. The task can be modeled as building a graph for a given text, whose nodes represent events and edges are labeled with temporal relations correspondingly. Figure FIGREF1 illustrates such a graph for the text shown therein. The nodes assassination, slaughtered, rampage, war, and Hutu are the candidate events, and different types of edges specify different temporal relations between them: assassination is BEFORE rampage, rampage INCLUDES slaughtered, and the relation between slaughtered and war is VAGUE. Since “Hutu” is actually not an event, a system is expected to annotate the relations between “Hutu” and all other nodes in the graph as NONE (i.e., no relation). As far as we know, all existing systems treat this task as a pipeline of two separate subtasks, i.e., event extraction and temporal relation classification, and they also assume that gold events are given when training the relation classifier BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Specifically, they built end-to-end systems that extract events first and then predict temporal relations between them (Fig. FIGREF1). In these pipeline models, event extraction errors will propagate to the relation classification step and cannot be corrected afterwards. Our first contribution is the proposal of a joint model that extracts both events and temporal relations simultaneously (see Fig. FIGREF1). The motivation is that if we train the relation classifier with NONE relations between non-events, then it will potentially have the capability of correcting event extraction mistakes. For instance in Fig. FIGREF1, if the relation classifier predicts NONE for (Hutu, war) with a high confidence, then this is a strong signal that can be used by the event classifier to infer that at least one of them is not an event. Our second contribution is that we improve event representations by sharing the same contextualized embeddings and neural representation learner between the event extraction and temporal relation extraction modules for the first time. 
On top of the shared embeddings and neural representation learner, the proposed model produces a graph-structured output representing all the events and relations in the given sentences. A valid graph prediction in this context should satisfy two structural constraints. First, the temporal relation should always be NONE between two non-events or between one event and one non-event. Second, for those temporal relations among events, no loops should exist due to the transitive property of time (e.g., if A is before B and B is before C, then A must be before C). The validity of a graph is guaranteed by solving an integer linear programming (ILP) optimization problem with those structural constraints, and our joint model is trained by structural support vector machines (SSVM) in an end-to-end fashion. Results show that, according to the end-to-end $F_1$ score for temporal relation extraction, the proposed method improves CAEVO BIBREF3 by 10% on TB-Dense, and improves CogCompTime BIBREF6 by 6.8% on MATRES. We further show ablation studies to confirm that the proposed joint model with shared representations and structured learning is very effective for this task. <<</Introduction>>> <<<Related Work>>> In this section we briefly summarize the existing work on event extraction and temporal relation extraction. To the best of our knowledge, there is no prior work on joint event and relation extraction, so we will review joint entity and relation extraction works instead. Existing event extraction methods in the temporal relation domain, as in the TempEval3 workshop BIBREF2, all use conventional machine learning models (logistic regression, SVM, or Max-entropy) with hand-engineered features (e.g., ClearTK BIBREF7 and NavyTime BIBREF8). While other domains have shown progress on event extraction using neural methods BIBREF9, BIBREF10, BIBREF11, recent progress in the temporal relation domain is focused more on the setting where gold events are provided. Therefore, we first show the performance of a neural event extractor on this task, although it is not our main contribution. Early attempts on temporal relation extraction use local pair-wise classification with hand-engineered features BIBREF12, BIBREF0, BIBREF13, BIBREF14. Later efforts, such as ClearTK BIBREF7, UTTime BIBREF15, NavyTime BIBREF8, and CAEVO BIBREF3 improve earlier work with better linguistic and syntactic rules. BIBREF16, BIBREF4, BIBREF17 explore structured learning for this task, and more recently, neural methods have also been shown effective BIBREF18, BIBREF19, BIBREF20, BIBREF5. In practice, we need to extract both events and those temporal relations among them from raw text. All the works above treat this as two subtasks that are solved in a pipeline. To the best of our knowledge, there has been no existing work on joint event-temporal relation extraction. However, the idea of “joint” has been studied for entity-relation extraction in many works. BIBREF21 frame their joint model as table filling tasks, map tabular representation into sequential predictions with heuristic rules, and construct a global loss to compute the best joint predictions. BIBREF22 define a global structure for joint entity and relation extraction, encode local and global features based on domain and linguistic knowledge, and leverage beam-search to find global optimal assignments for entities and relations. BIBREF23 leverage LSTM architectures to jointly predict both entities and relations, but fall short on ensuring prediction consistency.
BIBREF24 combine the benefits of both neural net and global optimization with beam search. Motivated by these works, we propose an end-to-end trainable neural structured support vector machine (neural SSVM) model to simultaneously extract events and their relations from text and ensure the global structure via ILP constraints. Next, we will describe in detail our proposed method. <<</Related Work>>> <<<Joint Event-Relation Extraction Model>>> In this section we first provide an overview of our neural SSVM model, and then describe each component in our framework in detail (i.e., the multi-tasking neural scoring module, and how inference and learning are performed). We denote the set of all possible relation labels (including NONE) as $\mathcal {R}$, all event candidates (both events and non-events) as $\mathcal {E}$, and all relation candidates as $\mathcal {E}\mathcal {E}$. <<<Neural SSVM>>> Our neural SSVM adapts the SSVM loss as: where $\bar{S}^n_{\mathcal {E}} = S(\hat{y}^n_\mathcal {E}; x^n) - S(y^n_\mathcal {E};x^n)$ and $\bar{S}^n_{\mathcal {R}} = S(\hat{y}^n_\mathcal {R}; x^n) - S(y^n_\mathcal {R};x^n)$ ; $\Phi $ denotes model parameters, $n$ indexes instances, $M^n = |\mathcal {E}|^n + |\mathcal {E}\mathcal {E}|^n$ denotes the total number of events $|\mathcal {E}|^n$ and relations $|\mathcal {E}\mathcal {E}|^n$ in instance $n$. $y^n,\hat{y}^n$ denote the gold and predicted global assignments of events and relations for instance $n$—each of which consists of either one hot vector representing true and predicted relation labels $y_{\mathcal {R}}^n, \hat{y}_{\mathcal {R}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}\mathcal {E}|}$, or entity labels $y_{\mathcal {E}}^n, \hat{y}_{\mathcal {E}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$. A maximum a posteriori probability (MAP) inference is needed to find $\hat{y}^n$, which we formulate as an integer linear programming (ILP) problem and describe in more detail in Section SECREF12. $\Delta (y^n, \hat{y}^n)$ is a distance measurement between the gold and the predicted assignments; we simply use the Hamming distance. $C$ and $C_{\mathcal {E}}$ are the hyper-parameters to balance the losses between event, relation and the regularizer, and $S(y^n_\mathcal {E};x^n), S(y^n_\mathcal {R};x^n)$ are scoring functions, which we design a multi-tasking neural architecture to learn. The intuition behind the SSVM loss is that it requires the score of the gold output structure $y^n$ to be greater than the score of the best output structure under the current model $\hat{y}^n$ with a margin $\Delta (y^n, \hat{y}^n)$ or else there will be some loss. The training objective is to minimize the loss. The major difference between our neural-SSVM and the traditional SSVM model is the scoring function. Traditional SSVM uses a linear function over hand-crafted features to compute the scores, whereas we propose to use a recurrent neural network to estimate the scoring function and train the entire architecture end-to-end. <<<Multi-Tasking Neural Scoring Function>>> The recurrent neural network (RNN) architecture has been widely adopted by prior temporal extraction work to encode context information BIBREF18, BIBREF19, BIBREF20. Motivated by these works, we adopt an RNN-based scoring function for both event and relation prediction in order to learn features in a data driven way and capture long-term contexts in the input. In Fig. FIGREF6, we skip the input layer for simplicity.
The bottom layer corresponds to contextualized word representations denoted as $v_k$. We use ($i, j$) $\in \mathcal {E}\mathcal {E}$ to denote a candidate relation and $i \in \mathcal {E}$ to indicate a candidate event in the input sentences of length N. We fix word embeddings computed by a pre-trained BERT-base model BIBREF27. They are then fed into a BiLSTM layer to further encode task-specific contextual information. Both event and relation tasks share this layer. The event scorer is illustrated by the left two branches following the BiLSTM layer. We simply concatenate both forward and backward hidden vectors to encode the context of each token. As for the relation scorer shown in the right branches, for each pair ($i,j$) we take the forward and backward hidden vectors corresponding to them, $f_i, b_i, f_j, b_j$, and concatenate them with linguistic features as in previous event relation prediction research. We denote linguistic features as $L_{i,j}$ and only use simple features provided in the original datasets: token distance, tense, and polarity of events. Finally, all hidden vectors and linguistic features are concatenated to form the input to compute the probability of being an event or a softmax distribution over all possible relation labels—which we refer to as the RNN-based scoring function in the following sections. <<</Multi-Tasking Neural Scoring Function>>> <<<MAP Inference>>> A MAP inference is needed both during training to obtain $\hat{y}^n$ in the loss function (Equation DISPLAY_FORM8), as well as during the test time to get globally coherent assignments. We formulate the inference problem as an ILP problem. The inference framework is established by constructing a global objective function using scores from local scorers and imposing several global constraints: 1) one-label assignment, 2) event-relation consistency, and 3) symmetry and transitivity as in BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF4. <<<Objective Function>>> The objective function of the global inference is to find the global assignment that has the highest probability under the current model, as specified in Equation DISPLAY_FORM14: where $y^e_k$ is a binary indicator of whether the $k$-th candidate is an event or not, and $y^r_{i,j}$ is a binary indicator specifying whether the global prediction of the relation between $(i,j)$ is $r \in \mathcal {R}$. $S(y^e_k,x), \forall e \in \lbrace 0, 1\rbrace $ and $S(y^r_{i,j},x), \forall r \in \mathcal {R}$ are the scoring functions obtained from the event and relation scoring functions, respectively. The output of the global inference $\bf {\hat{y}}$ is a collection of optimal label assignments for all events and relation candidates in a fixed context. $C_{\mathcal {E}}$ is a hyper-parameter controlling weights between relation and event. The constraint that follows immediately from the objective function is that the global inference should only assign one label for all entities and relations. <<</Objective Function>>> <<<Constraints>>> We introduce several additional constraints to ensure the resulting optimal output graph forms a valid and plausible event graph. <<<Event-Relation Consistency.>>> Event and relation prediction consistency is defined with the following property: a pair of input tokens have a positive temporal relation if and only if both tokens are events. The following global constraints will satisfy this property, where $e^P_i$ denotes an event and $e^N_i$ denotes a non-event token. 
$r^P_{i,j}$ indicates positive relations: BEFORE, AFTER, SIMULTANEOUS, INCLUDES, IS_INCLUDED, VAGUE and $r^N_{i,j}$ indicates a negative relation, i.e., NONE. A formal proof of this property can be found in Appendix A. <<</Event-Relation Consistency.>>> <<<Symmetry and Transitivity Constraint.>>> We also explore the symmetry and transitivity constraints of relations. They are specified as follows: Intuitively, the symmetry constraint forces two pairs of events with flipping orders to have reversed relations. For example, if $r_{i,j}$ = BEFORE, then $r_{j,i}$ = AFTER. The transitivity constraint requires that if ($i,j$), ($j,k$) and ($i,k$) pairs exist in the graph, the label (relation) prediction of the ($i,k$) pair has to fall into the transitivity set specified by the ($i,j$) and ($j,k$) pairs. The full transitivity table can be found in BIBREF25. <<</Symmetry and Transitivity Constraint.>>> <<</Constraints>>> <<</MAP Inference>>> <<<Learning>>> We begin by experimenting with optimizing the SSVM loss directly, but model performance degrades. Therefore, we develop a two-stage learning approach which first trains a pipeline version of the joint model without feedback from global constraints. In other words, the local neural scoring functions are optimized with cross-entropy loss using gold events and relation candidates that are constructed directly from the outputs of the event model. During the second stage, we switch to the global SSVM loss function in Equation DISPLAY_FORM8 and re-optimize the network to adjust for global properties. We will provide more details in Section SECREF4. <<</Learning>>> <<</Joint Event-Relation Extraction Model>>> <<<Implementation Details>>> In this section we describe implementation details of the baselines and our four models to build an end-to-end event temporal relation extraction system with an emphasis on the structured joint model. In Section SECREF6 we will compare and contrast them and show why our proposed structured joint model works the best. <<<Baselines>>> We run two event and relation extraction systems, CAEVO BIBREF3 and CogCompTime BIBREF6, on TB-Dense and MATRES, respectively. These two methods both leverage conventional learning algorithms (i.e., MaxEnt and averaged perceptron, respectively) based on manually designed features to obtain separate models for events and temporal relations, and conduct end-to-end relation extraction as a pipeline. Note that BIBREF3 does not report event and end-to-end temporal relation extraction performances, so we calculate the scores per our implementation. <<</Baselines>>> <<<End-to-End Event Temporal Relation Extraction>>> <<<Single-Task Model.>>> The most basic way to build an end-to-end system is to train separate event detection and relation prediction models with gold labels, as we mentioned in our introduction. In other words, the BiLSTM layer is not shared as in Fig. FIGREF6. During evaluation and test time, we use the outputs from the event detection model to construct relation candidates and apply the relation prediction model to make the final prediction. <<</Single-Task Model.>>> <<<Multi-Task Model.>>> This is the same as the single-task model except that the BiLSTM layer is now shared for both event and relation tasks. Note that both single-task and multi-task models are not trained to tackle the NONE relation directly. They both rely on the predictions of the event model to annotate relations as either positive pairs or NONE.
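To make the shared-representation design concrete, the following is a rough single-sentence PyTorch sketch of a BiLSTM shared between an event head and a relation head. The class name, layer sizes, and the linguistic-feature dimension are illustrative placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SharedScorer(nn.Module):
    """Shared BiLSTM encoder with separate event / relation scoring heads (sketch)."""

    def __init__(self, emb_dim=768, hidden=256, feat_dim=10, n_rel=7):
        super().__init__()
        # BiLSTM shared by both tasks, fed with frozen BERT embeddings.
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Event head: per-token event / non-event scores.
        self.event_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 2))
        # Relation head: scores over n_rel labels (e.g. six temporal labels plus NONE).
        self.rel_head = nn.Sequential(nn.Linear(4 * hidden + feat_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, n_rel))

    def forward(self, bert_emb, pair_idx, ling_feats):
        # bert_emb: (1, N, emb_dim) frozen contextual embeddings of one sentence.
        h, _ = self.bilstm(bert_emb)          # (1, N, 2*hidden): forward || backward
        h = h.squeeze(0)
        event_scores = self.event_head(h)     # (N, 2)
        # For a candidate pair (i, j), concatenate both tokens' BiLSTM states with
        # simple linguistic features (token distance, tense, polarity).
        i, j = pair_idx
        rel_scores = self.rel_head(torch.cat([h[i], h[j], ling_feats], dim=-1))
        return event_scores, rel_scores
```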
<<</Multi-Task Model.>>> <<<Pipeline Joint Model.>>> This shares the same architecture as the multi-task model, except that during training, we use the predictions of the event model to construct relation candidates to train the relation model. This strategy will generate NONE pairs during training if one argument of the relation candidate is not an event. These NONE pairs will help the relation model to distinguish negative relations from positive ones, and thus become more robust to event prediction errors. We train this model with gold events and relation candidates during the first several epochs in order to obtain a relatively accurate event model and switch to a pipeline version afterwards, inspired by BIBREF23. <<</Pipeline Joint Model.>>> <<<Structured Joint Model.>>> This is described in detail in Section SECREF3. However, we experience difficulties in training the model with the SSVM loss from scratch. This is due to large amounts of non-event tokens, and the model is not capable of distinguishing them in the beginning. We thus adopt a two-stage learning procedure where we take the best pipeline joint model and re-optimize it with the SSVM loss. To restrict the search space for events in the ILP inference of the SSVM loss, we use the predicted probabilities from the event detection model to filter out non-events since the event model has a strong performance, as shown in Section SECREF6. Note that this is very different from the pipeline model where events are first predicted and relations are constructed with predicted events. Here, we only leverage an additional hyper-parameter $T_{evt}$ to filter out highly unlikely event candidates. Both event and relation labels are assigned simultaneously during the global inference with ILP, as specified in Section SECREF12. We also filter out tokens with POS tags that do not appear in the training set as most of the events are either nouns or verbs in TB-Dense, and all events are verbs in MATRES. <<</Structured Joint Model.>>> <<<Hyper-Parameters.>>> All single-task, multi-task and pipeline joint models are trained by minimizing cross-entropy loss. We observe that model performances vary significantly with the dropout ratio, hidden layer dimensions of the BiLSTM model and entity weight in the loss function (with relation weight fixed at 1.0). We leverage a pre-trained BERT model to compute word embeddings and all MLP scoring functions have one hidden layer. In the SSVM loss function, we fix the value of $C = 1$, but fine-tune $C_\mathcal {E}$ in the objective function in Equation DISPLAY_FORM14. Hyper-parameters are chosen using a standard development set for TB-Dense and a random holdout-set based on an 80/20 split of training data for MATRES. To solve ILP in the inference process, we leverage an off-the-shelf solver provided by the Gurobi optimizer; i.e., the best solutions from the Gurobi optimizer are inputs to the global training. The best combination of hyper-parameters can be found in Table 9 in our appendix. <<</Hyper-Parameters.>>> <<</End-to-End Event Temporal Relation Extraction>>> <<</Implementation Details>>> <<<Experimental Setup>>> In this section we first provide a brief overview of temporal relation data and describe the specific datasets used in this paper. We also explain the evaluation metrics at the end. <<<Temporal Relation Data>>> Temporal relation corpora such as TimeBank BIBREF32 and RED BIBREF33 facilitate the research in temporal relation extraction. The common issue in these corpora is missing annotations.
Collecting densely annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task as annotators could easily overlook some facts BIBREF34, BIBREF35, BIBREF3, BIBREF4, which made both modeling and evaluation extremely difficult in previous event temporal relation research. The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task BIBREF3, BIBREF4, BIBREF19, BIBREF5. Recent data construction efforts such as MATRES BIBREF25 further enhance the data quality by using a multi-axis annotation scheme and adopting a start-point of events to improve inter-annotator agreements. We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33. <<</Temporal Relation Data>>> <<<Evaluation Metrics>>> To be consistent with previous research, we adopt two different evaluation metrics. The first one is the standard micro-average scores. For densely annotated data, the micro-average metric should share the same precision, recall and F1 scores. However, since our joint model includes NONE pairs, we follow the convention of IE tasks and exclude them from evaluation. The second one is similar except that we exclude both NONE and VAGUE pairs following BIBREF6. Please refer to Figure 4 in the appendix for a visualizations of the two metrics. <<</Evaluation Metrics>>> <<</Experimental Setup>>> <<<Results and Analysis>>> The main results of this paper can be found in Table TABREF34. All best-recall and F1 scores are achieved by our structured joint model, and the results outperform the baseline systems by 10.0% and 6.8% on end-to-end relation extraction per F1 scores and 3.5% and 2.6% on event extraction per F1 scores. The best precision score for the TB-Dense dataset is achieved by CAEVO, which indicates that the linguistic rule-based system can make highly precise predictions by being conservative. Table TABREF35 shows a more detailed analysis, in which we can see that our single-task models with BERT embeddings and a BiLSTM encoder already outperform the baseline systems on end-to-end relation extraction tasks by 4.9% and 4.4% respectively. In the following sections we discuss step-by-step improvement by adopting multi-task, pipeline joint, and structured joint models on end-to-end relation extraction, event extraction, and relation extraction on gold event pairs. <<<End-to-End Relation Extraction>>> <<<TB-Dense.>>> The improvements over the single-task model per F1 score are 4.1% and 4.2% for the multi-task and pipeline joint model respectively. This indicates that the pipeline joint model is helpful only marginally. Table TABREF46 shows that the structured joint model improves both precision and recall scores for BEFORE and AFTER and achieves the best end-to-end relation extraction performance at 49.4%—which outperforms the baseline system by 10.0% and the single-task model by 5.1%. <<</TB-Dense.>>> <<<MATRES.>>> Compared to the single-task model, the multi-task model improves F1 scores by 1.5%, while the pipeline joint model improves F1 scores by 1.3%—which means that pipeline joint training does not bring any gains for MATRES. The structured joint model reaches the best end-to-end F1 score at 59.6%, which outperforms the baseline system by 6.8% and the single-task model by 2.4%. 
We speculate that the gains come from the joint model's ability to help deal with NONE pairs, since recall scores for BEFORE and AFTER increase by 1.5% and 1.1% respectively (Table 10 in our appendix). <<</MATRES.>>> <<</End-to-End Relation Extraction>>> <<<Event Extraction>>> <<</Event Extraction>>> <<<Relation Extraction with Gold Events>>> <<</Relation Extraction with Gold Events>>> <<<Discussion>>> <<<Label Imbalance.>>> One way to mitigate the label imbalance issue is to increase the sample weights for small classes during model training. We investigate the impact of class weights by refitting our single-task model with larger weights on INCLUDES, IS_INCLUDED and VAGUE in the cross-entropy loss. Figure FIGREF50 shows that increasing class weights up to 4 times can significantly improve the F1 scores of the INCLUDES and IS_INCLUDED classes with a decrease of less than 2% for the overall F1 score. Performance of INCLUDES and IS_INCLUDED eventually degrades when class weights are too large. These results seem to suggest that more labels are needed in order to improve the performance on both of these two classes and the overall model. For SIMULTANEOUS, our model does not make any correct predictions for both TB-DENSE and MATRES by increasing class weights up to 10 times, which implies that SIMULTANEOUS could be a hard temporal relation to predict in general. <<</Label Imbalance.>>> <<<Global Constraints.>>> In Table TABREF51 we conduct an ablation study to understand the contributions from the event-relation prediction consistency constraint and the temporal relation transitivity constraint for the structured joint model. As we can see, the event-relation consistency helps improve the F1 scores by 0.9% and 1% for TB-Dense and MATRES, respectively, but the gain by using transitivity is either non-existent or marginal. We hypothesize two potential reasons: 1) We leveraged BERT contextualized embedding as word representation, which could tackle transitivity in the input context; 2) NONE pairs could make the transitivity rule less useful, as positive pairs can be predicted as NONE and the transitivity rule does not apply to NONE pairs. <<</Global Constraints.>>> <<<Error Analysis.>>> By comparing gold and predicted labels for events and temporal relations and examining predicted probabilities for events, we identified three major sources of mistakes made by our structured model, as illustrated in Table TABREF57 with examples. <<</Error Analysis.>>> <<<Type 1.>>> Both events in Ex 1 are assigned low scores by the event module ($<< 0.01$). Although the structured joint model is designed to predict events and relations jointly, we leverage the event module to filter out tokens with scores lower than a threshold. Consequently, some true events can be mistakenly predicted as non-events, and the relation pairs including them are automatically assigned NONE. <<</Type 1.>>> <<<Type 2.>>> In Ex 2 the event module assigns high scores to tokens happened (0.97) and according (0.89), but according is not an event. When the structured model makes inference jointly, the decision will weigh heavily towards assigning 1 (event) to both tokens. With the event-relation consistency constraint, this pair is highly likely to be predicted as having a positive temporal relation. Nearly all mistakes made in this category follow the same pattern illustrated by this example.
<<</Type 2.>>> <<<Type 3.>>> The existence of VAGUE makes temporal relation prediction challenging as it can be easily confused with other temporal relations, as shown in Ex 3. This challenge is compounded with NONE in our end-to-end extraction task. Type 1 and Type 2 errors suggest that building a stronger event detection module will be helpful for both event and temporal relation extraction tasks. To improve the performance on VAGUE pairs, we could either build a stronger model that incorporates both contextual information and commonsense knowledge or create datasets with annotations that better separate VAGUE from other positive temporal relations. <<</Type 3.>>> <<</Discussion>>> <<</Results and Analysis>>> <<<Conclusion>>> In this paper we investigate building an end-to-end event temporal relation extraction system. We propose a novel neural structured prediction model with joint representation learning to make predictions on events and relations simultaneously; this can avoid error propagation in previous pipeline systems. Experiments and comparative studies on two benchmark datasets show that the proposed model is effective for end-to-end event temporal relation extraction. Specifically, we improve the performances of previously published systems by 10% and 6.8% on the TB-Dense and MATRES datasets, respectively. Future research can focus on creating more robust structured constraints between events and relations, especially considering event types, to improve the quality of global assignments using ILP. Since a better event model is generally helpful for relation extraction, another promising direction would be to incorporate multiple datasets to enhance the performance of our event extraction systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related Work, Introduction" ], "type": "disordered_section" }
2003.12738
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Variational Transformers for Diverse Response Generation <<<Abstract>>> Despite the great promise of Transformers in many sequence modeling tasks (e.g., machine translation), their deterministic nature hinders them from generalizing to high entropy tasks such as dialogue response generation. Previous work proposes to capture the variability of dialogue responses with a recurrent neural network (RNN)-based conditional variational autoencoder (CVAE). However, the autoregressive computation of the RNN limits the training efficiency. Therefore, we propose the Variational Transformer (VT), a variational self-attentive feed-forward sequence model. The VT combines the parallelizability and global receptive field of the Transformer with the variational nature of the CVAE by incorporating stochastic latent variables into Transformers. We explore two types of the VT: 1) modeling the discourse-level diversity with a global latent variable; and 2) augmenting the Transformer decoder with a sequence of fine-grained latent variables. Then, the proposed models are evaluated on three conversational datasets with both automatic metric and human evaluation. The experimental results show that our models improve standard Transformers and other baselines in terms of diversity, semantic relevance, and human judgment. <<</Abstract>>> <<<Introduction>>> Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in a wide range of NLP tasks. These architectures remove the computational temporal dependency during the training and effectively address the long-standing vanishing gradients problem of recurrent models by processing all inputs simultaneously. Notably, transformers apply a fully attentional strategy, where each token in the sequence is informed by other tokens via a self-attention mechanism. It effectively acts as a global receptive field across the whole sequence, which is absent in RNNs. Despite the powerful modeling capability of transformers, they often fail to model one-to-many relations in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic responses (e.g., “I am not sure"), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular, BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrate latent variables in the hidden states of the RNN decoder. However, the inherently sequential computation of the aforementioned models limits the efficiency for large scale training. In this paper, we introduce the Variational Transformer (VT), a variational self-attentive feed-forward sequence model to address the aforementioned issues. The VT combines the parallelizability and global receptive field of the transformer with the variational nature of the CVAE by incorporating stochastic latent variables into transformers.
We explore two types of VT: 1) the Global Variational Transformer (GVT), and 2) the Sequential Variational Transformer (SVT). The GVT is the extension of the CVAE in BIBREF2, which models the discourse-level diversity with a global latent variable, while SVT, inspired by variational autoregressive models BIBREF3, BIBREF4, incorporates a sequence of latent variables into the decoding process by using a novel variational decoder layer. Unlike previous approaches BIBREF2, BIBREF3, BIBREF4, SVT uses Non-causal Multi-head Attention, which attends to future tokens for computing posterior latent variables instead of using an additional encoder. The proposed VT architectures integrate stochastic latent variables into Transformers. The experimental results on three conversational datasets demonstrate that our models can generate more informative and coherent responses. <<</Introduction>>> <<<Related work>>> <<<Neural Conversational Models>>> Conversational systems have been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compared to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry" BIBREF9. To address this issue, there have been three main lines of work. The first is adding additional information (e.g., persona) as input to guide the model to generate more informative responses BIBREF10, BIBREF11. The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work comes within this third line, introducing a novel model, the Variational Transformer, to improve dialogue response generation. <<</Neural Conversational Models>>> <<<Conditional Variational Autoencoders>>> Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 for text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into the CVAE framework in dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders which are enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks for improving generation quality. We unify the previous successful ideas of CVAE, and explore the combinations of CVAE and Transformer. <<</Conditional Variational Autoencoders>>> <<<Fully Attentional Networks>>> Taking advantage of the parallel-in-time structure and global receptive field, Transformers BIBREF0 have recently been shown to achieve impressive results on various sequence modeling tasks. Based on this, several follow-up models have been presented. The Image Transformer BIBREF18 has been proposed for image generation, while the MultiModel BIBREF19 integrates convolution, attention and sparsely-gated mixture-of-expert blocks into a single deep-learning model for simultaneously learning multiple tasks from various domains. BIBREF20 proposed a fully attentional mixture-of-expert model (MoEL) for empathetic dialogue modeling.
The Universal Transformer BIBREF1 incorporates the recurrent inductive bias of RNNs into the standard Transformer, and achieves better results on a wide range of algorithmic and language understanding tasks. BIBREF21 introduce the Latent Transformer (LT) for non-autoregressive machine translation. During training, the LT first autoencodes a target sequence into a shorter sequence of discrete latent variables. Then a parallel decoder decodes the target using the discrete latent variables and an input sequence. Different from the LT BIBREF21, the VT generates continuous latent variables during the decoding process. <<</Fully Attentional Networks>>> <<</Related work>>> <<<Preliminaries>>> <<<Conditional Variational Autoencoder for Dialogue Generation>>> The CVAE framework BIBREF22 represents a dyadic conversation via three random variables: the input condition $c$, including conversation context and meta features (meta features can be ignored when not available); a latent variable $z$; and the target response $x$. A CVAE can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB) BIBREF23 by maximizing the variational lower bound of $x$ given $c$, according to: The typical CVAE consists of a prior network $p_{\theta }(z | c)$, which is used to approximate $p(z | c)$, a recognition network $p_{\phi }(z | c, x)$, which is used to approximate the posterior distribution $q(z | c, x)$, and a decoder $p_{\theta }(x | z, c)$, which is used to approximate $p(x | z, c)$. By assuming $z$ follows a multivariate Gaussian distribution with a diagonal co-variance matrix, the evidence lower bound (ELBO) can be written as where $\mathcal {L}_{REC}$ denotes the reconstruction loss and $\mathcal {L}_{KL}$ denotes the Kullback-Leibler (KL) divergence between the posterior and prior. In dialogue generation tasks, previous works BIBREF2, BIBREF16 apply RNN encoders (with GRU or LSTM cell) to encode dialogue contexts and responses separately. The condition $c$ is represented by the concatenation of the last hidden state of the context encoder and the meta features (e.g., topic, emotion), while the response $x$ is represented by the last hidden state of the response encoder. Then the prior network $p_{\theta }(z | c)$ and the recognition network $p_{\phi }(z | c, x)$ parameterized by multi-layer perceptrons (MLPs) are applied to approximate the means and the log variances of the prior latent distribution $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and posterior latent distribution $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. With the reparameterization trick BIBREF23, we can obtain samples of the prior latent variable (for testing) from $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and samples of the posterior latent variable (for training) from $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. Finally, an RNN decoder uses $z$ and $c$ as the initial state to predict the response $x$. The vanishing latent variable problem BIBREF24 is a common issue in RNN-based CVAEs. That is, the powerful autoregressive RNN decoder first learns to ignore the latent variable, and decodes the response by only conditioning on the previous tokens. Thus the latent variable fails to encode the meaningful information, and the CVAE deteriorates to a seq2seq model. To alleviate this issue, KL annealing BIBREF24 and bag-of-word loss BIBREF2 have been proposed, and have shown effectiveness in various dialogue tasks BIBREF2, BIBREF16.
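For reference, the reparameterized sampling and the Gaussian KL term that enter the ELBO above can be sketched in a few lines of PyTorch. The function names are ours; KL annealing is reduced to a single `kl_weight`, and the bag-of-word auxiliary loss is omitted.

```python
import torch

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I)  (reparameterization trick).
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dims.
    kl = 0.5 * (logvar_p - logvar_q
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1.0)
    return kl.sum(dim=-1)

def cvae_loss(recon_nll, mu_q, logvar_q, mu_p, logvar_p, kl_weight):
    # ELBO-style objective: reconstruction NLL plus the (annealed) KL term.
    return recon_nll + kl_weight * gaussian_kl(mu_q, logvar_q, mu_p, logvar_p).mean()
```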
<<</Conditional Variational Autoencoder for Dialogue Generation>>> <<<CVAE with Transformer>>> The aforementioned RNN-based CVAE framework integrates the latent variable into the initial state of the RNN decoder, while in the Transformer it is more flexible to incorporate the latent variable embedding into the first input token of the decoder to generate the initial state. The overall architecture of the GVT is depicted in Figure FIGREF9. Different from RNNs, the Transformer encoder maps an input sequence of symbol representations to a sequence of contextualized representations BIBREF0. In order to get fixed-dimension representations of the response and context, we add a special token $CLS$ at the beginning of the input sequence, as in BERT BIBREF25, to compute the weighted sum of the output representations via self-attention. Thus the output representation of the token $CLS$ is considered as the representation of the whole sequence. Then we introduce a recognition network and a prior network to compute the posterior latent variable and the prior latent variable as in BIBREF2, BIBREF16. We add the latent variable sample $z$ and meta features $m$ (which can be ignored when not available) into $e_{SOS}$, the embedding of the start-of-sequence token $SOS$: Finally, the Transformer decoder decodes the response $x$ sequentially while attending to the new embedding $e^{\prime }_{SOS}$ of token $SOS$ with latent information. This design enhances the CVAE framework with the global receptive field, and each position of the GVT can directly access the latent information via the multi-head self-attention mechanism. However, we still observe that the GVT suffers from the vanishing latent variable problem, as the RNN-based CVAE does, because the decoder can bypass the latent information by paying less attention to the $SOS$ token. Hence, we apply KL annealing and the bag-of-word auxiliary loss $\mathcal {L}_{bow}$ as in BIBREF2, BIBREF16 to preserve the useful information of the latent variable. Therefore, the learning objective of the GVT is defined as follows: <<</CVAE with Transformer>>> <<</Preliminaries>>> <<<Sequential Variational Transformer>>> In order to augment the capacity of the latent variable with multi-modal distributions and to better utilize the latent information, we further explore incorporating a sequence of latent variables into the decoding process. We introduce the Sequential Variational Transformer (SVT) with a novel variational decoder layer which generates latent variables for each position: $z=\left(z_{1}, \dots , z_{T}\right)$. Similar to BIBREF3, we interpret the latent variables as a generation plan for the future sequence. Unlike previous CVAE models which use an extra encoder to encode the response separately BIBREF2, BIBREF16 or use a backward RNN to encode the future sequence for each time step BIBREF3, BIBREF4, SVT uses a Non-causal Multi-head Attention, which leaks the future information to the recognition network for computing the posterior latent variables. As shown in Figure FIGREF13, the SVT shares the same encoder as the standard Transformer BIBREF0, while its decoder consists of a variational decoder layer followed by a stack of $N$ standard Transformer decoder layers. The variational decoder layer has two paths for computing the posterior latent variable and the prior latent variable, respectively. We denote them as the Posterior Path and the Prior Path.
<<<Prior Path>>> The Prior Path (solid line in Figure FIGREF13) has a masked multi-head self-attention sub-layer which performs causal attention on the shifted response, followed by a multi-head attention sub-layer which performs encoder-decoder attention over the context encoder. The last sub-layer is composed of an MLP prior network, which approximates a sequence of prior latent variables, one for each position, and a Position-wise Feed-Forward Network (FFN), which fuses the latent information $z$ with the observed information representation $o^P$ before the prior network (shown in Figure FIGREF13). Specifically, we concatenate $o^P$ with $z$ as the input to the FFN, and the FFN passes the fused representation to the next layer. As in BIBREF0, in the variational decoder layer, each sub-layer is followed by a residual connection and layer normalization. That is, the output of each sub-layer is $LayerNorm(x + Sublayer(x))$. We decompose the response $x$ as $x = \left(x_1, \cdots , x_T\right)$ and the latent variable $z$ as $z=\left(z_{1}, \dots , z_{T}\right)$. The prior model produces latent variables at each position $z_t$ by not only conditioning on the input condition $c$ (the concatenation of context and meta features), but also conditioning on the observed response tokens $x_{1:t-1}$. By assuming $z_t$ follows a multivariate Gaussian distribution, the prior model becomes: where <<</Prior Path>>> <<<Posterior Path>>> The only difference between the Posterior Path (dashed line in Figure FIGREF13) and the Prior Path is that the mask is removed from the masked multi-head attention. Thus the masked (causal) multi-head attention becomes non-causal multi-head attention, which allows each position to attend to the subsequent positions. Then, the second multi-head attention sub-layer (sharing the same weights with the prior path) performs posterior attention on the encoder and passes the posterior observed information $o_R$ to the recognition network. The recognition network produces the posterior latent variable for each position $z_t$ as: where During training, the posterior path guides the learning of the prior path via a KL divergence constraint: In the training phase, the posterior latent variables from Equation DISPLAY_FORM17 are passed to the FFN, while in the testing phase the Posterior Path is blocked and the posterior latent variables are replaced with the prior latent variables from Equation DISPLAY_FORM15. During the decoding process, each response token $x_t$ is generated by conditioning on the observed response tokens $x_{1:t-1}$, the latent variables $z_{1:t}$, and the input condition $c$. The decoding process of the SVT is: <<</Posterior Path>>> <<<Auxiliary Loss>>> As we expect the latent variables to be a generation plan for the future sequence, we inject such a bias into the latent variables by using an auxiliary loss, Sequential-Bag-of-Word (SBOW), which was proposed by BIBREF4. The idea of the SBOW auxiliary objective is to sequentially predict the bag of succeeding target words $x_{t:T}$ by using the latent variable $z_t$. In our case, the prediction of the succeeding words also leverages the observed information $c$ and $x_{1:t-1}$. Thus the auxiliary loss at each position is computed by: where $f_{aux}$ is a feed-forward neural network with a softmax output.
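A rough sketch of this auxiliary term (PyTorch; the function and variable names are illustrative assumptions, not the authors' code), for a single position t whose auxiliary network output is a vector of vocabulary logits:

import torch.nn.functional as F

def sbow_loss_at_t(aux_logits_t, future_token_ids):
    # aux_logits_t: tensor of shape (vocab_size,) produced by f_aux at position t.
    # future_token_ids: LongTensor holding the ids of the succeeding words x_{t:T}.
    log_probs = F.log_softmax(aux_logits_t, dim=-1)
    # Negative log-likelihood of every word in the succeeding bag;
    # the full objective aggregates this quantity over positions t.
    return -log_probs[future_token_ids].sum()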
<<</Auxiliary Loss>>> <<<Learning>>> The evidence lower bound (ELBO) objective of the SVT is the sum of the reconstruction loss $\mathcal {L}_{REC}(t)$ and the Kullback-Leibler divergence loss $\mathcal {L}_{KL}(t)$ at each position: We regularize the ELBO learning objective with an auxiliary loss $\mathcal {L}_{sbow}$ to enhance the expressiveness of the latent variables. Therefore, the final learning objective is formulated as follows: where, <<</Learning>>> <<</Sequential Variational Transformer>>> <<<Experiments>>> <<<Dataset>>> We evaluate the proposed models on three conversation datasets: MojiTalk BIBREF16, PersonaChat BIBREF11, and Empathetic-Dialogues BIBREF26. <<<MojiTalk>>> dataset consists of 596,959 post and response pairs from Twitter. Each response is labeled by one emoji which indicates the response emotion. There are 64 emoji labels in total, with an unbalanced distribution. We use the preprocessed data and vocabulary released by BIBREF16 and follow the same split of train/validation/test set. <<</MojiTalk>>> <<<PersonaChat & Empathetic-Dialogues>>> are one-to-one multi-turn conversation datasets. In PersonaChat (Persona), the conversations revolve around personas which are established by four to six persona sentences. In Empathetic-Dialogues (ED), the conversations are mostly about a situation that happened to one of the speakers, while the other speaker tries to understand the feeling and reply accordingly. Both datasets are about modeling social skills and the goal is to make the user more engaged. Therefore, we combine the train/validation/test sets of the two datasets. <<</PersonaChat & Empathetic-Dialogues>>> <<</Dataset>>> <<<Baselines>>> We compare the proposed models with the following baselines: <<<Seq2Seq.>>> An attention-based sequence-to-sequence model with the emoji vector as additional input, as described in MojiTalk BIBREF16. <<</Seq2Seq.>>> <<<CVAE.>>> An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenates it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, an early stopping strategy and the bag-of-word auxiliary loss are applied during training. We use the implementation released by BIBREF16. <<</CVAE.>>> <<<Transformer.>>> A Transformer BIBREF0 trained with a Maximum Likelihood Estimation (MLE) objective, which can be considered the base model for both the GVT and SVT. <<</Transformer.>>> <<</Baselines>>> <<<Hyper-parameters and Training Setup>>> We use a 4-layer Transformer as our base model. The hidden size is set to 300 everywhere, and the word embeddings are initialized with the 300-dimensional pre-trained GloVe embeddings for both encoder and decoder. The multi-head attention sub-layers are made up of 4 attention heads, each with embedding dimension 64. The size of the latent variable is 300. The recognition network and the prior network are parameterized by 3-layer MLPs with a hidden dimension of 512. Following the training setup of BIBREF16, we first train our baseline Transformer model with the MLE objective and use it to initialize its counterparts in both GVT and SVT. Then the models are trained end-to-end with the Adam optimizer with an initial learning rate of $2\times 10^{-4}$. KL annealing and an early stopping strategy are applied as in BIBREF16. At test time, we use a greedy decoding strategy for all models.
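KL annealing, used throughout these setups, is typically a schedule on the weight of the KL term; a common linear variant looks like the sketch below (the schedule shape and warmup length are assumptions for illustration, not details reported in the paper):

def kl_weight(step, warmup_steps=10000):
    # Linearly increase the KL weight from 0 to 1 over the first warmup_steps
    # updates, then keep it at 1; warmup_steps is an illustrative value.
    return min(1.0, step / warmup_steps)

# Inside the training loop the terms would then be combined roughly as
# loss = rec_loss + kl_weight(step) * kl_loss + bow_loss
# so that the decoder cannot ignore the latent variable early in training.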
<<</Hyper-parameters and Training Setup>>> <<<Automatic Evaluation>>> <<<PPL & KLD.>>> The evaluation metrics include Perplexity (PPL) and the Kullback-Leibler divergence between the posterior and prior (KLD). A well-trained model should achieve a low reconstruction loss and a small but non-trivial KL distance BIBREF27. <<</PPL & KLD.>>> <<<Diversity.>>> To measure the generation diversity, we calculate Dist-1, Dist-2, and Dist-3, the ratio of the number of distinct n-grams (unigrams, bigrams, and trigrams) over the total number of n-grams. A higher distinct n-gram ratio indicates more diverse generation. <<</Diversity.>>> <<<Embeddings Similarity.>>> This metric computes the cosine similarity between the sentence embedding of a generated sequence and that of a ground-truth response. In our experiments, we introduce two different ways to represent sentence embeddings. The first is $\textbf {EMB}_\textbf {FT}$ BIBREF28, which calculates the average of word embeddings in a sentence using FastText BIBREF29, which is trained with Common Crawl and Wikipedia data. We use FastText embeddings instead of other pre-trained word embeddings because they can handle the out-of-vocabulary issue. However, representing a sentence by simply taking the average of word embeddings ignores the context information. Therefore, we propose to use a pre-trained language model, BERT BIBREF25, to compute the contextualized sentence representation. Specifically, we use a pre-trained BERT to encode a generated sentence and a ground-truth response, and average the output representations of each to obtain the sentence embeddings. We denote such a contextualized sentence embedding as $\textbf {EMB}_\textbf {BERT}$. <<</Embeddings Similarity.>>> <<</Automatic Evaluation>>> <<<Human Evaluation>>> In the human evaluation, we prepare multiple-choice questions for human evaluators, and the answers are the generation results from the five models (Seq2Seq, CVAE, Transformer, GVT, and SVT). We first randomly sample 100 dialogues and their corresponding responses from our models and the baselines. For each response, we assign three human annotators to select the most coherent (on-topic) response to the context (multiple answers are allowed). In addition, annotators also need to choose the best response correlated to the given emoji label in MojiTalk and the most engaging response in PersonaChat and Empathetic-Dialogues. If there is no response that satisfies the evaluators, they can choose “all answers are bad", which means none of the answers is chosen. We compute the rate at which each model is chosen to quantify generation quality with respect to the human standard. <<</Human Evaluation>>> <<</Experiments>>> <<<Results>>> <<<Quantitative Analysis>>> The automatic evaluation results are shown in Table TABREF35. Transformer-based models have significantly lower perplexity compared to RNN-based models, which indicates that the global receptive field provided by multi-head self-attention boosts the modeling capacity. However, the deterministic Seq2Seq and Transformer models tend to generate generic responses, which leads to low diversity scores. Meanwhile, incorporating a stochastic latent variable into both models (CVAE and GVT) promotes more diverse generation results and boosts the diversity scores such as Dist-1, Dist-2, and Dist-3. Compared to the baseline models, the GVT achieves a relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation.
Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL. On the other hand, SVT achieves the highest scores on the two semantic relevance-oriented metrics, $\textbf {EMB}_\textbf {FT}$ and $\textbf {EMB}_\textbf {BERT}$, on the MojiTalk dataset, while on the combined dataset of Persona and ED, we observe a performance drop of SVT compared to the other models. This is because both Persona and ED are well designed and have lower entropy than MojiTalk, which is collected from Twitter. We hypothesize that the sequential latent variables have no advantage in terms of similarity to a single, fixed "gold response" when modeling low-entropy responses. Indeed, in open-domain dialogue response generation, automatic metrics are not always aligned with human judgment BIBREF28. In contrast, the human evaluation results reported in Table TABREF35 demonstrate that the generations of SVT are closer to the human standard in terms of coherence, invoked emotion and engagingness. <<</Quantitative Analysis>>> <<<Qualitative Analysis>>> Table TABREF42 compares the generations of the proposed models with the baselines given the same contexts. We observe that the Seq2Seq and the vanilla Transformer tend to generate generic and repetitive responses (e.g., i am not sure) in MojiTalk because their deterministic structure fails to capture the variability in dialogue responses. By incorporating stochastic latent variables, the CVAE and GVT can generate more diverse responses, but their responses are sometimes digressive (e.g., example 5). Interestingly, GVT and SVT generalize the topic beyond the context, which makes the dialogue more engaging (e.g., example 4). In general, SVT is able to generate more coherent and informative responses. <<</Qualitative Analysis>>> <<</Results>>> <<<Conclusion>>> This paper introduces the Variational Transformer (VT), a variational self-attentive feed-forward sequence model that combines the global receptive field of a Transformer with the variational nature of a CVAE. We propose two types of the VT: 1) the Global Variational Transformer (GVT), which incorporates a global latent variable as additional input to the Transformer decoder; and 2) the Sequential Variational Transformer (SVT), which generates latent variables for each position during the decoding process. Quantitative and qualitative experimental results show that our models outperform baselines in terms of diversity, semantic relevance, and human judgment. In future work, we will utilize pre-trained language models BIBREF30 as the backbone to strengthen the language model of the VT for better generation. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Related work, Abstract" ], "type": "disordered_section" }
1909.03544
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER <<<Abstract>>> Contextualized embeddings, which capture appropriate word meaning depending on context, have recently been proposed. We evaluate two meth ods for precomputing such embeddings, BERT and Flair, on four Czech text processing tasks: part-of-speech (POS) tagging, lemmatization, dependency pars ing and named entity recognition (NER). The first three tasks, POS tagging, lemmatization and dependency parsing, are evaluated on two corpora: the Prague Dependency Treebank 3.5 and the Universal Dependencies 2.3. The named entity recognition (NER) is evaluated on the Czech Named Entity Corpus 1.1 and 2.0. We report state-of-the-art results for the above mentioned tasks and corpora. <<</Abstract>>> <<<Introduction>>> Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word which sums over all its occurrences, ignoring the appropriate word meaning in various contexts, the contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models. Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from internal states of deep bidirectional language model, pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from internal states of a character-level bidirectional language model, thus significantly increasing state of the art of POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from preceeding and following context at the same time, at the cost of increased processing costs. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks. Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks. <<</Introduction>>> <<<Related Work>>> As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems with one exception of BIBREF5 who hold the state of the art for Czech POS tagging and lemmatization, achieved with the recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Zabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques . 
In the multilingual shared task CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and the POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages. Interestingly, all 26 participant systems employed the artificial neural networks in some way. Of these, 3 participant systems used (a slightly modified variant of) the only newly presented contextualized embeddings called ELMo BIBREF0, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time. For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15. <<</Related Work>>> <<<Datasets>>> <<<Prague Dependency Treebank 3.5>>> The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes is presented in Table TABREF7. A detailed description of the morphological system can be found in BIBREF16, a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2. In evaluation, we compute: [noitemsep,topsep=0pt] POS tagging accuracy, lemmatization accuracy, unlabeled attachment score (UAS), labeled attachment score (LAS). <<</Prague Dependency Treebank 3.5>>> <<<Universal Dependencies>>> The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of PDT 3.5 a-layer to Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels. To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics: [noitemsep,topsep=0pt] UPOS – universal POS tags accuracy, XPOS – language-specific POS tags accuracy, UFeats – universal subset of morphological features accuracy, Lemmas – lemmatization accuracy, UAS – unlabeled attachment score, LAS – labeled attachment score, MLAS – morphology-aware LAS, BLEX – bi-lexical dependency score. 
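For concreteness, the two plain attachment scores in this list (UAS and LAS) can be computed from aligned gold and predicted (head, dependency-label) pairs roughly as follows; this is a simplified sketch that ignores the tokenization alignment the official evaluation script also handles:

def uas_las(gold, pred):
    # gold, pred: equal-length lists of (head_index, deprel) tuples, one per word.
    assert len(gold) == len(pred) and len(gold) > 0
    uas_hits = sum(g[0] == p[0] for g, p in zip(gold, pred))   # head correct
    las_hits = sum(g == p for g, p in zip(gold, pred))         # head and label correct
    return uas_hits / len(gold), las_hits / len(gold)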
<<</Universal Dependencies>>> <<<Czech Named Entity Corpus>>> The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities. The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities. We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature BIBREF13, BIBREF12 etc., the script only evaluates the first round annotation classes for the CNEC 1.1. For the CNEC 2.0, the script evaluates all annotated classes. <<</Czech Named Entity Corpus>>> <<</Datasets>>> <<<Neural Architectures>>> All our neural architectures are recurrent neural networks (RNNs). The POS tagging, lemmatization and dependency parsing is performed with the UDPipe 2.0 (Section SECREF16) and NER is performed with our new sequence-to-sequence model (Section SECREF36). <<<POS Tagging, Lemmatization, and Dependency Parsing>>> We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19. <<<POS Tagging and Lemmatization>>> The tagger employs a standard bi-LSTM architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are performed, followed by a softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, therefore considering lemmatization as another tagging task. We construct a lemma generation rule from a given form and lemma as follows: [noitemsep,topsep=0pt] We start by finding the longest continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class. If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c). All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma. <<</POS Tagging and Lemmatization>>> <<<Dependency Parsing>>> The dependency parsing is again predicted using UDPipe 2.0 architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are again performed, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees. 
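A minimal sketch of a biaffine arc scorer in the spirit of BIBREF22 (PyTorch; dimensions, initialization and names are illustrative assumptions, and the actual UDPipe 2.0 implementation differs in details such as the label scorer and batching):

import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    def __init__(self, hidden_dim, arc_dim=512):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.U = nn.Parameter(torch.randn(arc_dim, arc_dim) * 0.01)
        self.head_bias = nn.Parameter(torch.zeros(arc_dim))

    def forward(self, states):
        # states: (sentence_len, hidden_dim) word representations from the LSTM stack.
        heads = self.head_mlp(states)  # words viewed as candidate heads
        deps = self.dep_mlp(states)    # words viewed as dependents
        # scores[i, j] = score of word j being the head of word i
        return deps @ self.U @ heads.t() + (heads @ self.head_bias).unsqueeze(0)

A separate (bi)affine classifier over the selected arcs would then predict the dependency labels.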
In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing: [noitemsep,topsep=0pt] not using them at all; adding predicted POS tags and lemmas on input; perform joint training of POS tags, lemmatization, and dependency parsing. In this case, we share first two bidirectional LSTM layers between the tagger and the parser. <<</Dependency Parsing>>> <<<Input Embeddings>>> In our baseline model, we use the end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task. Our architecture can optionally employ the following additional inputs [noitemsep,topsep=0pt] pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data. BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word. Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096. <<</Input Embeddings>>> <<<POS Tags and Lemmas Decoding>>> Optionally, we employ a morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model. <<</POS Tags and Lemmas Decoding>>> <<</POS Tagging, Lemmatization, and Dependency Parsing>>> <<<Named Entity Recognition>>> We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels. The system is a encoder-decoder architecture. The encoder is a bi-directional LSTM and the decoder is a LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use a hard attention on the word whose label(s) is being predicted. We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As a regularization, we apply dropout with rate $0.5$ and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search. In this model, we use the following word- and character-level word embeddings: [noitemsep,topsep=0pt] pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model. 
end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot). end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters. Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in the UDPipe 2.0 (Section SECREF16). <<</Named Entity Recognition>>> <<</Neural Architectures>>> <<<Results>>> <<<POS Tagging and Lemmatization on PDT 3.5>>> The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as high. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations. The BERT embeddings alone bring highest improvement in performance. Furthermore, combination with WE or Flair again yields performance increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results. Utilization of morphological dictionary improves prediction accuracy. However, as the performance of a model itself increases, the gains obtained by the morphological dictionary diminishes – for a model without any pretrained embeddings, morphological dictionary improves POS tagging by and lemmatization by $0.43\%$ and $0.45\%$, while the best performing model gains only $0.11\%$ and $0.23\%$. <<</POS Tagging and Lemmatization on PDT 3.5>>> <<<Dependency Parsing on PDT 3.5>>> The evaluation of the contextualized embeddings methods as well as various ways of POS tag utilization is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, BERT embeddings employment results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that BERT syntactic representations encompass the Flair embeddings. When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. A better way of POS tags and lemmas exploitation is achieved in a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to syntax parsing only, adding Flair embeddings to BERT results in moderate gain – we hypothesise that the increase is due to the complementary morphological information present in Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger of the best configuration from Section SECREF43. 
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To our best knowledge, research on PDT parsing was performed mostly in the first decade of this century, therefore even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves nearly 50% error reduction both in UAS and LAS. <<</Dependency Parsing on PDT 3.5>>> <<<POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines. We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods. In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with state of the art, Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in evaluation, according to the CoNLL 2018 shared task training and evaluation protocol. Our system reuses segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47). Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme. <<</POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> <<</Results>>> <<<Conclusion>>> We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Conclusion" ], "type": "disordered_section" }
1909.03544
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER <<<Abstract>>> Contextualized embeddings, which capture appropriate word meaning depending on context, have recently been proposed. We evaluate two meth ods for precomputing such embeddings, BERT and Flair, on four Czech text processing tasks: part-of-speech (POS) tagging, lemmatization, dependency pars ing and named entity recognition (NER). The first three tasks, POS tagging, lemmatization and dependency parsing, are evaluated on two corpora: the Prague Dependency Treebank 3.5 and the Universal Dependencies 2.3. The named entity recognition (NER) is evaluated on the Czech Named Entity Corpus 1.1 and 2.0. We report state-of-the-art results for the above mentioned tasks and corpora. <<</Abstract>>> <<<Introduction>>> Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word which sums over all its occurrences, ignoring the appropriate word meaning in various contexts, the contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models. Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from internal states of deep bidirectional language model, pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from internal states of a character-level bidirectional language model, thus significantly increasing state of the art of POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from preceeding and following context at the same time, at the cost of increased processing costs. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks. Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks. <<</Introduction>>> <<<Related Work>>> As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems with one exception of BIBREF5 who hold the state of the art for Czech POS tagging and lemmatization, achieved with the recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Zabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques . 
In the multilingual shared task CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and the POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages. Interestingly, all 26 participant systems employed the artificial neural networks in some way. Of these, 3 participant systems used (a slightly modified variant of) the only newly presented contextualized embeddings called ELMo BIBREF0, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time. For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15. <<</Related Work>>> <<<Datasets>>> <<<Prague Dependency Treebank 3.5>>> The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes is presented in Table TABREF7. A detailed description of the morphological system can be found in BIBREF16, a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2. In evaluation, we compute: [noitemsep,topsep=0pt] POS tagging accuracy, lemmatization accuracy, unlabeled attachment score (UAS), labeled attachment score (LAS). <<</Prague Dependency Treebank 3.5>>> <<<Universal Dependencies>>> The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of PDT 3.5 a-layer to Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels. To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics: [noitemsep,topsep=0pt] UPOS – universal POS tags accuracy, XPOS – language-specific POS tags accuracy, UFeats – universal subset of morphological features accuracy, Lemmas – lemmatization accuracy, UAS – unlabeled attachment score, LAS – labeled attachment score, MLAS – morphology-aware LAS, BLEX – bi-lexical dependency score. 
<<</Universal Dependencies>>> <<<Czech Named Entity Corpus>>> The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities. The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities. We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature BIBREF13, BIBREF12 etc., the script only evaluates the first round annotation classes for the CNEC 1.1. For the CNEC 2.0, the script evaluates all annotated classes. <<</Czech Named Entity Corpus>>> <<</Datasets>>> <<<Neural Architectures>>> All our neural architectures are recurrent neural networks (RNNs). The POS tagging, lemmatization and dependency parsing is performed with the UDPipe 2.0 (Section SECREF16) and NER is performed with our new sequence-to-sequence model (Section SECREF36). <<<POS Tagging, Lemmatization, and Dependency Parsing>>> We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19. <<<POS Tagging and Lemmatization>>> The tagger employs a standard bi-LSTM architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are performed, followed by a softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, therefore considering lemmatization as another tagging task. We construct a lemma generation rule from a given form and lemma as follows: [noitemsep,topsep=0pt] We start by finding the longest continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class. If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c). All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma. <<</POS Tagging and Lemmatization>>> <<<Dependency Parsing>>> The dependency parsing is again predicted using UDPipe 2.0 architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are again performed, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees. 
In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing: [noitemsep,topsep=0pt] not using them at all; adding predicted POS tags and lemmas on input; perform joint training of POS tags, lemmatization, and dependency parsing. In this case, we share first two bidirectional LSTM layers between the tagger and the parser. <<</Dependency Parsing>>> <<<Input Embeddings>>> In our baseline model, we use the end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task. Our architecture can optionally employ the following additional inputs [noitemsep,topsep=0pt] pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data. BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word. Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096. <<</Input Embeddings>>> <<<POS Tags and Lemmas Decoding>>> Optionally, we employ a morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model. <<</POS Tags and Lemmas Decoding>>> <<</POS Tagging, Lemmatization, and Dependency Parsing>>> <<<Named Entity Recognition>>> We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels. The system is a encoder-decoder architecture. The encoder is a bi-directional LSTM and the decoder is a LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use a hard attention on the word whose label(s) is being predicted. We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As a regularization, we apply dropout with rate $0.5$ and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search. In this model, we use the following word- and character-level word embeddings: [noitemsep,topsep=0pt] pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model. 
end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot). end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters. Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in the UDPipe 2.0 (Section SECREF16). <<</Named Entity Recognition>>> <<</Neural Architectures>>> <<<Results>>> <<<POS Tagging and Lemmatization on PDT 3.5>>> The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as high. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations. The BERT embeddings alone bring highest improvement in performance. Furthermore, combination with WE or Flair again yields performance increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results. Utilization of morphological dictionary improves prediction accuracy. However, as the performance of a model itself increases, the gains obtained by the morphological dictionary diminishes – for a model without any pretrained embeddings, morphological dictionary improves POS tagging by and lemmatization by $0.43\%$ and $0.45\%$, while the best performing model gains only $0.11\%$ and $0.23\%$. <<</POS Tagging and Lemmatization on PDT 3.5>>> <<<Dependency Parsing on PDT 3.5>>> The evaluation of the contextualized embeddings methods as well as various ways of POS tag utilization is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, BERT embeddings employment results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that BERT syntactic representations encompass the Flair embeddings. When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. A better way of POS tags and lemmas exploitation is achieved in a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to syntax parsing only, adding Flair embeddings to BERT results in moderate gain – we hypothesise that the increase is due to the complementary morphological information present in Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger of the best configuration from Section SECREF43. 
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To our best knowledge, research on PDT parsing was performed mostly in the first decade of this century, therefore even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves nearly 50% error reduction both in UAS and LAS. <<</Dependency Parsing on PDT 3.5>>> <<<POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines. We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods. In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with state of the art, Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in evaluation, according to the CoNLL 2018 shared task training and evaluation protocol. Our system reuses segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47). Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme. <<</POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> <<</Results>>> <<<Conclusion>>> We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Datasets" ], "type": "disordered_section" }
1909.03544
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER <<<Abstract>>> Contextualized embeddings, which capture appropriate word meaning depending on context, have recently been proposed. We evaluate two meth ods for precomputing such embeddings, BERT and Flair, on four Czech text processing tasks: part-of-speech (POS) tagging, lemmatization, dependency pars ing and named entity recognition (NER). The first three tasks, POS tagging, lemmatization and dependency parsing, are evaluated on two corpora: the Prague Dependency Treebank 3.5 and the Universal Dependencies 2.3. The named entity recognition (NER) is evaluated on the Czech Named Entity Corpus 1.1 and 2.0. We report state-of-the-art results for the above mentioned tasks and corpora. <<</Abstract>>> <<<Introduction>>> Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word which sums over all its occurrences, ignoring the appropriate word meaning in various contexts, the contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models. Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from internal states of deep bidirectional language model, pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from internal states of a character-level bidirectional language model, thus significantly increasing state of the art of POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from preceeding and following context at the same time, at the cost of increased processing costs. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks. Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks. <<</Introduction>>> <<<Related Work>>> As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems with one exception of BIBREF5 who hold the state of the art for Czech POS tagging and lemmatization, achieved with the recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Zabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques . 
In the multilingual shared task CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and the POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages. Interestingly, all 26 participant systems employed the artificial neural networks in some way. Of these, 3 participant systems used (a slightly modified variant of) the only newly presented contextualized embeddings called ELMo BIBREF0, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time. For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15. <<</Related Work>>> <<<Datasets>>> <<<Prague Dependency Treebank 3.5>>> The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes is presented in Table TABREF7. A detailed description of the morphological system can be found in BIBREF16, a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2. In evaluation, we compute: [noitemsep,topsep=0pt] POS tagging accuracy, lemmatization accuracy, unlabeled attachment score (UAS), labeled attachment score (LAS). <<</Prague Dependency Treebank 3.5>>> <<<Universal Dependencies>>> The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of PDT 3.5 a-layer to Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels. To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics: [noitemsep,topsep=0pt] UPOS – universal POS tags accuracy, XPOS – language-specific POS tags accuracy, UFeats – universal subset of morphological features accuracy, Lemmas – lemmatization accuracy, UAS – unlabeled attachment score, LAS – labeled attachment score, MLAS – morphology-aware LAS, BLEX – bi-lexical dependency score. 
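To make the attachment scores in the metric list above concrete, the following toy Python sketch computes UAS and LAS for a single sentence from gold and predicted (head, label) pairs. It is only an illustration under the assumption of identical tokenization; the evaluation reported in this entry uses the official CoNLL 2018 shared task script, and the example sentence values below are invented.

# Toy illustration of unlabeled/labeled attachment scores (UAS/LAS).
# Each token is represented as (head_index, dependency_label); head 0 means the root.

def attachment_scores(gold, predicted):
    """Return (UAS, LAS) over aligned gold/predicted token lists."""
    assert len(gold) == len(predicted), "toy version assumes identical tokenization"
    total = len(gold)
    correct_heads = sum(1 for (gh, _), (ph, _) in zip(gold, predicted) if gh == ph)
    correct_labeled = sum(1 for (gh, gl), (ph, pl) in zip(gold, predicted)
                          if gh == ph and gl == pl)
    return correct_heads / total, correct_labeled / total

if __name__ == "__main__":
    # Invented 4-token sentence: gold vs. predicted (head, deprel) pairs.
    gold = [(2, "nsubj"), (0, "root"), (4, "det"), (2, "obj")]
    pred = [(2, "nsubj"), (0, "root"), (2, "det"), (2, "obj")]
    uas, las = attachment_scores(gold, pred)
    print(f"UAS = {uas:.2f}, LAS = {las:.2f}")  # UAS = 0.75, LAS = 0.75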
<<</Universal Dependencies>>> <<<Czech Named Entity Corpus>>> The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities. The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities. We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature BIBREF13, BIBREF12 etc., the script only evaluates the first round annotation classes for the CNEC 1.1. For the CNEC 2.0, the script evaluates all annotated classes. <<</Czech Named Entity Corpus>>> <<</Datasets>>> <<<Neural Architectures>>> All our neural architectures are recurrent neural networks (RNNs). The POS tagging, lemmatization and dependency parsing is performed with the UDPipe 2.0 (Section SECREF16) and NER is performed with our new sequence-to-sequence model (Section SECREF36). <<<POS Tagging, Lemmatization, and Dependency Parsing>>> We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19. <<<POS Tagging and Lemmatization>>> The tagger employs a standard bi-LSTM architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are performed, followed by a softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, therefore considering lemmatization as another tagging task. We construct a lemma generation rule from a given form and lemma as follows: [noitemsep,topsep=0pt] We start by finding the longest continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class. If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c). All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma. <<</POS Tagging and Lemmatization>>> <<<Dependency Parsing>>> The dependency parsing is again predicted using UDPipe 2.0 architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are again performed, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees. 
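As a rough illustration of the lemma generation rules described above, the sketch below derives a casing-insensitive rule from a (form, lemma) pair using the longest common substring and simple prefix/suffix rewrites. It is a simplification rather than UDPipe's actual implementation: the shortest edit scripts with delete_current_char and insert_char(c) operations are collapsed into plain string rewrites, the casing encoding is omitted, and the example word pairs are English ones chosen for readability.

from difflib import SequenceMatcher

def lemma_rule(form: str, lemma: str) -> str:
    """Encode a lemma as a rule relative to its form (simplified, casing ignored)."""
    f, l = form.lower(), lemma.lower()
    m = SequenceMatcher(None, f, l, autojunk=False).find_longest_match(0, len(f), 0, len(l))
    if m.size == 0:
        # No common substring at all: use the lemma itself as the class.
        return "absolute:" + l
    # Represent the prefix/suffix edit scripts simply as string rewrites
    # (a real shortest edit script would emit delete/insert operations instead).
    prefix = f"{f[:m.a]}>{l[:m.b]}"
    suffix = f"{f[m.a + m.size:]}>{l[m.b + m.size:]}"
    return f"rule:{prefix}|{suffix}"

if __name__ == "__main__":
    # Regular inflections collapse onto a shared class, which is what makes
    # treating lemmatization as tagging over rules feasible.
    print(lemma_rule("walking", "walk"))   # rule:>|ing>
    print(lemma_rule("talking", "talk"))   # rule:>|ing>  (same class as above)
    print(lemma_rule("went", "go"))        # absolute:go  (no common substring)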
In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing: [noitemsep,topsep=0pt] not using them at all; adding predicted POS tags and lemmas on input; perform joint training of POS tags, lemmatization, and dependency parsing. In this case, we share first two bidirectional LSTM layers between the tagger and the parser. <<</Dependency Parsing>>> <<<Input Embeddings>>> In our baseline model, we use the end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task. Our architecture can optionally employ the following additional inputs [noitemsep,topsep=0pt] pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data. BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word. Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096. <<</Input Embeddings>>> <<<POS Tags and Lemmas Decoding>>> Optionally, we employ a morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model. <<</POS Tags and Lemmas Decoding>>> <<</POS Tagging, Lemmatization, and Dependency Parsing>>> <<<Named Entity Recognition>>> We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels. The system is a encoder-decoder architecture. The encoder is a bi-directional LSTM and the decoder is a LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use a hard attention on the word whose label(s) is being predicted. We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As a regularization, we apply dropout with rate $0.5$ and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search. In this model, we use the following word- and character-level word embeddings: [noitemsep,topsep=0pt] pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model. 
end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot). end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters. Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in the UDPipe 2.0 (Section SECREF16). <<</Named Entity Recognition>>> <<</Neural Architectures>>> <<<Results>>> <<<POS Tagging and Lemmatization on PDT 3.5>>> The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as high. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations. The BERT embeddings alone bring highest improvement in performance. Furthermore, combination with WE or Flair again yields performance increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results. Utilization of morphological dictionary improves prediction accuracy. However, as the performance of a model itself increases, the gains obtained by the morphological dictionary diminishes – for a model without any pretrained embeddings, morphological dictionary improves POS tagging by and lemmatization by $0.43\%$ and $0.45\%$, while the best performing model gains only $0.11\%$ and $0.23\%$. <<</POS Tagging and Lemmatization on PDT 3.5>>> <<<Dependency Parsing on PDT 3.5>>> The evaluation of the contextualized embeddings methods as well as various ways of POS tag utilization is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, BERT embeddings employment results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that BERT syntactic representations encompass the Flair embeddings. When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. A better way of POS tags and lemmas exploitation is achieved in a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to syntax parsing only, adding Flair embeddings to BERT results in moderate gain – we hypothesise that the increase is due to the complementary morphological information present in Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger of the best configuration from Section SECREF43. 
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To our best knowledge, research on PDT parsing was performed mostly in the first decade of this century, therefore even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves nearly 50% error reduction both in UAS and LAS. <<</Dependency Parsing on PDT 3.5>>> <<<POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines. We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods. In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with state of the art, Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in evaluation, according to the CoNLL 2018 shared task training and evaluation protocol. Our system reuses segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47). Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme. <<</POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> <<</Results>>> <<<Conclusion>>> We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Neural Architectures" ], "type": "disordered_section" }
1909.12642
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> HateMonitors: Language Agnostic Abuse Detection in Social Media <<<Abstract>>> Reducing hateful and offensive content in online social media pose a dual problem for the moderators. On the one hand, rigid censorship on social media cannot be imposed. On the other, the free flow of such content cannot be allowed. Hence, we require efficient abusive language detection system to detect such harmful content in social media. In this paper, we present our machine learning model, HateMonitor, developed for Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC), a shared task at FIRE 2019. We have used a Gradient Boosting model, along with BERT and LASER embeddings, to make the system language agnostic. Our model came at First position for the German sub-task A. We have also made our model public at this https URL . <<</Abstract>>> <<<Introduction>>> In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on the society. Social media moderators are having a hard time in combating the rampant spread of hate speech as it is closely related to the other forms of abusive language. The evolution of new slangs and multilingualism, further adding to the complexity. Recently, there has been a sharp rise in hate speech related incidents in India, the lynchings being the clear indication BIBREF1. Arun et al. BIBREF1 suggests that hate speech in India is very complicated as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian language. For the first time, a shared task on abusive content detection has been released for Hindi language at HASOC 2019. This will fuel the hate speech and offensive language research for Indian languages. The inclusion of datasets for English and German language will give a performance comparison for detection of abusive content in high and low resource language. In this paper, we focus on the detection of multilingual hate speech detection that are written in Hindi, English, and German and describe our submission (HateMonitors) for HASOC at FIRE 2019 competition. Our system concatenates two types of sentence embeddings to represent each tweet and use machine learning models for classification. <<</Introduction>>> <<<Related works>>> Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorizes abusive language into two sub-classes – hate speech and offensive language. In their analysis of abusive language, Classifying abusive language into these two subtypes is more challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. 
BIBREF4 uses predefined language element and embeddings to train a regression model. With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, the research in hate and offensive speech detection has gained momentum. Silva et al. BIBREF9 performed a large scale study to understand the target of such hate speech on two social media platforms: Twitter and Whisper. These target could be the Refugees and Immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People could become the target of hate speech based on Nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19 as well. Public expressions of hate speech affects the devaluation of minority members BIBREF20, the exclusion of minorities from the society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22. One of the key issues with the current state of the hate and offensive language research is that the majority of the research is dedicated to the English language on BIBREF23. Few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, but the works are mostly monolingual. Any online social media platform contains people of different ethnicity, which results in the spread of information in multiple languages. Hence, a robust classifier is needed, which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27, HatEval BIBREF28 have focused on detection of abusive text in multiple languages recently. <<</Related works>>> <<<Dataset and Task description>>> The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages. <<<Datasets>>> We present the statistics for HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, English and Hindi are more or less balanced for sub-task A. For sub-task B German dataset is balanced but others are unbalanced. For sub-task C both the datasets are highly unbalanced. <<</Datasets>>> <<<Tasks>>> Sub-task A consists of building a binary classification model which can predict if a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, profanity. Each of the three languages had this subtask. Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task. Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset. <<</Tasks>>> <<</Dataset and Task description>>> <<<System Description>>> In this section, we will explain the details about our system, which comprises of two sub-parts- feature generation and model selection. Figure FIGREF15 shows the architecture of our system. 
<<<Feature Generation>>> <<<Preprocessing:>>> We preprocess the tweets before performing the feature extraction. The following steps were followed: We remove all the URLs. Convert text to lowercase. This step was not applied to the Hindi language since Devanagari script does not have lowercase and uppercase characters. We did not normalize the mentions in the text as they could potentially reveal important information for the embeddings encoders. Any numerical figure was normalized to a string `number'. We did not remove any punctuation and stop-words since the context of the sentence might get lost in such a process. Since we are using sentence embedding, it is essential to keep the context of the sentence intact. <<</Preprocessing:>>> <<<Feature vectors:>>> The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vector: BERT Embeddings and LASER Embeddings. For each post, we generate the BERT and LASER Embedding, which are then concatenated and fed as input to the final classifier. Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers(BERT) BIBREF29 has played a key role in the advancement of natural language processing domain (NLP). BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector with length of 768. LASER embeddings: Researchers at Facebook released a language agnostic sentence embeddings representations (LASER) BIBREF30, where the model jointly learns on 93 languages. The model takes the sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31. We pass the preprocessed sentences through each of these embedding models and got two separate sentence representation. Further, we concatenate the embeddings into one single feature vector of length 1792, which is then passed to the final classification model. <<</Feature vectors:>>> <<</Feature Generation>>> <<<Our Model>>> The amount of data in each category was insufficient to train a deep learning model. Building such deep models would lead to overfitting. So, we resorted to using simpler models such as SVM and Gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of winning solutions of many competition . Hence, we used LGBM as model for the downstream tasks in this competition. <<</Our Model>>> <<</System Description>>> <<<Results>>> The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively. <<</Results>>> <<<Discussion>>> In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. 
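To make the feature pipeline of this system more tangible, here is a minimal sketch that concatenates 768-dimensional BERT and 1024-dimensional LASER sentence vectors into the 1792-dimensional features described above and fits a LightGBM classifier for sub-task A. Random arrays stand in for the precomputed embeddings, and the hyperparameters are illustrative assumptions rather than the competition configuration.

import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Stand-ins for precomputed sentence embeddings of 1000 posts:
# 768-dim multilingual BERT vectors and 1024-dim LASER vectors.
bert_vecs = rng.normal(size=(1000, 768)).astype(np.float32)
laser_vecs = rng.normal(size=(1000, 1024)).astype(np.float32)
labels = rng.integers(0, 2, size=1000)          # sub-task A: HOF = 1, NOT = 0

# Concatenate into a single 1792-dimensional feature vector per post.
features = np.concatenate([bert_vecs, laser_vecs], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)  # illustrative settings
clf.fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))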
In subtask B, the highest F1 score for each language was reached by the profane class in table TABREF20. The model got confused between the OFFN, HATE and PRFN labels, which suggests that these models are not able to capture the context of the sentence. Subtask C was again a case of an imbalanced dataset, as the targeted (TIN) label gets the highest F1 score in table TABREF21. <<</Discussion>>> <<<Conclusion>>> In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We used an LGBM model on top of the embeddings to perform the downstream tasks. Our model for the German language obtained first position. The results provided a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Related works" ], "type": "disordered_section" }
1909.12642
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> HateMonitors: Language Agnostic Abuse Detection in Social Media <<<Abstract>>> Reducing hateful and offensive content in online social media pose a dual problem for the moderators. On the one hand, rigid censorship on social media cannot be imposed. On the other, the free flow of such content cannot be allowed. Hence, we require efficient abusive language detection system to detect such harmful content in social media. In this paper, we present our machine learning model, HateMonitor, developed for Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC), a shared task at FIRE 2019. We have used a Gradient Boosting model, along with BERT and LASER embeddings, to make the system language agnostic. Our model came at First position for the German sub-task A. We have also made our model public at this https URL . <<</Abstract>>> <<<Introduction>>> In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on the society. Social media moderators are having a hard time in combating the rampant spread of hate speech as it is closely related to the other forms of abusive language. The evolution of new slangs and multilingualism, further adding to the complexity. Recently, there has been a sharp rise in hate speech related incidents in India, the lynchings being the clear indication BIBREF1. Arun et al. BIBREF1 suggests that hate speech in India is very complicated as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian language. For the first time, a shared task on abusive content detection has been released for Hindi language at HASOC 2019. This will fuel the hate speech and offensive language research for Indian languages. The inclusion of datasets for English and German language will give a performance comparison for detection of abusive content in high and low resource language. In this paper, we focus on the detection of multilingual hate speech detection that are written in Hindi, English, and German and describe our submission (HateMonitors) for HASOC at FIRE 2019 competition. Our system concatenates two types of sentence embeddings to represent each tweet and use machine learning models for classification. <<</Introduction>>> <<<Related works>>> Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorizes abusive language into two sub-classes – hate speech and offensive language. In their analysis of abusive language, Classifying abusive language into these two subtypes is more challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. 
BIBREF4 uses predefined language element and embeddings to train a regression model. With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, the research in hate and offensive speech detection has gained momentum. Silva et al. BIBREF9 performed a large scale study to understand the target of such hate speech on two social media platforms: Twitter and Whisper. These target could be the Refugees and Immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People could become the target of hate speech based on Nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19 as well. Public expressions of hate speech affects the devaluation of minority members BIBREF20, the exclusion of minorities from the society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22. One of the key issues with the current state of the hate and offensive language research is that the majority of the research is dedicated to the English language on BIBREF23. Few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, but the works are mostly monolingual. Any online social media platform contains people of different ethnicity, which results in the spread of information in multiple languages. Hence, a robust classifier is needed, which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27, HatEval BIBREF28 have focused on detection of abusive text in multiple languages recently. <<</Related works>>> <<<Dataset and Task description>>> The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages. <<<Datasets>>> We present the statistics for HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, English and Hindi are more or less balanced for sub-task A. For sub-task B German dataset is balanced but others are unbalanced. For sub-task C both the datasets are highly unbalanced. <<</Datasets>>> <<<Tasks>>> Sub-task A consists of building a binary classification model which can predict if a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, profanity. Each of the three languages had this subtask. Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task. Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset. <<</Tasks>>> <<</Dataset and Task description>>> <<<System Description>>> In this section, we will explain the details about our system, which comprises of two sub-parts- feature generation and model selection. Figure FIGREF15 shows the architecture of our system. 
<<<Feature Generation>>> <<<Preprocessing:>>> We preprocess the tweets before performing the feature extraction. The following steps were followed: We remove all the URLs. Convert text to lowercase. This step was not applied to the Hindi language since Devanagari script does not have lowercase and uppercase characters. We did not normalize the mentions in the text as they could potentially reveal important information for the embeddings encoders. Any numerical figure was normalized to a string `number'. We did not remove any punctuation and stop-words since the context of the sentence might get lost in such a process. Since we are using sentence embedding, it is essential to keep the context of the sentence intact. <<</Preprocessing:>>> <<<Feature vectors:>>> The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vector: BERT Embeddings and LASER Embeddings. For each post, we generate the BERT and LASER Embedding, which are then concatenated and fed as input to the final classifier. Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers(BERT) BIBREF29 has played a key role in the advancement of natural language processing domain (NLP). BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector with length of 768. LASER embeddings: Researchers at Facebook released a language agnostic sentence embeddings representations (LASER) BIBREF30, where the model jointly learns on 93 languages. The model takes the sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31. We pass the preprocessed sentences through each of these embedding models and got two separate sentence representation. Further, we concatenate the embeddings into one single feature vector of length 1792, which is then passed to the final classification model. <<</Feature vectors:>>> <<</Feature Generation>>> <<<Our Model>>> The amount of data in each category was insufficient to train a deep learning model. Building such deep models would lead to overfitting. So, we resorted to using simpler models such as SVM and Gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of winning solutions of many competition . Hence, we used LGBM as model for the downstream tasks in this competition. <<</Our Model>>> <<</System Description>>> <<<Results>>> The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively. <<</Results>>> <<<Discussion>>> In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. 
In subtask B, the highest F1 score for each language was reached by the profane class in table TABREF20. The model got confused between the OFFN, HATE and PRFN labels, which suggests that these models are not able to capture the context of the sentence. Subtask C was again a case of an imbalanced dataset, as the targeted (TIN) label gets the highest F1 score in table TABREF21. <<</Discussion>>> <<<Conclusion>>> In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We used an LGBM model on top of the embeddings to perform the downstream tasks. Our model for the German language obtained first position. The results provided a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Conclusion" ], "type": "disordered_section" }
1909.12642
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> HateMonitors: Language Agnostic Abuse Detection in Social Media <<<Abstract>>> Reducing hateful and offensive content in online social media pose a dual problem for the moderators. On the one hand, rigid censorship on social media cannot be imposed. On the other, the free flow of such content cannot be allowed. Hence, we require efficient abusive language detection system to detect such harmful content in social media. In this paper, we present our machine learning model, HateMonitor, developed for Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC), a shared task at FIRE 2019. We have used a Gradient Boosting model, along with BERT and LASER embeddings, to make the system language agnostic. Our model came at First position for the German sub-task A. We have also made our model public at this https URL . <<</Abstract>>> <<<Introduction>>> In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on the society. Social media moderators are having a hard time in combating the rampant spread of hate speech as it is closely related to the other forms of abusive language. The evolution of new slangs and multilingualism, further adding to the complexity. Recently, there has been a sharp rise in hate speech related incidents in India, the lynchings being the clear indication BIBREF1. Arun et al. BIBREF1 suggests that hate speech in India is very complicated as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian language. For the first time, a shared task on abusive content detection has been released for Hindi language at HASOC 2019. This will fuel the hate speech and offensive language research for Indian languages. The inclusion of datasets for English and German language will give a performance comparison for detection of abusive content in high and low resource language. In this paper, we focus on the detection of multilingual hate speech detection that are written in Hindi, English, and German and describe our submission (HateMonitors) for HASOC at FIRE 2019 competition. Our system concatenates two types of sentence embeddings to represent each tweet and use machine learning models for classification. <<</Introduction>>> <<<Related works>>> Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorizes abusive language into two sub-classes – hate speech and offensive language. In their analysis of abusive language, Classifying abusive language into these two subtypes is more challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. 
BIBREF4 uses predefined language element and embeddings to train a regression model. With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, the research in hate and offensive speech detection has gained momentum. Silva et al. BIBREF9 performed a large scale study to understand the target of such hate speech on two social media platforms: Twitter and Whisper. These target could be the Refugees and Immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People could become the target of hate speech based on Nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19 as well. Public expressions of hate speech affects the devaluation of minority members BIBREF20, the exclusion of minorities from the society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22. One of the key issues with the current state of the hate and offensive language research is that the majority of the research is dedicated to the English language on BIBREF23. Few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, but the works are mostly monolingual. Any online social media platform contains people of different ethnicity, which results in the spread of information in multiple languages. Hence, a robust classifier is needed, which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27, HatEval BIBREF28 have focused on detection of abusive text in multiple languages recently. <<</Related works>>> <<<Dataset and Task description>>> The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages. <<<Datasets>>> We present the statistics for HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, English and Hindi are more or less balanced for sub-task A. For sub-task B German dataset is balanced but others are unbalanced. For sub-task C both the datasets are highly unbalanced. <<</Datasets>>> <<<Tasks>>> Sub-task A consists of building a binary classification model which can predict if a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, profanity. Each of the three languages had this subtask. Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task. Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset. <<</Tasks>>> <<</Dataset and Task description>>> <<<System Description>>> In this section, we will explain the details about our system, which comprises of two sub-parts- feature generation and model selection. Figure FIGREF15 shows the architecture of our system. 
<<<Feature Generation>>> <<<Preprocessing:>>> We preprocess the tweets before performing the feature extraction. The following steps were followed: We remove all the URLs. Convert text to lowercase. This step was not applied to the Hindi language since Devanagari script does not have lowercase and uppercase characters. We did not normalize the mentions in the text as they could potentially reveal important information for the embeddings encoders. Any numerical figure was normalized to a string `number'. We did not remove any punctuation and stop-words since the context of the sentence might get lost in such a process. Since we are using sentence embedding, it is essential to keep the context of the sentence intact. <<</Preprocessing:>>> <<<Feature vectors:>>> The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vector: BERT Embeddings and LASER Embeddings. For each post, we generate the BERT and LASER Embedding, which are then concatenated and fed as input to the final classifier. Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers(BERT) BIBREF29 has played a key role in the advancement of natural language processing domain (NLP). BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector with length of 768. LASER embeddings: Researchers at Facebook released a language agnostic sentence embeddings representations (LASER) BIBREF30, where the model jointly learns on 93 languages. The model takes the sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31. We pass the preprocessed sentences through each of these embedding models and got two separate sentence representation. Further, we concatenate the embeddings into one single feature vector of length 1792, which is then passed to the final classification model. <<</Feature vectors:>>> <<</Feature Generation>>> <<<Our Model>>> The amount of data in each category was insufficient to train a deep learning model. Building such deep models would lead to overfitting. So, we resorted to using simpler models such as SVM and Gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of winning solutions of many competition . Hence, we used LGBM as model for the downstream tasks in this competition. <<</Our Model>>> <<</System Description>>> <<<Results>>> The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively. <<</Results>>> <<<Discussion>>> In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. 
In subtask B, the highest F1 score for each language was reached by the profane class in table TABREF20. The model got confused between the OFFN, HATE and PRFN labels, which suggests that these models are not able to capture the context of the sentence. Subtask C was again a case of an imbalanced dataset, as the targeted (TIN) label gets the highest F1 score in table TABREF21. <<</Discussion>>> <<<Conclusion>>> In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We used an LGBM model on top of the embeddings to perform the downstream tasks. Our model for the German language obtained first position. The results provided a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, System Description" ], "type": "disordered_section" }
2003.00639
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation <<<Abstract>>> Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. What is more, so far, there are no unified dialogue complexity measurements, and the dialogue complexity embodies multiple aspects of attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity in multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments. <<</Abstract>>> <<<Introduction>>> Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. 
Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to bring the neural dialogue model with easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty for the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach in measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting with five curricula accordingly. Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Enlightened by the phenomenon that children usually adjust the learning focus of multiple curricula dynamically in order to acquire a good mark, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and gains better performances on five state-of-the-art dialogue generation models regarding three publicly available conversational corpora. Code for this work is available on https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog. <<</Introduction>>> <<<Curriculum Plausibility>>> Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively. <<<Conversational Attributes>>> <<<Specificity>>> A notorious problem for neural dialogue generation model is that the model is prone to generate generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1): where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. 
$N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$. <<</Specificity>>> <<<Repetitiveness>>> Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as: where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise. <<</Repetitiveness>>> <<<Query-relatedness>>> A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively. <<</Query-relatedness>>> <<<Continuity>>> A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them. <<</Continuity>>> <<<Model Confidence>>> Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model. <<</Model Confidence>>> <<</Conversational Attributes>>> <<<Dialogue Analysis>>> <<<Distributions among Attributes>>> The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 
2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat. <<</Distributions among Attributes>>> <<<Attributes Independence>>> So far, we have analyzed five dialogue attributes. A question might be raised that how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlations with each other. This partially validates that dialogue complexity involves multiple perspectives. <<</Attributes Independence>>> <<</Dialogue Analysis>>> <<</Curriculum Plausibility>>> <<<Curriculum Dialogue Learning>>> We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model. <<<Single Curriculum Dialogue Learning>>> We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning rate of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from the samples drawing from the front part of the curriculum. As the advance of the curriculum, the difficulty gradually increases, as more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is same as the conventional training procedure without a curriculum. <<</Single Curriculum Dialogue Learning>>> <<<Adaptive Multi-curricula Learning>>> Dialogue complexity consists of multi-perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric accordingly. Scheduling multiple curricula in the same learning pace is obviously inappropriate. Enlightened by the phenomenon that children usually adjust the learning progress of multiple curricula dynamically in order to acquire a good mark, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. 
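Before turning to the reinforcement-learning machinery below, the single-curriculum procedure described above can be made concrete with a short sketch. The snippet is an editorial illustration rather than the authors' code: the function names, the batch size of 32 and the toy corpus are assumptions, and only the pacing function $f(t)\triangleq \min (1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$ with $c_0=0.01$, together with drawing each mini-batch from the top $f(t)$ portion of an attribute-sorted training set, follows the description above.

```python
import math
import random

def pacing(t, T, c0=0.01):
    # f(t) = min(1, sqrt(t * (1 - c0^2) / T + c0^2)):
    # the fraction of the easy-to-complex sorted training set available at step t.
    return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / T + c0 ** 2))

def sample_batch(sorted_data, t, T, batch_size=32):
    # Draw a mini-batch from the top f(t) portion of one curriculum,
    # where sorted_data is ordered from easy to complex by a single attribute.
    limit = max(batch_size, int(pacing(t, T) * len(sorted_data)))
    pool = sorted_data[:limit]
    return random.sample(pool, min(batch_size, len(pool)))

# Toy usage: a curriculum of 1000 query-response pairs already sorted by, e.g., specificity.
curriculum = [f"pair_{i}" for i in range(1000)]
T = 5000  # duration of curriculum learning, in training steps
for step in (1, 1000, 5000):
    print(step, round(pacing(step, T), 3), len(sample_batch(curriculum, step, T)))
```

After $T$ steps the pool covers the whole training set, matching the statement above that training then reduces to the conventional procedure without a curriculum.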
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges. More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments: where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$. The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient: where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. 
In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $. <<</Adaptive Multi-curricula Learning>>> <<</Curriculum Dialogue Learning>>> <<<Experiments>>> <<<Experiment Settings>>> We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6. <<</Experiment Settings>>> <<<Implementation and Reproducibility>>> Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same. <<</Implementation and Reproducibility>>> <<<Overall Performance and Human Evaluation>>> The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm. We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. 
We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects. <<</Overall Performance and Human Evaluation>>> <<<Model Analysis>>> <<<Single vs Multi-curricula>>> To further glean the insights regarding the effects of the five conversational attributes on the proposed learning framework, we conduct the ablation test using the SEQ2SEQ model by only exploiting a single attribute during the curriculum learning. Table TABREF26 reports the ablation test results on the DailyDialog. We observe that the curriculum learning leads to consistent performance improvements, even with one single conversational attribute. When applying the multi-curricula learning method to the model, we observe the nearly best performance. <<</Single vs Multi-curricula>>> <<<Effects of Adaptive Multi-curricula Learning>>> Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner. <<</Effects of Adaptive Multi-curricula Learning>>> <<<Learning Efficiency>>> Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases. <<</Learning Efficiency>>> <<<Multi-curricula Learning Route>>> To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the choosing curriculum distributions $\pi (a_t|s_t)$ during the model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such dynamic learning route is quite similar to the human learning behavior. 
<<</Multi-curricula Learning Route>>> <<<Examples with Different Learning Frequencies>>> As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework. <<</Examples with Different Learning Frequencies>>> <<</Model Analysis>>> <<</Experiments>>> <<<Related Work>>> Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from the real-world applications. Previous approaches enhancing neural dialogue generation models mainly focus on the learning systems by incorporating extra information to the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generations. In contrast with the previous researches, which pay most attention to the underlying dialogue models, in this work, we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on the conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed the generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold need be carefully chosen to prevent the data size decreasing too much. BIBREF8, BIBREF31 proposed to investigate instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since the dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, concentrating on different curricula at evolving learning process according to the learning status of the underlying model, enables dialogue systems gradually proceed from easy to more complex samples in training and thus efficiently improves the response quality. Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 managed curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learn more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning for neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding the dialogue complexity. <<</Related Work>>> <<<Conclusion>>> In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. 
We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Conclusion" ], "type": "disordered_section" }
1909.13668
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation <<<Abstract>>> Variational Autoencoders (VAEs) are known to suffer from learning uninformative latent representation of the input due to issues such as approximated posterior collapse, or entanglement of the latent space. We impose an explicit constraint on the Kullback-Leibler (KL) divergence term inside the VAE objective function. While the explicit constraint naturally avoids posterior collapse, we use it to further understand the significance of the KL term in controlling the information transmitted through the VAE channel. Within this framework, we explore different properties of the estimated posterior distribution, and highlight the trade-off between the amount of information encoded in a latent code during training, and the generative capacity of the model. <<</Abstract>>> <<<Introduction>>> Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation. The vanilla VAE applied to text BIBREF2 consists of an encoder (inference) and decoder (generative) networks: Given an input $x$, the encoder network parameterizes $q_\phi (z|x)$ and infers about latent continuous representations of $x$, while the decoder network parameterizes $p_\theta (x|z)$ and generates $x$ from the continuous code $z$. The two models are jointly trained by maximizing the Evidence Lower Bound (ELBO), $\mathcal {L}(\theta , \phi ; x,z)$: where the first term is the reconstruction term, and the second term is the Kullback-Leibler (KL) divergence between the posterior distribution of latent variable $z$ and its prior $p({z})$ (i.e., $\mathcal {N}(0,I)$). The KL term can be interpreted as a regularizer which prevents the inference network from copying ${x}$ into ${z}$, and for the case of a Gaussian prior and posterior has a closed-form solution. With powerful autoregressive decoders, such as LSTMs, the internal decoder's cells are likely to suffice for representing the sentence, leading to a sub-optimal solution where the decoder ignores the inferred latent code ${z}$. This allows the encoder to become independent of $x$, an issue known as posterior collapse ($q_\phi ({z}|{x})\approx p({z})$) where the inference network produces uninformative latent variables. Several solutions have been proposed to address the posterior collapse issue: (i) Modifying the architecture of the model by weakening decoders BIBREF2, BIBREF3, BIBREF4, BIBREF5, or introducing additional connections between the encoder and decoder to enforce the dependence between $x$ and $z$ BIBREF6, BIBREF7, BIBREF8; (ii) Using more flexible or multimodal priors BIBREF9, BIBREF10; (iii) Alternating the training by focusing on the inference network in the earlier stages BIBREF11, or augmenting amortized optimization of VAEs with instance-based optimization of stochastic variational inference BIBREF12, BIBREF13. 
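As an editorial aside, not part of the quoted paper: the objective sketched above is the standard ELBO, $\mathcal {L}(\theta , \phi ; x,z)=\mathbb {E}_{q_\phi ({z}|{x})}[\log p_\theta ({x}|{z})]-D_{KL}\big (q_\phi ({z}|{x}) \,||\, p({z})\big )$, whose first and second terms the text describes. The closed-form KL term for a diagonal Gaussian posterior against the standard normal prior is small enough to sketch directly; the function name and the toy means and log-variances below are illustrative assumptions, and the case $\mu =0,\ \sigma =1$ (giving KL $=0$) mirrors the posterior-collapse situation described above.

```python
import math

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ) for one example:
    # 0.5 * sum( mu^2 + sigma^2 - 1 - log sigma^2 ).
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv for m, lv in zip(mu, logvar))

# Toy check: a posterior identical to the prior transmits no information (KL = 0,
# the collapsed case), while a posterior that encodes the input has KL > 0.
collapsed = gaussian_kl(mu=[0.0, 0.0], logvar=[0.0, 0.0])
informative = gaussian_kl(mu=[1.5, -0.7], logvar=[-1.0, -0.5])
print(collapsed, informative)  # 0.0 and a positive value
```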
All of the aforementioned approaches impose one or more of the following limitations: restraining the choice of decoder, modifying the training algorithm, or requiring a substantial alternation of the objective function. As exceptions to these, $\delta $-VAE BIBREF14 and $\beta $-VAE BIBREF15 aim to avoid the posterior collapse by explicitly controlling the regularizer term in eqn. DISPLAY_FORM2. While $\delta $-VAE aims to impose a lower bound on the divergence term, $\beta $-VAE (betavae) controls the impact of regularization via an additional hyperparameter (i.e., $\beta D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )$). A special case of $\beta $-VAE is annealing BIBREF2, where $\beta $ increases from 0 to 1 during training. In this study, we propose to use an extension of $\beta $-VAE BIBREF16 which permits us to explicitly control the magnitude of the KL term while avoiding the posterior collapse issue even in the existence of a powerful decoder. We use this framework to examine different properties of the estimated posterior and the generative behaviour of VAEs and discuss them in the context of text generation via various qualitative and quantitative experiments. <<</Introduction>>> <<<Kullback-Leibler Divergence in VAE>>> We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R) which measures the compression level of $z$ as compared to the original message $x$, and distortion (D) which quantities the overall performance of the communication in encoding a message at sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\text{I}({x};{z})$ BIBREF17. <<<Reconstruction vs. KL>>> The reconstruction loss can naturally measure distortion ($D := - \big \langle \log p_\theta ({x}|{z}) \big \rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bit of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\phi (z|x)$. BIBREF18 introduced the $H-D \le \text{I}({x};{z}) \le R$ bounds, where $H$ is the empirical data entropy (a constant). These bounds on mutual information allow us to analyze the trade-off between the reconstruction and KL terms in eqn. (DISPLAY_FORM2). For instance, since $\text{I}({x};{z})$ is non-negative (using Jensen's inequality), the posterior collapse can be explained as the situation where $\text{I}({x};{z})=0$, where encoder transmits no information about $x$, causing $R=0, D=H$. Increasing $\text{I}({x};{z})$ can be encouraged by increasing both bounds: increasing the upper-bound (KL term) can be seen as the mean to control the maximum capacity of the encoder channel, while reducing the distortion (reconstruction loss) will tighten the bound by pushing the lower bound to its limits ($H-D\rightarrow H$). A similar effect on the lower-bound can be encouraged by using stronger decoders which could potentially decrease the reconstruction loss. Hence, having a framework that permits the use of strong decoders while avoiding the posterior collapse is desirable. Similarly, channel capacity can be decreased. <<</Reconstruction vs. 
KL>>> <<<Explicit KL Control via @!START@$\beta $@!END@-VAE>>> Given the above interpretation, we now turn to a slightly different formulation of ELBO based on $\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as to set explicit KL value. While $\beta $-VAE offers regularizing the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term, where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constraint optimization to impose the explicit constraint of $\text{KL}\!\!=\!\!C$, we found that the above objective function satisfies the constraint (experiment). Alternatively, it has been shown BIBREF21 the similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\max \big (C,D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )\big )$ at the risk of breaking the ELBO when $\text{KL}\!\!<\!\!C$ BIBREF22. <<</Explicit KL Control via @!START@$\beta $@!END@-VAE>>> <<</Kullback-Leibler Divergence in VAE>>> <<<Experiments>>> We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\beta =1$. We do not use larger $\beta $s because the constraint $\text{KL}=C$ is always satisfied. <<<Corpora>>> We use 5 different corpora covering different domains and size through this section: Yelp and Yahoo BIBREF4 both have ($100k$,$10k$,$10k$) sentences in (train, dev, test) sets and $20k$ words in vocabulary, Children's Book Test (CBT; BIBREF23) has ($192k$,$10k$,$12k$) sentences and $12k$ vocab, Wikipedia (WIKI; BIBREF24) has ($2m$,$270k$,$270k$) sentences and $20k$ vocab, and WebText BIBREF25 has ($1m$,$23k$,$24k$) sentences and $22k$ vocab. <<</Corpora>>> <<<Models>>> We examine three VAE architectures, covering a range of decoding strengths to examine if the objective function in eqn. DISPLAY_FORM6 is immune to posterior collapse regardless of the choice of encoder-decoder architectures: $\beta _C$-VAELSTM with (LSTM encoder, LSTM decoder), $\beta _C$-VAEGRU with (GRU encoder, GRU decoder) BIBREF26, and $\beta _C$-VAECNN with (LSTM encoder, CNN decoder) BIBREF27. The dimension of word embeddings is 256 and the dimension of the latent variable is 64. The encoder and the decoder, for both VAELSTM and VAEGRU, have hidden size of 512 dimensions. VAECNN has exactly the same encoder as VAELSTM, while the decoder follows similar architecture to GLU with a bottleneck structure (with two blocks) BIBREF27 and has 512 channels externally and 128 internally for the convolutions with the filter size of 20. All models were trained for 10 epochs and optimised the objective function (eqn. DISPLAY_FORM6) with Adam BIBREF28 with following learning rates: $10^{-5}\times 85$ for VAEGRU and VAELSTM, and $10^{-4}$ for VAECNN. 
To couple the encoder with the decoder we concatenate the latent variable to word embeddings at each time step without initialisation of hidden state. <<</Models>>> <<<Rate and Distortion>>> To analyse the dependence between the values of explicit rate ($C$) and distortion, we trained our models with different values of $C$, ranging from 10 to 100. Figure FIGREF8 reports the results for $\beta _C$-VAEGRU, $\beta _C$-VAELSTM, and $\beta _C$-VAECNN models on Yahoo and Yelp corpora. In all our experiments we found that $C\!-\!1\!\le KL\!\le \! C\!+\!1$, demonstrating that the objective function effectively imposed the desired constraint on KL term. Hence, setting any $C>0$ can in practice avoid the collapse issue. The general trend is that by increasing the value of $C$ one can get a better reconstruction (lower distortion) while the amount of gain varies depending on the VAE's architecture and corpus. Additionally, we measured rate and distortion on CBT, WIKI, and WebText corpora using $\beta _C$-VAELSTM and observed the same trend with the increase of $C$, see Table TABREF12. This observation is consistent with the bound on $\text{I}({x};{z})$ we discussed earlier (expl) such that with an increase of KL we increase an upper bound on $\text{I}({x};{z})$ which in turn allows to have smaller values of reconstruction loss. Additionally, as reported in Table TABREF12, encouraging higher rates (via larger $C$) encourages more active units (AU; BIBREF29) in the latent code $z$. As an additional verification, we also group the test sentences into buckets based on their length and report BLEU-2/4 and ROUGE-2/4 metrics to measure the quality of reconstruction step in Table TABREF12. As expected, we observe that increasing rate has a consistently positive impact on improving BLEU and ROUGE scores. <<</Rate and Distortion>>> <<<Aggregated Posterior>>> To understand how the approximated posteriors are being affected by the magnitude of the KL, we adopted an approach from BIBREF6 and looked at the divergence between the aggregated posterior, $q_\phi (z)=\sum _{x\sim q(x)} q_\phi (z|x)$, and prior $p(z$). Since during generation we generate samples from the prior, ideally we would like the aggregated posterior to be as close as possible to the prior. We obtained unbiased samples of ${z}$ first by sampling an ${x}$ from data and then ${z} \sim q_\phi ({z}|{x})$, and measured the log determinant of covariance of the samples ($\log \det (\mathrm {Cov}[q_\phi ({z})])$). As reported in Figure FIGREF8, we observed that $\log \det (\mathrm {Cov}[q_\phi ({z})])$ degrades as $C$ grows, indicating sharper approximate posteriors. We then consider the difference of $p(z)$ and $q(z)$ in their means and variances, by computing the KL divergence from the moment-matching Gaussian fit of $q(z)$ to $p(z)$: This returns smaller values for $\beta _{C=5}$-VAEGRU (Yelp: 0, Yahoo: 0), and larger values for $\beta _{C=100}$-VAEGRU (Yelp: 8, Yahoo: 5), which illustrates that the overlap between $q_\phi ({z})$ and $p(z)$ shrinks further as $C$ grows. The above observation is better pronounced in Table TABREF12, where we also report the mean ($||\mu ||^2_2$) of unbiased samples of $z$, highlighting the divergence from the mean of the prior distribution as rate increases. Therefore, for the case of lower $C$, the latent variables observed during training are closer to the generated sample from the prior which makes the decoder more suitable for generation purpose. We will examine this hypothesis in the following section. 
<<</Aggregated Posterior>>> <<<Text Generation>>> To empirically examine how channel capacity translates into generative capacity of the model, we experimented with the $\beta _C$-VAELSTM models from Table TABREF12. To generate a novel sentence, after a model was trained, a latent variable $z$ is sampled from the prior distribution and then transformed into a sequence of words by the decoder $p(x|z)$. During decoding for generation we try three decoding schemes: (i) Greedy: which selects the most probable word at each step, (ii) Top-k BIBREF30: which at each step samples from the K most probable words, and (iii) Nucleus Sampling (NS) BIBREF31: which at each step samples from a flexible subset of most probable words chosen based on their cumulative mass (set by a threshold $p$, where $p = 1$ means sampling from the full distribution). While similar to Top-k, the benefit of NS scheme is that the vocabulary size at each time step of decoding varies, a property that encourages diversity and avoids degenerate text patterns of greedy or beam search decoding BIBREF31. We experiment with NS $(p=\lbrace 0.5, 0.9\rbrace )$ and Top-k $(k=\lbrace 5, 15\rbrace )$. <<<Qualitative Analysis>>> We follow the settings of homotopy experiment BIBREF2 where first a set of latent variables was obtained by performing a linear interpolation between $z_1 \sim p(z)$ and $z_2 \sim p(z)$. Then each $z$ in the set was converted into a sequence of words by the decoder $p(x|z)$. Besides the initial motivation of BIBREF2 to examine how neighbouring latent codes look like, our additional incentive is to analyse how sensitive the decoder is to small variations in the latent variable when trained with different channel capacities, $C=\lbrace 3,15,100\rbrace $. Table TABREF17 shows the generated sentences via different decoding schemes for each channel capacity. For space reason, we only report the generated sentences for greedy, Top-$k=15$, and NS $p=0.9$. To make the generated sequences comparable across different decoding schemes or C values, we use the same samples of $z$ for decoding. <<<Sensitivity of Decoder>>> To examine the sensitivity of the decoder to variations of the latent variable, we consider the sentences generate with the greedy decoding scheme (the first column in Table TABREF17). The other two schemes are not suitable for this analysis as they include sampling procedure. This means that if we decode the same latent variable twice we will get two different sentences. We observed that with lower channel capacity ($C=3$) the decoder tends to generate identical sentences for the interpolated latent variables (we highlight these sentences in gray), exhibiting decoder's lower sensitivity to $z$'s variations. However, with the increase of channel capacity ($C=15,100$) the decoder becomes more sensitive. This observation is further supported by the increasing pattern of active units in Table TABREF12: Given that AU increases with increase of $C$ one would expect that activation pattern of a latent variable becomes more complex as it comprises more information. Therefore small change in the pattern would have a greater effect on the decoder. <<</Sensitivity of Decoder>>> <<<Coherence of Sequences>>> We observe that the model trained with large values of $C$ compromises sequences' coherence during the sampling. This is especially evident when we compare $C=3$ with $C=100$. 
Analysis of Top-15 and NS (p=0.9) generated samples reveals that the lack of coherence is not due to the greedy decoding scheme per se, and can be attributed to the model in general. To understand this behavior further, we need two additional results from Table TABREF12: LogDetCov and $||\mu ||^2_2$. One can notice that as $C$ increases LogDetCov decreases and $||\mu ||^2_2$ increases. This indicates that the aggregated posterior becomes further apart from the prior, hence the latent codes seen during the training diverge more from the codes sampled from the prior during generation. We speculate this contributes to the coherence of the generated samples, as the decoder is not equipped to decode prior samples properly at higher $C$s. <<</Coherence of Sequences>>> <<</Qualitative Analysis>>> <<<Quantitative Analysis>>> Quantitative analysis of generated text without gold reference sequences (e.g. in Machine Translation or Summarization) has been a long-standing challenge. Recently, there have been efforts towards this direction, with proposal such as self-BLEU BIBREF32, forward cross entropy BIBREF33 and Fréchet InferSent Distance BIBREF33. We opted for FCE as a complementary metric to our qualitative analysis. To calculate FCE, first a collection of synthetic sentences are generated by sampling $z\sim p(z)$ and decoding the samples into sentences. The synthetic sequences are then used to train a language model (an LSTM with the parametrisation of our decoder). The FCE score is estimated by reporting the negative log likelihood (NLL) of the trained LM on the set of human generated sentences. We generated synthetic corpora using trained models from Table TABREF12 with different C and decoding schemes and using the same exact $z$ samples for all corpora. Since the generated corpora using different C values would have different coverage of words in the test set (i.e., Out-of-Vocabulary ratios), we used a fixed vocabulary to minimize the effect of different vocabularies in our analysis. Our dictionary contains words that are common in all of the three corpora, while the rest of the words that don't exist in this dictionary are replaced with 〈unk〉 symbol. Similarly, we used this fixed dictionary to preprocess the test sets. Also, to reduce bias to a particular set of sampled $z$'s we measure the FCE score three times, each time we sampled a new training corpus from a $\beta _C$-VAELSTM decoder and trained an LM from scratch. In Table TABREF20 we report the average FCE (NLL) for the generated corpora. In the qualitative analysis we observed that the text generated by the $\beta _C$-VAELSTM trained with large values of $C=100$ exhibits lower quality (i.e., in terms of coherence). This observation is supported by the FCE score of NS(p=0.9) decoding scheme (TABREF20), since the performance drops when the LM is trained on the corpus generated with $C=100$. The generated corpora with $C=3$ and $C=15$ achieve similar FCE score. However, these patterns are reversed for Greedy decoding scheme, where the general tendency of FCE scores suggests that for larger values of $C$ the $\beta _C$-VAELSTM seems to generate text which better approximates the natural sentences in the test set. To understand this further, we report additional statistics in Table TABREF20: percentage of 〈unk〉 symbols, self-BLEU and average sentence length in the corpus. 
The average sentence length, in the generated corpora is very similar for both decoding schemes, removing the possibility that the pathological pattern on FCE scores was caused by difference in sentence length. However, we observe that for Greedy decoding more than $30\%$ of the test set consists of 〈unk〉. Intuitively, seeing more evidence of this symbol during training would improve our estimate for the 〈unk〉. As reported in the table, the $\%$unk increases on almost all corpora as $C$ grows, which is then translated into getting a better FCE score at test. Therefore, we believe that FCE at high $\%$unk is not a reliable quantitative metric to assess the quality of the generated syntactic corpora. Furthermore, for Greedy decoding, self-BLEU decreases when $C$ increases. This suggests that generated sentences for higher value of $C$ are more diverse. Hence, the LM trained on more diverse corpora can generalise better, which in turn affects the FCE. In contrast, the effect the 〈unk〉 symbol has on the corpora generated with the NS(p=0.9) decoding scheme is minimal for two reasons: First, the vocabulary size for the generated corpora, for all values of $C$ is close to the original corpus (the corpus we used to train the $\beta _C$-VAELSTM). Second, the vocabularies of the corpora generated with three values of $C$ is very close to each other. As a result, minimum replacement of the words with the 〈unk〉 symbol is required, making the experiment to be more reflective of the quality of the generated text. Similarly, self-BLEU for the NS(p=0.9) is the same for all values of $C$. This suggests that the diversity of sentences has minimal, if any, effect on the FCE. <<</Quantitative Analysis>>> <<</Text Generation>>> <<<Syntactic Test>>> In this section, we explore if any form of syntactic information is captured by the encoder and represented in the latent codes despite the lack of any explicit syntactic signal during the training of the $\beta _C$-VAELSTM. To train the models we used the same WIKI data set as in BIBREF24, but we filtered out all the sentences that are longer than 50 space-separated tokens. We use the data set of BIBREF24 which consists of pairs of grammatical and ungrammatical sentences to test various syntactic phenomenon. For example, a pair in subject-verb agreement category would be: (The author laughs, The author laugh). We encode both the grammatical and ungrammatical sentences into the latent codes $z^+$ and $z^-$, respectively. Then we condition the decoder on the $z^+$ and try to determine whether the decoder assigns higher probability to the grammatical sentence (denoted by $x^+$): $p(x^-|z^+) < p(x^+|z^+)$ (denoted by p1 in Table TABREF28). We repeat the same experiment but this time try to determine whether the decoder, when conditioned on the ungrammatical code ($z^-$), still prefers to assign higher probability to the grammatical sentence: $p(x^-|z^-) < p(x^+|z^-)$ (denoted by p2 in Table TABREF28). Table TABREF28 shows the p1 and p2 for the $\beta _C$-VAELSTM model trained with $C=\lbrace 3,100\rbrace $. Both the p1 and p2 are similar to the accuracy and correspond to how many times a grammatical sentence was assigned a higher probability. As reported for C=3, p1 and p2 match in almost all cases. This is to some degree expected since lower channel capacity encourages a more dominating decoder which in our case was trained on grammatical sentences from the WIKI. 
On the other hand, this illustrates that despite avoiding the KL-collapse issue, the dependence of the decoder on the latent code is so negligible that the decoder hardly distinguishes the grammatical and ungrammatical inputs. This changes for $C=100$, as in almost all the cases the decoder becomes strongly dependent on the latent code and can differentiate between what it has seen as input and the closely similar sentence it hasn't received as the input: The decoder assigns larger probability to the ungrammatical sentence when conditioned on the $z^-$ and, similarly, larger probability to the grammatical sentence when conditioned on the $z^+$. However, the above observations neither confirm nor reject existence of grammar signal in the latent codes. We run a second set of experiments where we aim to discard sentence specific information from the latent codes by averaging the codes inside each syntactic category. The averaged codes are denoted by $\bar{z}^+$ and $\bar{z}^-$, and the corresponding accuracies are reported by p̄1 and p̄2 in Table TABREF28. Our hypothesis is that the only invariant factor during averaging the codes inside a category is the grammatical property of its corresponding sentences. As expected, due to the weak dependence of decoder on latent code, the performance of the model under $C=3$ is almost identical (not included for space limits) when comparing p1 vs. p̄1, and p2 vs. p̄2. However, for $C=100$ the performance of the model deteriorates. While we leave further exploration of this behavior to our future work, we speculate this could be an indication of two things: the increase of complexity in the latent code which encourages a higher variance around the mean, or the absence of syntactic signal in the latent codes. <<</Syntactic Test>>> <<</Experiments>>> <<<Discussion and Conclusion>>> In this paper we analysed the interdependence of the KL term in Evidence Lower Bound (ELBO) and the properties of the approximated posterior for text generation. To perform the analysis we used an information theoretic framework based on a variant of $\beta $-VAE objective, which permits explicit control of the KL term, and treats KL as a mechanism to control the amount of information transmitted between the encoder and decoder. The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\ge 0$) on the KL term ($|D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied. We showed that the higher value of KL encourages not only divergence from the prior distribution, but also a sharper and more concentrated approximated posteriors. It encourages the decoder to be more sensitive to the variations on the latent code, and makes the model with higher KL less suitable for generation as the latent variables observed during training are farther away from the prior samples used during generation. To analyse its impact on generation we conducted a set of qualitative and quantitative experiments. In the qualitative analysis we showed that small and large values of KL term impose different properties on the generated text: the decoder trained under smaller KL term tends to generate repetitive but mainly plausible sentences, while for larger KL the generated sentences were diverse but incoherent. 
This behaviour was observed across three different decoding schemes and complemented by a quantitative analysis where we measured the performance of an LSTM LM trained on different VAE-generated synthetic corpora via different KL magnitudes, and tested on human generated sentences. Finally, in an attempt to understand the ability of the latent code in VAEs to represent some form of syntactic information, we tested the ability of the model to distinguish between grammatical and ungrammatical sentences. We verified that at lower (and still non-zero) KL the decoder tends to pay less attention to the latent code, but our findings regarding the presence of a syntactic signal in the latent code were inconclusive. We leave it as a possible avenue to explore in our future work. Also, we plan to develop practical algorithms for the automatic selection of the $C$'s value, and verify our findings under multi-modal priors and complex posteriors. <<</Discussion and Conclusion>>> <<</Title>>>
{ "references": [ "Discussion and Conclusion, Introduction" ], "type": "disordered_section" }
1909.13668
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation <<<Abstract>>> Variational Autoencoders (VAEs) are known to suffer from learning uninformative latent representation of the input due to issues such as approximated posterior collapse, or entanglement of the latent space. We impose an explicit constraint on the Kullback-Leibler (KL) divergence term inside the VAE objective function. While the explicit constraint naturally avoids posterior collapse, we use it to further understand the significance of the KL term in controlling the information transmitted through the VAE channel. Within this framework, we explore different properties of the estimated posterior distribution, and highlight the trade-off between the amount of information encoded in a latent code during training, and the generative capacity of the model. <<</Abstract>>> <<<Introduction>>> Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation. The vanilla VAE applied to text BIBREF2 consists of an encoder (inference) and decoder (generative) networks: Given an input $x$, the encoder network parameterizes $q_\phi (z|x)$ and infers about latent continuous representations of $x$, while the decoder network parameterizes $p_\theta (x|z)$ and generates $x$ from the continuous code $z$. The two models are jointly trained by maximizing the Evidence Lower Bound (ELBO), $\mathcal {L}(\theta , \phi ; x,z)$: where the first term is the reconstruction term, and the second term is the Kullback-Leibler (KL) divergence between the posterior distribution of latent variable $z$ and its prior $p({z})$ (i.e., $\mathcal {N}(0,I)$). The KL term can be interpreted as a regularizer which prevents the inference network from copying ${x}$ into ${z}$, and for the case of a Gaussian prior and posterior has a closed-form solution. With powerful autoregressive decoders, such as LSTMs, the internal decoder's cells are likely to suffice for representing the sentence, leading to a sub-optimal solution where the decoder ignores the inferred latent code ${z}$. This allows the encoder to become independent of $x$, an issue known as posterior collapse ($q_\phi ({z}|{x})\approx p({z})$) where the inference network produces uninformative latent variables. Several solutions have been proposed to address the posterior collapse issue: (i) Modifying the architecture of the model by weakening decoders BIBREF2, BIBREF3, BIBREF4, BIBREF5, or introducing additional connections between the encoder and decoder to enforce the dependence between $x$ and $z$ BIBREF6, BIBREF7, BIBREF8; (ii) Using more flexible or multimodal priors BIBREF9, BIBREF10; (iii) Alternating the training by focusing on the inference network in the earlier stages BIBREF11, or augmenting amortized optimization of VAEs with instance-based optimization of stochastic variational inference BIBREF12, BIBREF13. 
All of the aforementioned approaches impose one or more of the following limitations: restraining the choice of decoder, modifying the training algorithm, or requiring a substantial alteration of the objective function. As exceptions to these, $\delta $-VAE BIBREF14 and $\beta $-VAE BIBREF15 aim to avoid the posterior collapse by explicitly controlling the regularizer term in eqn. DISPLAY_FORM2. While $\delta $-VAE aims to impose a lower bound on the divergence term, $\beta $-VAE controls the impact of regularization via an additional hyperparameter (i.e., $\beta D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )$). A special case of $\beta $-VAE is annealing BIBREF2, where $\beta $ increases from 0 to 1 during training. In this study, we propose to use an extension of $\beta $-VAE BIBREF16 which permits us to explicitly control the magnitude of the KL term while avoiding the posterior collapse issue even in the presence of a powerful decoder. We use this framework to examine different properties of the estimated posterior and the generative behaviour of VAEs and discuss them in the context of text generation via various qualitative and quantitative experiments. <<</Introduction>>> <<<Kullback-Leibler Divergence in VAE>>> We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R), which measures the compression level of $z$ as compared to the original message $x$, and distortion (D), which quantifies the overall performance of the communication in encoding a message at the sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\text{I}({x};{z})$ BIBREF17. <<<Reconstruction vs. KL>>> The reconstruction loss can naturally measure distortion ($D := - \big \langle \log p_\theta ({x}|{z}) \big \rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bits of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\phi (z|x)$. BIBREF18 introduced the $H-D \le \text{I}({x};{z}) \le R$ bounds, where $H$ is the empirical data entropy (a constant). These bounds on mutual information allow us to analyze the trade-off between the reconstruction and KL terms in eqn. (DISPLAY_FORM2). For instance, since $\text{I}({x};{z})$ is non-negative (using Jensen's inequality), the posterior collapse can be explained as the situation where $\text{I}({x};{z})=0$, where the encoder transmits no information about $x$, causing $R=0, D=H$. Increasing $\text{I}({x};{z})$ can be encouraged by increasing both bounds: increasing the upper bound (KL term) can be seen as the means to control the maximum capacity of the encoder channel, while reducing the distortion (reconstruction loss) will tighten the bound by pushing the lower bound to its limits ($H-D\rightarrow H$). A similar effect on the lower bound can be encouraged by using stronger decoders which could potentially decrease the reconstruction loss. Hence, having a framework that permits the use of strong decoders while avoiding the posterior collapse is desirable. Similarly, channel capacity can be decreased. <<</Reconstruction vs. 
KL>>> <<<Explicit KL Control via @!START@$\beta $@!END@-VAE>>> Given the above interpretation, we now turn to a slightly different formulation of ELBO based on $\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as to set explicit KL value. While $\beta $-VAE offers regularizing the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term, where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constraint optimization to impose the explicit constraint of $\text{KL}\!\!=\!\!C$, we found that the above objective function satisfies the constraint (experiment). Alternatively, it has been shown BIBREF21 the similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\max \big (C,D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )\big )$ at the risk of breaking the ELBO when $\text{KL}\!\!<\!\!C$ BIBREF22. <<</Explicit KL Control via @!START@$\beta $@!END@-VAE>>> <<</Kullback-Leibler Divergence in VAE>>> <<<Experiments>>> We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\beta =1$. We do not use larger $\beta $s because the constraint $\text{KL}=C$ is always satisfied. <<<Corpora>>> We use 5 different corpora covering different domains and size through this section: Yelp and Yahoo BIBREF4 both have ($100k$,$10k$,$10k$) sentences in (train, dev, test) sets and $20k$ words in vocabulary, Children's Book Test (CBT; BIBREF23) has ($192k$,$10k$,$12k$) sentences and $12k$ vocab, Wikipedia (WIKI; BIBREF24) has ($2m$,$270k$,$270k$) sentences and $20k$ vocab, and WebText BIBREF25 has ($1m$,$23k$,$24k$) sentences and $22k$ vocab. <<</Corpora>>> <<<Models>>> We examine three VAE architectures, covering a range of decoding strengths to examine if the objective function in eqn. DISPLAY_FORM6 is immune to posterior collapse regardless of the choice of encoder-decoder architectures: $\beta _C$-VAELSTM with (LSTM encoder, LSTM decoder), $\beta _C$-VAEGRU with (GRU encoder, GRU decoder) BIBREF26, and $\beta _C$-VAECNN with (LSTM encoder, CNN decoder) BIBREF27. The dimension of word embeddings is 256 and the dimension of the latent variable is 64. The encoder and the decoder, for both VAELSTM and VAEGRU, have hidden size of 512 dimensions. VAECNN has exactly the same encoder as VAELSTM, while the decoder follows similar architecture to GLU with a bottleneck structure (with two blocks) BIBREF27 and has 512 channels externally and 128 internally for the convolutions with the filter size of 20. All models were trained for 10 epochs and optimised the objective function (eqn. DISPLAY_FORM6) with Adam BIBREF28 with following learning rates: $10^{-5}\times 85$ for VAEGRU and VAELSTM, and $10^{-4}$ for VAECNN. 
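For reference, given the description above of a $\beta $-VAE objective with an explicit target $C$ on the KL term, the objective function referred to as eqn. DISPLAY_FORM6 presumably takes the form $\mathcal {L}(\theta , \phi ; x,z;C) = \mathbb {E}_{q_\phi ({z}|{x})}\big [\log p_\theta ({x}|{z})\big ] - \beta \, \big |D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big ) - C\big |$.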
To couple the encoder with the decoder we concatenate the latent variable to word embeddings at each time step without initialisation of hidden state. <<</Models>>> <<<Rate and Distortion>>> To analyse the dependence between the values of explicit rate ($C$) and distortion, we trained our models with different values of $C$, ranging from 10 to 100. Figure FIGREF8 reports the results for $\beta _C$-VAEGRU, $\beta _C$-VAELSTM, and $\beta _C$-VAECNN models on Yahoo and Yelp corpora. In all our experiments we found that $C\!-\!1\!\le KL\!\le \! C\!+\!1$, demonstrating that the objective function effectively imposed the desired constraint on KL term. Hence, setting any $C>0$ can in practice avoid the collapse issue. The general trend is that by increasing the value of $C$ one can get a better reconstruction (lower distortion) while the amount of gain varies depending on the VAE's architecture and corpus. Additionally, we measured rate and distortion on CBT, WIKI, and WebText corpora using $\beta _C$-VAELSTM and observed the same trend with the increase of $C$, see Table TABREF12. This observation is consistent with the bound on $\text{I}({x};{z})$ we discussed earlier (expl) such that with an increase of KL we increase an upper bound on $\text{I}({x};{z})$ which in turn allows to have smaller values of reconstruction loss. Additionally, as reported in Table TABREF12, encouraging higher rates (via larger $C$) encourages more active units (AU; BIBREF29) in the latent code $z$. As an additional verification, we also group the test sentences into buckets based on their length and report BLEU-2/4 and ROUGE-2/4 metrics to measure the quality of reconstruction step in Table TABREF12. As expected, we observe that increasing rate has a consistently positive impact on improving BLEU and ROUGE scores. <<</Rate and Distortion>>> <<<Aggregated Posterior>>> To understand how the approximated posteriors are being affected by the magnitude of the KL, we adopted an approach from BIBREF6 and looked at the divergence between the aggregated posterior, $q_\phi (z)=\sum _{x\sim q(x)} q_\phi (z|x)$, and prior $p(z$). Since during generation we generate samples from the prior, ideally we would like the aggregated posterior to be as close as possible to the prior. We obtained unbiased samples of ${z}$ first by sampling an ${x}$ from data and then ${z} \sim q_\phi ({z}|{x})$, and measured the log determinant of covariance of the samples ($\log \det (\mathrm {Cov}[q_\phi ({z})])$). As reported in Figure FIGREF8, we observed that $\log \det (\mathrm {Cov}[q_\phi ({z})])$ degrades as $C$ grows, indicating sharper approximate posteriors. We then consider the difference of $p(z)$ and $q(z)$ in their means and variances, by computing the KL divergence from the moment-matching Gaussian fit of $q(z)$ to $p(z)$: This returns smaller values for $\beta _{C=5}$-VAEGRU (Yelp: 0, Yahoo: 0), and larger values for $\beta _{C=100}$-VAEGRU (Yelp: 8, Yahoo: 5), which illustrates that the overlap between $q_\phi ({z})$ and $p(z)$ shrinks further as $C$ grows. The above observation is better pronounced in Table TABREF12, where we also report the mean ($||\mu ||^2_2$) of unbiased samples of $z$, highlighting the divergence from the mean of the prior distribution as rate increases. Therefore, for the case of lower $C$, the latent variables observed during training are closer to the generated sample from the prior which makes the decoder more suitable for generation purpose. We will examine this hypothesis in the following section. 
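Both aggregated-posterior diagnostics used above, $\log \det (\mathrm {Cov}[q_\phi ({z})])$ and the KL divergence from a moment-matched Gaussian fit of $q({z})$ to $p({z})=\mathcal {N}(0,I)$, are straightforward to compute from unbiased samples of ${z}$; a minimal sketch (an illustration, not the authors' code, assuming `z` is an array of sampled latent codes of shape (n_samples, d)) is:

```python
import numpy as np

def aggregated_posterior_diagnostics(z):
    """Diagnostics of the aggregated posterior q(z) from samples z ~ q_phi(z|x), x ~ data."""
    mu = z.mean(axis=0)                      # mean of the aggregated posterior
    cov = np.cov(z, rowvar=False)            # (d, d) sample covariance
    _, logdet = np.linalg.slogdet(cov)       # log det(Cov[q(z)]), as reported in Figure FIGREF8
    d = z.shape[1]
    # KL( N(mu, cov) || N(0, I) ) for the moment-matched Gaussian fit of q(z)
    kl_to_prior = 0.5 * (np.trace(cov) + mu @ mu - d - logdet)
    return logdet, kl_to_prior, float(mu @ mu)   # ||mu||_2^2 is also reported in Table TABREF12
```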
<<</Aggregated Posterior>>> <<<Text Generation>>> To empirically examine how channel capacity translates into generative capacity of the model, we experimented with the $\beta _C$-VAELSTM models from Table TABREF12. To generate a novel sentence, after a model was trained, a latent variable $z$ is sampled from the prior distribution and then transformed into a sequence of words by the decoder $p(x|z)$. During decoding for generation we try three decoding schemes: (i) Greedy: which selects the most probable word at each step, (ii) Top-k BIBREF30: which at each step samples from the K most probable words, and (iii) Nucleus Sampling (NS) BIBREF31: which at each step samples from a flexible subset of most probable words chosen based on their cumulative mass (set by a threshold $p$, where $p = 1$ means sampling from the full distribution). While similar to Top-k, the benefit of NS scheme is that the vocabulary size at each time step of decoding varies, a property that encourages diversity and avoids degenerate text patterns of greedy or beam search decoding BIBREF31. We experiment with NS $(p=\lbrace 0.5, 0.9\rbrace )$ and Top-k $(k=\lbrace 5, 15\rbrace )$. <<<Qualitative Analysis>>> We follow the settings of homotopy experiment BIBREF2 where first a set of latent variables was obtained by performing a linear interpolation between $z_1 \sim p(z)$ and $z_2 \sim p(z)$. Then each $z$ in the set was converted into a sequence of words by the decoder $p(x|z)$. Besides the initial motivation of BIBREF2 to examine how neighbouring latent codes look like, our additional incentive is to analyse how sensitive the decoder is to small variations in the latent variable when trained with different channel capacities, $C=\lbrace 3,15,100\rbrace $. Table TABREF17 shows the generated sentences via different decoding schemes for each channel capacity. For space reason, we only report the generated sentences for greedy, Top-$k=15$, and NS $p=0.9$. To make the generated sequences comparable across different decoding schemes or C values, we use the same samples of $z$ for decoding. <<<Sensitivity of Decoder>>> To examine the sensitivity of the decoder to variations of the latent variable, we consider the sentences generate with the greedy decoding scheme (the first column in Table TABREF17). The other two schemes are not suitable for this analysis as they include sampling procedure. This means that if we decode the same latent variable twice we will get two different sentences. We observed that with lower channel capacity ($C=3$) the decoder tends to generate identical sentences for the interpolated latent variables (we highlight these sentences in gray), exhibiting decoder's lower sensitivity to $z$'s variations. However, with the increase of channel capacity ($C=15,100$) the decoder becomes more sensitive. This observation is further supported by the increasing pattern of active units in Table TABREF12: Given that AU increases with increase of $C$ one would expect that activation pattern of a latent variable becomes more complex as it comprises more information. Therefore small change in the pattern would have a greater effect on the decoder. <<</Sensitivity of Decoder>>> <<<Coherence of Sequences>>> We observe that the model trained with large values of $C$ compromises sequences' coherence during the sampling. This is especially evident when we compare $C=3$ with $C=100$. 
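For concreteness, a single step of the NS (top-p) scheme compared in this section can be sketched as follows (an illustration only, not the authors' implementation; `probs` is assumed to be the decoder's next-token distribution):

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=np.random.default_rng()):
    """Sample a token id from the smallest set of tokens whose cumulative mass reaches p."""
    order = np.argsort(probs)[::-1]                           # token ids, most probable first
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1  # size of the nucleus
    nucleus = order[:cutoff]
    renorm = probs[nucleus] / probs[nucleus].sum()            # renormalise inside the nucleus
    return rng.choice(nucleus, p=renorm)
```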
Analysis of Top-15 and NS (p=0.9) generated samples reveals that the lack of coherence is not due to the greedy decoding scheme per se, and can be attributed to the model in general. To understand this behavior further, we need two additional results from Table TABREF12: LogDetCov and $||\mu ||^2_2$. One can notice that as $C$ increases, LogDetCov decreases and $||\mu ||^2_2$ increases. This indicates that the aggregated posterior moves further away from the prior, hence the latent codes seen during training diverge more from the codes sampled from the prior during generation. We speculate this contributes to the lack of coherence of the generated samples, as the decoder is not equipped to decode prior samples properly at higher $C$s. <<</Coherence of Sequences>>> <<</Qualitative Analysis>>> <<<Quantitative Analysis>>> Quantitative analysis of generated text without gold reference sequences (e.g. in Machine Translation or Summarization) has been a long-standing challenge. Recently, there have been efforts towards this direction, with proposals such as self-BLEU BIBREF32, forward cross entropy BIBREF33 and Fréchet InferSent Distance BIBREF33. We opted for FCE as a complementary metric to our qualitative analysis. To calculate FCE, first a collection of synthetic sentences is generated by sampling $z\sim p(z)$ and decoding the samples into sentences. The synthetic sequences are then used to train a language model (an LSTM with the parametrisation of our decoder). The FCE score is estimated by reporting the negative log likelihood (NLL) of the trained LM on the set of human generated sentences. We generated synthetic corpora using trained models from Table TABREF12 with different values of C and decoding schemes, using the same exact $z$ samples for all corpora. Since the generated corpora using different C values would have different coverage of words in the test set (i.e., Out-of-Vocabulary ratios), we used a fixed vocabulary to minimize the effect of different vocabularies in our analysis. Our dictionary contains words that are common in all of the three corpora, while the rest of the words that don't exist in this dictionary are replaced with the 〈unk〉 symbol. Similarly, we used this fixed dictionary to preprocess the test sets. Also, to reduce bias to a particular set of sampled $z$'s, we measure the FCE score three times; each time we sampled a new training corpus from a $\beta _C$-VAELSTM decoder and trained an LM from scratch. In Table TABREF20 we report the average FCE (NLL) for the generated corpora. In the qualitative analysis we observed that the text generated by the $\beta _C$-VAELSTM trained with the large value $C=100$ exhibits lower quality (i.e., in terms of coherence). This observation is supported by the FCE score of the NS(p=0.9) decoding scheme (TABREF20), since the performance drops when the LM is trained on the corpus generated with $C=100$. The generated corpora with $C=3$ and $C=15$ achieve similar FCE scores. However, these patterns are reversed for the Greedy decoding scheme, where the general tendency of FCE scores suggests that for larger values of $C$ the $\beta _C$-VAELSTM seems to generate text which better approximates the natural sentences in the test set. To understand this further, we report additional statistics in Table TABREF20: percentage of 〈unk〉 symbols, self-BLEU and average sentence length in the corpus. 
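The self-BLEU statistic referred to here scores each generated sentence with BLEU against all other generated sentences as references and averages the result; a minimal sketch (an illustration, with n-gram order and smoothing left as assumptions rather than the paper's exact settings) is:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(corpus, max_n=4):
    """corpus: list of tokenised sentences (lists of strings)."""
    weights = tuple(1.0 / max_n for _ in range(max_n))
    smooth = SmoothingFunction().method1
    scores = []
    for i, hypothesis in enumerate(corpus):
        references = corpus[:i] + corpus[i + 1:]   # every other generated sentence is a reference
        scores.append(sentence_bleu(references, hypothesis,
                                    weights=weights, smoothing_function=smooth))
    return sum(scores) / len(scores)
```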
The average sentence length in the generated corpora is very similar for both decoding schemes, removing the possibility that the pathological pattern on FCE scores was caused by differences in sentence length. However, we observe that for Greedy decoding more than $30\%$ of the test set consists of 〈unk〉. Intuitively, seeing more evidence of this symbol during training would improve our estimate for the 〈unk〉. As reported in the table, the $\%$unk increases on almost all corpora as $C$ grows, which is then translated into getting a better FCE score at test time. Therefore, we believe that FCE at high $\%$unk is not a reliable quantitative metric to assess the quality of the generated synthetic corpora. Furthermore, for Greedy decoding, self-BLEU decreases when $C$ increases. This suggests that generated sentences for higher values of $C$ are more diverse. Hence, the LM trained on more diverse corpora can generalise better, which in turn affects the FCE. In contrast, the effect the 〈unk〉 symbol has on the corpora generated with the NS(p=0.9) decoding scheme is minimal for two reasons: First, the vocabulary size for the generated corpora, for all values of $C$, is close to that of the original corpus (the corpus we used to train the $\beta _C$-VAELSTM). Second, the vocabularies of the corpora generated with the three values of $C$ are very close to each other. As a result, minimal replacement of words with the 〈unk〉 symbol is required, making the experiment more reflective of the quality of the generated text. Similarly, self-BLEU for the NS(p=0.9) is the same for all values of $C$. This suggests that the diversity of sentences has minimal, if any, effect on the FCE. <<</Quantitative Analysis>>> <<</Text Generation>>> <<<Syntactic Test>>> In this section, we explore if any form of syntactic information is captured by the encoder and represented in the latent codes despite the lack of any explicit syntactic signal during the training of the $\beta _C$-VAELSTM. To train the models we used the same WIKI data set as in BIBREF24, but we filtered out all the sentences that are longer than 50 space-separated tokens. We use the data set of BIBREF24 which consists of pairs of grammatical and ungrammatical sentences to test various syntactic phenomena. For example, a pair in the subject-verb agreement category would be: (The author laughs, The author laugh). We encode both the grammatical and ungrammatical sentences into the latent codes $z^+$ and $z^-$, respectively. Then we condition the decoder on the $z^+$ and try to determine whether the decoder assigns higher probability to the grammatical sentence (denoted by $x^+$): $p(x^-|z^+) < p(x^+|z^+)$ (denoted by p1 in Table TABREF28). We repeat the same experiment but this time try to determine whether the decoder, when conditioned on the ungrammatical code ($z^-$), still prefers to assign higher probability to the grammatical sentence: $p(x^-|z^-) < p(x^+|z^-)$ (denoted by p2 in Table TABREF28). Table TABREF28 shows the p1 and p2 for the $\beta _C$-VAELSTM model trained with $C=\lbrace 3,100\rbrace $. Both p1 and p2 are similar to accuracy and correspond to how often a grammatical sentence was assigned a higher probability. As reported, for C=3, p1 and p2 match in almost all cases. This is to some degree expected since lower channel capacity encourages a more dominant decoder, which in our case was trained on grammatical sentences from the WIKI. 
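A minimal sketch of how the p1 and p2 scores defined above could be computed is given below (an illustration, assuming a hypothetical helper `log_p(x, z)` that returns the decoder log-probability $\log p(x|z)$, and `pairs` holding (grammatical, ungrammatical, $z^+$, $z^-$) tuples):

```python
def syntactic_scores(pairs, log_p):
    p1 = p2 = 0
    for x_plus, x_minus, z_plus, z_minus in pairs:
        p1 += log_p(x_minus, z_plus) < log_p(x_plus, z_plus)    # conditioned on the grammatical code
        p2 += log_p(x_minus, z_minus) < log_p(x_plus, z_minus)  # conditioned on the ungrammatical code
    n = len(pairs)
    return p1 / n, p2 / n
```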
On the other hand, this illustrates that despite avoiding the KL-collapse issue, the dependence of the decoder on the latent code is so negligible that the decoder hardly distinguishes the grammatical and ungrammatical inputs. This changes for $C=100$, as in almost all the cases the decoder becomes strongly dependent on the latent code and can differentiate between what it has seen as input and the closely similar sentence it hasn't received as the input: The decoder assigns larger probability to the ungrammatical sentence when conditioned on the $z^-$ and, similarly, larger probability to the grammatical sentence when conditioned on the $z^+$. However, the above observations neither confirm nor reject existence of grammar signal in the latent codes. We run a second set of experiments where we aim to discard sentence specific information from the latent codes by averaging the codes inside each syntactic category. The averaged codes are denoted by $\bar{z}^+$ and $\bar{z}^-$, and the corresponding accuracies are reported by p̄1 and p̄2 in Table TABREF28. Our hypothesis is that the only invariant factor during averaging the codes inside a category is the grammatical property of its corresponding sentences. As expected, due to the weak dependence of decoder on latent code, the performance of the model under $C=3$ is almost identical (not included for space limits) when comparing p1 vs. p̄1, and p2 vs. p̄2. However, for $C=100$ the performance of the model deteriorates. While we leave further exploration of this behavior to our future work, we speculate this could be an indication of two things: the increase of complexity in the latent code which encourages a higher variance around the mean, or the absence of syntactic signal in the latent codes. <<</Syntactic Test>>> <<</Experiments>>> <<<Discussion and Conclusion>>> In this paper we analysed the interdependence of the KL term in Evidence Lower Bound (ELBO) and the properties of the approximated posterior for text generation. To perform the analysis we used an information theoretic framework based on a variant of $\beta $-VAE objective, which permits explicit control of the KL term, and treats KL as a mechanism to control the amount of information transmitted between the encoder and decoder. The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\ge 0$) on the KL term ($|D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied. We showed that the higher value of KL encourages not only divergence from the prior distribution, but also a sharper and more concentrated approximated posteriors. It encourages the decoder to be more sensitive to the variations on the latent code, and makes the model with higher KL less suitable for generation as the latent variables observed during training are farther away from the prior samples used during generation. To analyse its impact on generation we conducted a set of qualitative and quantitative experiments. In the qualitative analysis we showed that small and large values of KL term impose different properties on the generated text: the decoder trained under smaller KL term tends to generate repetitive but mainly plausible sentences, while for larger KL the generated sentences were diverse but incoherent. 
This behaviour was observed across three different decoding schemes and complemented by a quantitative analysis where we measured the performance of an LSTM LM trained on different VAE-generated synthetic corpora via different KL magnitudes, and tested on human generated sentences. Finally, in an attempt to understand the ability of the latent code in VAEs to represent some form of syntactic information, we tested the ability of the model to distinguish between grammatical and ungrammatical sentences. We verified that at lower (and still non-zero) KL the decoder tends to pay less attention to the latent code, but our findings regarding the presence of a syntactic signal in the latent code were inconclusive. We leave it as a possible avenue to explore in our future work. Also, we plan to develop practical algorithms for the automatic selection of the $C$'s value, and verify our findings under multi-modal priors and complex posteriors. <<</Discussion and Conclusion>>> <<</Title>>>
{ "references": [ "Experiments, Abstract" ], "type": "disordered_section" }
2003.01472
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Seshat: A tool for managing and verifying annotation campaigns of audio data <<<Abstract>>> We introduce Seshat, a new, simple and open-source software to efficiently manage annotations of speech corpora. The Seshat software allows users to easily customise and manage annotations of large audio corpora while ensuring compliance with the formatting and naming conventions of the annotated output files. In addition, it includes procedures for checking the content of annotations following specific rules that are implemented in personalised parsers. Finally, we propose a double-annotation mode, for which Seshat automatically computes an associated inter-annotator agreement with the $\gamma$ measure, taking into account both categorisation and segmentation discrepancies. <<</Abstract>>> <<<Introduction>>> Large corpora of speech, obtained in the laboratory and in naturalistic conditions, are becoming easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2. In this paper, we introduce Seshat, a system for the automated management of annotation campaigns for audio/speech data which addresses these challenges. It is built on two components that communicate via a Restful API: a back-end (server) written in Flask and a front-end (client) in Angular Typescript. Seshat is easy to install for non-developers and easy to use for researchers and annotators while having some extension capabilities for developers. In Section SECREF2, we describe the related work on annotation tools, which do not provide solutions to all the aforementioned challenges during corpus creation. In Section SECREF3, we give an overview of the different functionalities of the software. Then, we explain, in Section SECREF4, the architecture of the software, as well as the several UX/UI design and engineering choices that have been made to facilitate the usage of the platform. We describe how to use Seshat in Section SECREF5 and Section SECREF6 presents two specific use-cases. Finally, we conclude and describe future plans for Seshat in Section SECREF7. <<</Introduction>>> <<<Related Work>>> Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support). 
Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task becomes quickly untraceable as the number of files and annotators grow. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8. Web-based annotations systems. There are several web-based annotation systems for the annotation of audio data. Among them we find light-weight systems, like the VIA software BIBREF9 or Praat on the web BIBREF10 that allow to build simple layers of annotations. However, they do not provide a proper management system for a pool of annotators nor do they integrate annotation checking. On the other side of the spectrum, there are more sophisticated systems with various capabilities. Camomille BIBREF11 and the EMU-SDMS system (that can also be used offline) BIBREF12 allow to work with speech data and to distribute the tasks to several annotators. But these systems require expertise in web hosting and technologies to deploy and modify them. Finally, WebAnno BIBREF13 and GATE Teamware BIBREF14 are the tools that most closely match our main contributions regarding quality control (conformity and consistency checking), annotators' management and flexibility. WebAnno includes consistency checking with the integration of different metrics BIBREF15. However, these tools have only been built for text data. The format and all the custom layers have been designed for Natural Language Processing tasks. Porting WebAnno to support speech data seemed a major engineering challenge. That is why it appeared necessary to develop a new and user-friendly tool addressed to the speech community. <<</Related Work>>> <<<Overview of Seshat>>> Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the several terms used in Seshat's workflow: [font=, leftmargin=1cm, style=nextline] A set of audio/speech files that a Campaign Manager wants to annotate. It is indicated either by a folder containing sound files, or by a CSV summarizing a set of files. We support the same formats as Praat so far: WAV, Flac and MP3. An object that enables the Campaign Manager to assign Annotation Tasks to the Annotators. It references a Corpus, and allows the Manager to track the annotation's tasks progress and completion in real time. At its creation, a Textgrid Checking Scheme can also be defined for that campaign. It is contained in an Annotation Campaign, it references an audio file from the campaign's designated Audio Corpus, and assigned to Annotators. It can either be a Single Annotator Task (assigned to one Annotator) or a Double Annotator Task (assigned to two annotators, who will annotatote the assigned task in parallel). A set of rules defining the TextGrid files' structure and content of the annotations. It is set at the beginning of the Annotation Campaign's creation, and is used to enforce that all TextGrids from the campaign contain the same amount of Tiers, with the same names. It can also enforce, for certain chosen tiers, a set of valid annotations. Users with the rights to create Annotation Campaigns and Annotators user accounts, and assign Annotation Tasks to Annotators. 
Users who are assigned a set of Annotation Tasks. Their job is to complete the annotation of the audio files with the Praat software. If the TextGrid file they submit does not comply with their Annotation Task's TextGrid Checking Scheme, Seshat pinpoints their annotation errors with detailed messages. The annotator can re-submit the concerned file to the platform based on this feedback. Once they are connected to their instance of Seshat, campaign managers can access ongoing annotation campaigns or create new ones. Campaign managers are able to add annotators, assign annotation tasks and track progress. Annotators see a list of assigned tasks. The first step for them is to download the sound file with its corresponding auto-generated template TextGrid. In the current implementation, the annotation work has to be done locally with Praat. An upcoming version will make use of web tools like Praat on the web BIBREF10. Once the task is completed, the TextGrid file is to be uploaded to Seshat via the web interface. We used the TextGrid format because of the wide acceptance of the Praat software in the speech science community (e.g., language acquisition research, clinical linguistics, phonetics and phonology). The Textgrid Checking Scheme, which encompasses rules on tier naming, file structure, and the content of the annotations, is associated with a specific campaign and defined at the creation of the campaign. The Seshat back-end will automatically check that the submitted TextGrid file conforms to the Annotation Campaign's Textgrid Checking Scheme. Seshat allows the campaign manager to create two types of tasks: single annotator, and double annotator. Regarding the first task, one audio file is attributed to one annotator. Once the annotation is completed, Seshat automatically checks the conformity of the annotation, and only declares a task completed if the conformity check is passed. Regarding the second task, one audio file is attributed to two annotators. The two annotators annotate the same file independently, then the two versions are merged and the annotators are guided through a compare and review process to agree on one final version. We summarise in Figure FIGREF7 the different steps for the double-annotator task. At each step during merging, the two annotators are provided feedback to focus on where the disagreements are. This process also results in the computation of an Inter-annotator agreement for each file. The double annotator task can be used to train new annotators alongside experts. Annotating speech data is a joint task of segmentation and categorisation of audio events. That is why we adopted the $\gamma $ measure BIBREF8 to evaluate the inter- or intra- annotator agreement in each individual tier. Campaign managers can customise the distance used by $\gamma $ by inserting a custom distance along with their own parser (see the short snippet of code for a parser of French phonetics with the SAMPA alphabet in Algorithm ). <<</Overview of Seshat>>> <<<Development>>> <<<Engineering choices>>> Our utmost priority when building Seshat was to make it as easy as possible for others to deploy, use, administer and eventually contribute to. To do so, we chose the most common frameworks that are free and open-source, all of which are detailed in the following sections. 
Additionally, to match the current trend in web development, we decided to use the so-called "web-app" architecture for Seshat, i.e., we separated the application into two distinct entities: a front-end, running on the browser, and a back-end, serving data to the front-end and interacting with the database. <<<Back-end Choices>>> The back-end system runs on a server. It holds and updates the campaign databases and runs the annotation checking and inter-rater agreement evaluation services. We chose Python, given its widespread use in the scientific community, with a wide array of speech and linguistic packages. Moreover, its usage on the back-end side will allow the future integration of powerful speech processing tools like Pyannote BIBREF16 to semi-automatize annotations. We thus went for Python3.6 for Seshat's server back-end. We used the Flask-Smorest extension (which is based on Flask) to clearly and thoroughly document our API, which can be exported to the popular OpenAPI 3.0.2 RESTful API description format. The files and server data are stored on a MongoDB database, chosen for its flexible document model and general ease of use. We used the Object-Relational Mapping (ORM) MongoEngine to define our database schemas and interact with that database. MongoDB's GridFS system also allowed us to store annotation files (which are usually very light-weight) directly in the database, instead of going through the file system. <<</Back-end Choices>>> <<<Front-end Choices>>> The front-end handles all of the interactions between the users (campaign manager or annotator) and the database. It is implemented as an App within their browser. We decided to base Seshat's front-end on the Angular Typescript framework. Despite its steep learning curve, it enforces strict design patterns that guarantee that others can make additions to our code without jeopardising the stability of the App. Angular Typescript has wide community support in the web development industry and is backed by Google and Microsoft. Moreover, the fact that it is based on TypeScript alleviates the numerous shortcomings of JavaScript, ensuring our implementation's readability and stability. <<</Front-end Choices>>> <<</Engineering choices>>> <<<UX/UI Choices>>> The interface and the features we selected for our implementation are the result of a year-long iterative process involving a team of annotators, two campaign managers and software engineers. We followed some guiding principles from the recent Material design language. Our goal while designing our interface (with the help of a professional designer) was to make it fully usable by non-technical people. We also put some extra care into the annotators' interface to give them a clear sense of what is to be done, how they should follow the annotation protocol, and how to correct potential errors in their annotations (see Figure FIGREF21). The goal was to reduce the number of actions to perform for annotators and enable them to focus only on the annotation content. <<</UX/UI Choices>>> <<</Development>>> <<<Using Seshat>>> <<<Installation and Setup>>> Setting up a modern fully-fledged web service is an arduous task, usually requiring a seasoned system administrator and sometimes coming with very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). 
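Looking back at the back-end choices described above, the MongoEngine ORM approach amounts to declaring campaign and task documents as Python classes, with annotation files stored through GridFS-backed file fields; the sketch below is a generic illustration with hypothetical field names, not Seshat's actual schema:

```python
from mongoengine import (Document, StringField, ListField,
                         ReferenceField, FileField, BooleanField)

class AnnotationCampaign(Document):
    name = StringField(required=True)
    corpus = StringField()            # folder or CSV describing the audio corpus
    checking_scheme = StringField()   # serialised TextGrid Checking Scheme

class AnnotationTask(Document):
    campaign = ReferenceField(AnnotationCampaign, required=True)
    audio_file = StringField(required=True)
    annotators = ListField(StringField())   # one (single) or two (double) annotator usernames
    is_double = BooleanField(default=False)
    textgrid = FileField()                  # submitted TextGrid, stored via GridFS
```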
For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation. Importing an audio corpus that you are willing to annotate is as easy as dropping files into a default `corpora/` folder. It is possible to either drop a folder containing audio files (with no constraints on the folder's structure), or a CSV file listing audio filenames along with their durations (in case the files are sensitive and you're not willing to risk them being hosted on the server). It is then possible to review the automatically imported files via the web interface. <<</Installation and Setup>>> <<<Launching and monitoring an annotation campaign>>> The campaign manager can easily define and monitor annotation campaigns. As shown in Figure FIGREF33, the online form enables them to choose corpora and to pre-define and pre-configure the annotation scheme (tiers and parsers). There are 2 types of tiers already implemented by default: one with no check at all, and one with pre-defined categories. For the latter, these categories are pre-defined when the campaign is created. Only campaign managers can access and build new campaigns. If campaign managers have several campaigns, they can easily switch between them via the menu bar or get a full overview with the dashboard (see Figure FIGREF26). The campaign managers can visualise the progress of the assigned tasks at the campaign level or, more precisely, at the task level. They can retrieve all the intermediate files that have been created for each task. For instance, the campaign manager can examine qualitatively and quantitatively what the annotation differences are before the merge phase of the double-annotator task. <<</Launching and monitoring an annotation campaign>>> <<<Scripting API>>> For those willing to interact with Seshat using code, it is possible to use either its RESTful API or its command-line interface (CLI). The API endpoints that can be called are all listed in a simple interface, and calls can be made from any programming language able to make HTTP requests. The CLI can be used from the terminal, and therefore can be scripted with Bash. A typical usage of these features would be to assign annotation tasks from a large speech corpus (spoken by several speakers) to a large pool of annotators, all the while making sure each annotator has a similar number of tasks, with each speaker being evenly distributed among annotators as well. This would be tedious to do manually via the user interface, but easy to program in any scripting language. <<</Scripting API>>> <<<Annotation Parser Customisation>>> We aimed at a reasonable trade-off between simplicity and flexibility for the TextGrid annotation checking component. However, we understand (from our own experience in particular) that sometimes annotations can follow a very specific and complex standard (for instance, parsing SAMPA phoneme strings). To allow users to define their own annotation standards, we added the possibility to define an annotation parser, via a simple package-based extension system (taking inspiration from pyannote's extension system). Anyone willing to create a new annotation parser has to be able to program in Python and have a minimal understanding of its packaging system. 
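As a rough illustration of what such an extension might look like, the sketch below anticipates the two overridable methods described next (a validity check that raises on invalid annotations and a distance between two annotations); the phoneme inventory is non-exhaustive and the exact class and method names are assumptions, not Seshat's actual API:

```python
from difflib import SequenceMatcher

# Illustrative, non-exhaustive French SAMPA phoneme inventory
VALID_SAMPA = set("p b t d k g f v s z S Z m n N J l R w j H "
                  "i e E a A O o u y 2 9 @ e~ a~ o~ 9~".split())

class FrenchSampaParser:  # in Seshat this would subclass the BaseCustomParser class
    def check_annotation(self, annotation):
        for phoneme in annotation.split():
            if phoneme not in VALID_SAMPA:
                raise ValueError(f"'{phoneme}' is not a valid French SAMPA phoneme")

    def distance(self, annot_a, annot_b):
        a, b = annot_a.split(), annot_b.split()
        # e.g. a normalised sequence dissimilarity over phoneme lists
        return 1.0 - SequenceMatcher(None, a, b).ratio()
```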
As presented in our example French SAMPA Parser (Algorithm ), implementing a custom annotation parser only requires overloading two methods from Seshat's BaseCustomParser class: check-annotation: takes an annotation string as input and raises an error if and only if the annotation is deemed to be invalid. It doesn't return anything. distance: takes two annotations as input and should return a float corresponding to the distance between these two annotations. <<</Annotation Parser Customisation>>> <<<Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>> It is necessary to have a measure of confidence to obtain high-quality datasets and therefore to draw valid conclusions from annotations. Annotation tasks on audio and speech data usually have some specificities. The items to annotate have to be both segmented in time and categorised. The segments can be hierarchically defined or overlapping. In addition, the audio stream may require only sparse annotations (especially in-the-wild recordings which contain a lot of non-speech segments). To evaluate speech annotations, the measure needs to take these characteristics into account. That is why we decided to re-implement and compute the $\gamma $ measure (see BIBREF8 for its design and the advantages of this measure over previous agreement measures). First, the $\gamma $ software aligns (tier-wise) the annotations of the different annotators. To align the two sets of annotations, the $\gamma $ measure computes the distance between all the individual units. The difference of position of two annotated units $u$ and $v$ is measured with the positional distance: If the tiers are categorical, the distance for the content of the annotated units $u$ and $v$ is defined as: This distance can be over-written by the custom parser as mentioned above. These two distances are summed with equal weights to obtain the distance between any two annotated units from the 2 annotators. Then, it is possible to obtain the disorder $\delta (a)$ of a specific alignment $a$ by summing the distances of all the aligned units in $a$. All possible alignments $a$ are considered and the one that minimises the disorder $\delta (a)$ is kept. To get the value of $\gamma $, the disorder is chance-corrected to obtain an expected disorder. It is obtained by randomly re-sampling the annotations of the annotators. This means that real annotations are drawn from the annotators, and one position in the audio is randomly chosen. The annotation is split at this random position and the two parts are permuted. It is then possible to obtain an approximation of the expected disorder $\delta _e$. The final agreement measure is defined as: This $\gamma $ measure is automatically computed by the back-end server for the double-annotator tasks. The campaign manager can retrieve these measures in Seshat by downloading a simple CSV file. <<</Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>> <<</Using Seshat>>> <<<Use cases>>> We present two use cases on which Seshat was developed: clinical interviews, and daylong child-centered recordings. <<<Clinical interviews>>> Seshat was initially developed to study the impact of Huntington's Disease BIBREF17 on speech and language production. One hundred and fifty-two interviews between a neuropsychologist and a patient with Huntington's Disease (HD) were recorded between June 2018 and November 2019. 
The campaign manager created a campaign with multiple tiers to annotate the turn takings and the speech/non speech boundaries of the utterances of the patient. For both tasks, the annotations did not need to cover completely the audio (sparsity property mentioned above). For the Turn-taking annotations, there are 3 pre-defined tiers, each one with a single class ('Patient', 'Non-Patient', and 'Noise'), which results in possible overlap between these classes. For the Utterance annotations, there is only one pre-defined class ('Utterance'). To this date, a total of 67 files have been fully annotated with the help of Seshat by a cohort of 18 speech pathologist students (see Figure FIGREF33). Among these, 16 have been done by 2 different annotators independently with the Double-annotator task. The results are summarised in Table TABREF34. Even though there are more categories for Turn-Takings than Utterance (gut2004measuring reported that the more categories the more the task is difficult in speech annotations), the mean $\gamma $ for the Turn-Takings $\gamma = 0.64$ is slightly higher than the one for Utterance $\gamma = 0.61$. And the range of values for the Turn-Takings is smaller than the Utterance. Indeed, the speech pathologists reported the difficulty to annotate the boundary of utterances in spontaneous speech, with several ambiguous cases due to pauses. These results will help us to redefine the protocol and be more precise on the given instructions. <<</Clinical interviews>>> <<<In-the-wild child-centered recordings>>> The Seshat software is also currently used to annotate audio files in a study of day-long audio-recordings captured by two devices (LENA BIBREF18, and a BabyCloud baby-logger device) worn by young children growing up in remote Papua New Guinea. The project aims at establishing language input and outcomes in this seldom-studied population. To establish reliability levels, 20 1-min files were double-annotated by 2 speech pathology students. Among the tasks given to the annotators there was: (1) locating the portions of Speech (Speech activity), (2) locating the speech produced by an adult that is directed to a child or not (Adult-Directed Speech versus Child-Directed Speech). As in the previous example, the annotations do not need to cover the full audio file. The Speech Activity task has only 1 class ('Speech') and the Addressee task has 2 classes ('ADS', 'CDS'). These recordings have been done in naturalistic and noisy conditions; moreover, the annotators do not understand the language. Probably as a result of these challenges, agreement between annotators is lower than in the Clinical interviews use case. This information is nonetheless valuable to the researchers, as it can help them appropriately lower their confidence in the ensuing speech quantity estimates. <<</In-the-wild child-centered recordings>>> <<</Use cases>>> <<<Conclusion and Future work>>> Seshat is a new tool for the management of audio annotation efforts. Seshat enables users to define their own campaign of annotations. Based on this configuration, Seshat automatically enforces the format of the annotations returned by the annotators. Besides, we also add the capability to finely tailor the parsing of the annotations. Finally, Seshat provides automatic routines to compute the inter-rate agreements that are specifically designed for audio annotations. Seshat lays some foundations for more advanced features, either for the interface or the annotation capabilities. 
In future work, we plan to implement automatic task assignment and the integration of a diarization processing step to reduce human effort. Another planned feature is to add the possibility for the campaign manager to design more complex annotation workflows such as, for instance, dependencies between tiers or more intermediate annotation steps. <<</Conclusion and Future work>>> <<</Title>>>
{ "references": [ "Abstract, Development" ], "type": "disordered_section" }
2003.01472
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Seshat: A tool for managing and verifying annotation campaigns of audio data <<<Abstract>>> We introduce Seshat, a new, simple and open-source software to efficiently manage annotations of speech corpora. The Seshat software allows users to easily customise and manage annotations of large audio corpora while ensuring compliance with the formatting and naming conventions of the annotated output files. In addition, it includes procedures for checking the content of annotations following specific rules that are implemented in personalised parsers. Finally, we propose a double-annotation mode, for which Seshat automatically computes an associated inter-annotator agreement with the $\gamma$ measure, taking into account both categorisation and segmentation discrepancies. <<</Abstract>>> <<<Introduction>>> Large corpora of speech, obtained in the laboratory and in naturalistic conditions, are becoming easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2. In this paper, we introduce Seshat, a system for the automated management of annotation campaigns for audio/speech data which addresses these challenges. It is built on two components that communicate via a Restful API: a back-end (server) written in Flask and a front-end (client) in Angular Typescript. Seshat is easy to install for non-developers and easy to use for researchers and annotators while having some extension capabilities for developers. In Section SECREF2, we describe the related work on annotation tools, which do not provide solutions to all the aforementioned challenges during corpus creation. In Section SECREF3, we give an overview of the different functionalities of the software. Then, we explain, in Section SECREF4, the architecture of the software, as well as the several UX/UI design and engineering choices that have been made to facilitate the usage of the platform. We describe how to use Seshat in Section SECREF5 and Section SECREF6 presents two specific use-cases. Finally, we conclude and describe future plans for Seshat in Section SECREF7. <<</Introduction>>> <<<Related Work>>> Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support). 
Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task becomes quickly untraceable as the number of files and annotators grow. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8. Web-based annotations systems. There are several web-based annotation systems for the annotation of audio data. Among them we find light-weight systems, like the VIA software BIBREF9 or Praat on the web BIBREF10 that allow to build simple layers of annotations. However, they do not provide a proper management system for a pool of annotators nor do they integrate annotation checking. On the other side of the spectrum, there are more sophisticated systems with various capabilities. Camomille BIBREF11 and the EMU-SDMS system (that can also be used offline) BIBREF12 allow to work with speech data and to distribute the tasks to several annotators. But these systems require expertise in web hosting and technologies to deploy and modify them. Finally, WebAnno BIBREF13 and GATE Teamware BIBREF14 are the tools that most closely match our main contributions regarding quality control (conformity and consistency checking), annotators' management and flexibility. WebAnno includes consistency checking with the integration of different metrics BIBREF15. However, these tools have only been built for text data. The format and all the custom layers have been designed for Natural Language Processing tasks. Porting WebAnno to support speech data seemed a major engineering challenge. That is why it appeared necessary to develop a new and user-friendly tool addressed to the speech community. <<</Related Work>>> <<<Overview of Seshat>>> Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the several terms used in Seshat's workflow: [font=, leftmargin=1cm, style=nextline] A set of audio/speech files that a Campaign Manager wants to annotate. It is indicated either by a folder containing sound files, or by a CSV summarizing a set of files. We support the same formats as Praat so far: WAV, Flac and MP3. An object that enables the Campaign Manager to assign Annotation Tasks to the Annotators. It references a Corpus, and allows the Manager to track the annotation's tasks progress and completion in real time. At its creation, a Textgrid Checking Scheme can also be defined for that campaign. It is contained in an Annotation Campaign, it references an audio file from the campaign's designated Audio Corpus, and assigned to Annotators. It can either be a Single Annotator Task (assigned to one Annotator) or a Double Annotator Task (assigned to two annotators, who will annotatote the assigned task in parallel). A set of rules defining the TextGrid files' structure and content of the annotations. It is set at the beginning of the Annotation Campaign's creation, and is used to enforce that all TextGrids from the campaign contain the same amount of Tiers, with the same names. It can also enforce, for certain chosen tiers, a set of valid annotations. Users with the rights to create Annotation Campaigns and Annotators user accounts, and assign Annotation Tasks to Annotators. 
Annotator: users who are assigned a set of Annotation Tasks. Their job is to complete the annotation of the audio files with the Praat software. If the TextGrid file they submit does not comply with their Annotation Task's TextGrid Checking Scheme, Seshat pinpoints their annotation errors with detailed messages. The annotator can re-submit the concerned file to the platform based on this feedback. Once they are connected to their instance of Seshat, campaign managers can access ongoing annotation campaigns or create new ones. Campaign managers are able to add annotators, assign annotation tasks and track progress. Annotators see a list of assigned tasks. The first step for them is to download the sound file with its corresponding auto-generated template TextGrid. In the current implementation, the annotation work has to be done locally with Praat. An upcoming version will make use of web tools like Praat on the web BIBREF10. Once the task is completed, the TextGrid file is to be uploaded to Seshat via the web interface. We used the TextGrid format because of the wide acceptance of the Praat software in the speech science community (e.g., language acquisition research, clinical linguistics, phonetics and phonology). The Textgrid Checking Scheme, which encompasses rules on the tiers' naming, file structure, and the content of the annotations, is associated with a specific campaign and defined at the creation of the campaign. The Seshat back-end will automatically check that the submitted TextGrid file conforms to the Annotation Campaign's Textgrid Checking Scheme. Seshat allows the campaign manager to create two types of tasks: single annotator, and double annotator. Regarding the first task, one audio file is attributed to one annotator. Once the annotation is completed, Seshat automatically checks the conformity of the annotation, and only declares a task completed if the conformity check is passed. Regarding the second task, one audio file is attributed to two annotators. The two annotators annotate the same file independently, then the two versions are merged and the annotators are guided through a compare and review process to agree on one final version. We summarise in Figure FIGREF7 the different steps of the double-annotator task. At each step during merging, the two annotators are provided feedback to focus on where the disagreements are. This process also results in the computation of an inter-annotator agreement for each file. The double annotator task can be used to train new annotators alongside experts. Annotating speech data is a joint task of segmentation and categorisation of audio events. That is why we adopted the $\gamma $ measure BIBREF8 to evaluate the inter- or intra-annotator agreement in each individual tier. Campaign managers can customise the distance used by $\gamma $ by inserting a custom distance along with their own parser (see the short snippet of code for a parser of French Phonetics with the SAMPA alphabet in Algorithm ). <<</Overview of Seshat>>> <<<Development>>> <<<Engineering choices>>> Our utmost priority when building Seshat was to make it as easy as possible for others to deploy, use, administer and eventually contribute to. To do so, we chose the most common frameworks that are free and open-source, all of which are detailed in the following sections. 
Additionally, to match the current trend in web development, we decided to use the so-called "web-app" architecture for Seshat, i.e., we separated the application into two distinct entities: a front-end, running in the browser, and a back-end, serving data to the front-end and interacting with the database. <<<Back-end Choices>>> The back-end system runs on a server. It holds and updates the campaign databases and runs the annotation checking and inter-rater agreement evaluation services. We chose Python, given its widespread use in the scientific community, with a wide array of speech and linguistic packages. Moreover, its usage on the back-end side will allow the future integration of powerful speech processing tools like Pyannote BIBREF16 to semi-automatize annotations. We thus went for Python 3.6 for Seshat's server back-end. We used the Flask-Smorest extension (which is based on Flask) to clearly and thoroughly document our API, which can be exported to the popular OpenAPI 3.0.2 RESTful API description format. The files and server data are stored in a MongoDB database, chosen for its flexible document model and general ease of use. We used the Object-Relational Mapping (ORM) MongoEngine to define our database schemas and interact with that database. MongoDB's GridFS system also allowed us to store annotation files (which are usually very light-weight) directly in the database, instead of going through the file system. <<</Back-end Choices>>> <<<Front-end Choices>>> The front-end handles all of the interactions of the users (campaign manager or annotator) with the databases. It is implemented as an App within their browser. We decided to base Seshat's front-end on the Angular TypeScript framework. Despite its steep learning curve, it enforces strict design patterns that guarantee that others can make additions to our code without jeopardising the stability of the App. Angular TypeScript has wide community support in the web development industry and is backed by Google and Microsoft. Moreover, the fact that it is based on TypeScript alleviates the numerous shortcomings of JavaScript, ensuring our implementation's readability and stability. <<</Front-end Choices>>> <<</Engineering choices>>> <<<UX/UI Choices>>> The interface and the features we selected for our implementation are the result of a year-long iterative process involving a team of annotators, two campaign managers and software engineers. We followed some guiding principles from the recent Material design language. Our goal while designing our interface (with the help of a professional designer) was to make it fully usable by non-technical people. We also put some extra care into the annotators' interface to give them a clear sense of what is to be done, how they should follow the annotation protocol, and how to correct potential errors in their annotations (see Figure FIGREF21). The goal was to reduce the number of actions annotators have to perform and enable them to focus only on the annotation content. <<</UX/UI Choices>>> <<</Development>>> <<<Using Seshat>>> <<<Installation and Setup>>> Setting up a modern fully-fledged web service is an arduous task, usually requiring a seasoned system administrator as well as sometimes having very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). 
For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation. Importing an audio corpus that you are willing to annotate is as easy as dropping files into a default `corpora/` folder. It is possible to either drop a folder containing audio files (with no constraints on the folder's structure), or a CSV file listing audio filenames along with their durations (in case the files are sensitive and you're not willing to risk them being hosted on the server). It is then possible to review the automatically imported files via the web interface. <<</Installation and Setup>>> <<<Launching and monitoring an annotation campaign>>> The campaign manager can easily define and monitor an annotation campaign. As shown in Figure FIGREF33, the online form enables them to choose corpora, and to pre-define and pre-configure the annotation scheme (tiers and parsers). There are 2 types of tiers already implemented by default: one with no check at all, and one with pre-defined categories. For the latter, these categories are pre-defined when the campaign is created. Only campaign managers can access and build new campaigns. If campaign managers have several campaigns, they can easily switch between them via the menu bar or get a full overview with the dashboard (see Figure FIGREF26). The campaign managers can visualise the progress of the assigned tasks at the campaign level or, more precisely, at the task level. They can retrieve all the intermediate files that have been created for each task. For instance, the campaign manager can examine qualitatively and quantitatively what the annotation differences are before the merge phases of the double annotator task. <<</Launching and monitoring an annotation campaign>>> <<<Scripting API>>> For those willing to interact with Seshat using code, it is possible to do so using either its RESTful API or its command-line interface (CLI). The API endpoints that can be called are all listed in a simple interface, and calls can be made from any programming language able to make HTTP requests. The CLI can be used via your terminal, and can therefore be scripted in Bash. A typical usage of these features would be to assign annotation tasks from a large speech corpus (spoken by several speakers) to a large pool of annotators, all the while making sure each annotator has a similar number of tasks, with each speaker being evenly distributed among annotators as well. This would be tedious to do manually via the user interface, but easy to program in any scripting language. <<</Scripting API>>> <<<Annotation Parser Customisation>>> We aimed at a reasonable trade-off between simplicity and flexibility for the TextGrid annotation checking component. However, we understand (from our own experience in particular) that sometimes annotations can follow a very specific and complex standard (for instance, parsing SAMPA phoneme strings). To allow users to define their own annotation standards, we added the possibility for users to define an annotation parser, via a simple package-based extension system (taking inspiration from pyannote's extension system). Anyone willing to create a new annotation parser has to be able to program in Python and have a minimal understanding of its packaging system. 
As presented in our example French SAMPA Parser (Algorithm ), implementing a custom annotation parser only requires the overload of two methods from Seshat's BaseCustomParser class: check-annotation: takes an annotation string as input and raises an error if and only if the annotation is deemed to be invalid. It doesn't return anything. distance: takes two annotations as input and should return a float corresponding to the distance between these two annotations. <<</Annotation Parser Customisation>>> <<<Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>> It is necessary to have a measure of confidence to obtain high-quality datasets and therefore to draw valid conclusions from annotations. Annotation tasks of audio and speech data usually have some specificities. The items to annotate have to be both segmented in time and categorised. The segments can be hierarchically defined or overlapping. In addition, the audio stream may require only sparse annotations (especially in-the-wild recordings which contain a lot of non-speech segments). To evaluate speech annotations, the measure needs to take these characteristics into account. That is why we decided to re-implement and compute the $\gamma $ measure (see mathet2015unified for its design and the advantages of this measure over previous agreement measures). First, the $\gamma $ software aligns (tier-wise) the annotations of the different annotators. To align the two sets of annotations, the $\gamma $ software measures the distance between all the individual units. The difference of position of two annotated units $u$ and $v$ is measured with the positional distance: If the tiers are categorical, the distance for the content of the annotated units $u$ and $v$ is defined as: This distance can be over-written by the custom parser as mentioned above. These two distances are summed with equal weights to obtain the distance between all annotated units from the 2 annotators. Then, it is possible to obtain the disorder $\delta (a)$ of a specific alignment $a$ by summing the distances of all the aligned units in $a$. All possible alignments $a$ are considered and the one that minimises the disorder $\delta (a)$ is kept. To get the value of $\gamma $, the disorder is chance-corrected to obtain an expected disorder. It is obtained by randomly re-sampling the annotations of the annotators. This means that real annotations are drawn from the annotators, and one position in the audio is randomly chosen. The annotation is split at this random position and the two parts are permuted. It is then possible to obtain an approximation of the expected disorder $\delta _e$. The final agreement measure is defined as: This $\gamma $ measure is automatically computed by the back-end server for the double-annotator tasks. The campaign manager can retrieve these measures in Seshat by downloading a simple CSV file. <<</Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>> <<</Using Seshat>>> <<<Use cases>>> We present two use cases on which Seshat was developed: clinical interviews, and daylong child-centered recordings. <<<Clinical interviews>>> Seshat was initially developed to study the impact of Huntington's Disease BIBREF17 on speech and language production. One hundred and fifty-two interviews between a neuropsychologist and a patient with Huntington's Disease (HD) were recorded between June 2018 and November 2019. 
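The code listing referenced above as "Algorithm" is not reproduced in this excerpt. As a rough, hypothetical sketch of the two-method contract described in the Annotation Parser Customisation section (a check-annotation method that raises on invalid input, and a distance method returning a float, both on Seshat's BaseCustomParser), a custom French SAMPA parser could look like the following; the base-class signatures, the reduced SAMPA inventory and the choice of edit distance are illustrative assumptions, not Seshat's actual implementation.

```python
# Illustrative sketch only: stand-in base class and example SAMPA parser.

class BaseCustomParser:
    """Stand-in for Seshat's BaseCustomParser (hypothetical signatures)."""

    def check_annotation(self, annot: str) -> None:
        raise NotImplementedError

    def distance(self, annot_a: str, annot_b: str) -> float:
        raise NotImplementedError


# Reduced French SAMPA phoneme inventory, for illustration purposes only.
FRENCH_SAMPA = set(
    "p b t d k g f v s z S Z m n J N l R j w H "
    "i e E a A O o u y 2 9 @ e~ a~ o~ 9~".split()
)


class FrenchSAMPAParser(BaseCustomParser):
    """Accepts space-separated strings of French SAMPA phonemes."""

    def check_annotation(self, annot: str) -> None:
        # Raise if and only if the annotation is invalid; return nothing.
        invalid = [p for p in annot.split() if p not in FRENCH_SAMPA]
        if invalid:
            raise ValueError(f"Invalid SAMPA phoneme(s): {invalid}")

    def distance(self, annot_a: str, annot_b: str) -> float:
        # Any float-valued dissimilarity can be plugged into the gamma
        # computation; a length-normalised phoneme-level edit distance
        # is one reasonable choice.
        a, b = annot_a.split(), annot_b.split()
        prev = list(range(len(b) + 1))
        for i, pa in enumerate(a, 1):
            curr = [i]
            for j, pb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,               # deletion
                                curr[j - 1] + 1,           # insertion
                                prev[j - 1] + (pa != pb))) # substitution
            prev = curr
        return prev[-1] / max(len(a), len(b), 1)
```

For instance, FrenchSAMPAParser().distance("b o~ Z u R", "b o Z u R") evaluates to 0.2 under this sketch: one phoneme substitution out of five.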
The campaign manager created a campaign with multiple tiers to annotate the turn takings and the speech/non-speech boundaries of the utterances of the patient. For both tasks, the annotations did not need to cover the audio completely (the sparsity property mentioned above). For the Turn-taking annotations, there are 3 pre-defined tiers, each one with a single class ('Patient', 'Non-Patient', and 'Noise'), which results in possible overlap between these classes. For the Utterance annotations, there is only one pre-defined class ('Utterance'). To date, a total of 67 files have been fully annotated with the help of Seshat by a cohort of 18 speech pathologist students (see Figure FIGREF33). Among these, 16 have been done by 2 different annotators independently with the Double-annotator task. The results are summarised in Table TABREF34. Even though there are more categories for Turn-Takings than for Utterance (gut2004measuring reported that the more categories there are, the more difficult the speech annotation task is), the mean $\gamma $ for the Turn-Takings ($\gamma = 0.64$) is slightly higher than the one for Utterance ($\gamma = 0.61$). The range of values for the Turn-Takings is also smaller than for the Utterance. Indeed, the speech pathologists reported difficulty in annotating the boundaries of utterances in spontaneous speech, with several ambiguous cases due to pauses. These results will help us to redefine the protocol and be more precise in the given instructions. <<</Clinical interviews>>> <<<In-the-wild child-centered recordings>>> The Seshat software is also currently used to annotate audio files in a study of day-long audio-recordings captured by two devices (LENA BIBREF18, and a BabyCloud baby-logger device) worn by young children growing up in remote Papua New Guinea. The project aims at establishing language input and outcomes in this seldom-studied population. To establish reliability levels, 20 1-min files were double-annotated by 2 speech pathology students. Among the tasks given to the annotators were: (1) locating the portions of Speech (Speech activity), (2) locating the speech produced by an adult that is directed to a child or not (Adult-Directed Speech versus Child-Directed Speech). As in the previous example, the annotations do not need to cover the full audio file. The Speech Activity task has only 1 class ('Speech') and the Addressee task has 2 classes ('ADS', 'CDS'). These recordings have been made in naturalistic and noisy conditions; moreover, the annotators do not understand the language. Probably as a result of these challenges, agreement between annotators is lower than in the Clinical interviews use case. This information is nonetheless valuable to the researchers, as it can help them appropriately lower their confidence in the ensuing speech quantity estimates. <<</In-the-wild child-centered recordings>>> <<</Use cases>>> <<<Conclusion and Future work>>> Seshat is a new tool for the management of audio annotation efforts. Seshat enables users to define their own campaigns of annotations. Based on this configuration, Seshat automatically enforces the format of the annotations returned by the annotators. Besides, we also add the capability to finely tailor the parsing of the annotations. Finally, Seshat provides automatic routines to compute the inter-rater agreements that are specifically designed for audio annotations. Seshat lays some foundations for more advanced features, either for the interface or the annotation capabilities. 
In future work, we plan to implement automatic task assignment and to integrate a diarization processing step to reduce human effort. Another planned feature is to add the possibility for the campaign manager to design more complex annotation workflows such as, for instance, dependencies between tiers or more intermediate annotation steps. <<</Conclusion and Future work>>> <<</Title>>>
{ "references": [ "Abstract, Conclusion and Future work" ], "type": "disordered_section" }
2003.01472
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Seshat: A tool for managing and verifying annotation campaigns of audio data <<<Abstract>>> We introduce Seshat, a new, simple and open-source software to efficiently manage annotations of speech corpora. The Seshat software allows users to easily customise and manage annotations of large audio corpora while ensuring compliance with the formatting and naming conventions of the annotated output files. In addition, it includes procedures for checking the content of annotations following specific rules are implemented in personalised parsers. Finally, we propose a double-annotation mode, for which Seshat computes automatically an associated inter-annotator agreement with the $\gamma$ measure taking into account the categorisation and segmentation discrepancies. <<</Abstract>>> <<<Introduction>>> Large corpora of speech, obtained in the laboratory and in naturalistic conditions, become easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2. In this paper, we introduce Seshat, a system for the automated management of annotation campaigns for audio/speech data which addresses these challenges. It is built on two components that communicate via a Restful API: a back-end (server) written in Flask and a front-end (client) in Angular Typescript. Seshat is easy to install for non-developers and easy to use for researchers and annotators while having some extension capabilities for developers. In Section SECREF2, we describe the related work on annotations tools, which do not provide solutions to all the aforementioned challenges during corpus creation. In Section SECREF3, we make an overview of the different functionalities of the software. Then, we explain, in Section SECREF4, the architecture of the software, and also the several UX/UI design and engineering choices that have been made to facilitate the usage of the platform. We describe how to use of Seshat in Section SECREF5 and Section SECREF6 presents two specific use-cases. Finally, we conclude and describe future plans for Seshat in Section SECREF7. <<</Introduction>>> <<<Related Work>>> Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support). 
Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task quickly becomes untraceable as the number of files and annotators grows. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8. Web-based annotation systems. There are several web-based annotation systems for the annotation of audio data. Among them we find light-weight systems, like the VIA software BIBREF9 or Praat on the web BIBREF10, that allow users to build simple layers of annotations. However, they do not provide a proper management system for a pool of annotators, nor do they integrate annotation checking. On the other side of the spectrum, there are more sophisticated systems with various capabilities. Camomille BIBREF11 and the EMU-SDMS system (that can also be used offline) BIBREF12 allow users to work with speech data and to distribute the tasks to several annotators. But these systems require expertise in web hosting and technologies to deploy and modify them. Finally, WebAnno BIBREF13 and GATE Teamware BIBREF14 are the tools that most closely match our main contributions regarding quality control (conformity and consistency checking), annotator management and flexibility. WebAnno includes consistency checking with the integration of different metrics BIBREF15. However, these tools have only been built for text data. The format and all the custom layers have been designed for Natural Language Processing tasks. Porting WebAnno to support speech data seemed a major engineering challenge. That is why it appeared necessary to develop a new and user-friendly tool addressed to the speech community. <<</Related Work>>> <<<Overview of Seshat>>> Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the main terms used in Seshat's workflow: Audio Corpus: a set of audio/speech files that a Campaign Manager wants to annotate. It is indicated either by a folder containing sound files, or by a CSV summarizing a set of files. We support the same formats as Praat so far: WAV, Flac and MP3. Annotation Campaign: an object that enables the Campaign Manager to assign Annotation Tasks to the Annotators. It references a Corpus, and allows the Manager to track the annotation tasks' progress and completion in real time. At its creation, a Textgrid Checking Scheme can also be defined for that campaign. Annotation Task: it is contained in an Annotation Campaign, references an audio file from the campaign's designated Audio Corpus, and is assigned to Annotators. It can either be a Single Annotator Task (assigned to one Annotator) or a Double Annotator Task (assigned to two annotators, who will annotate the assigned task in parallel). Textgrid Checking Scheme: a set of rules defining the TextGrid files' structure and the content of the annotations. It is set at the beginning of the Annotation Campaign's creation, and is used to enforce that all TextGrids from the campaign contain the same number of Tiers, with the same names. It can also enforce, for certain chosen tiers, a set of valid annotations. Campaign Manager: users with the rights to create Annotation Campaigns and Annotator user accounts, and to assign Annotation Tasks to Annotators. 
Annotator: users who are assigned a set of Annotation Tasks. Their job is to complete the annotation of the audio files with the Praat software. If the TextGrid file they submit does not comply with their Annotation Task's TextGrid Checking Scheme, Seshat pinpoints their annotation errors with detailed messages. The annotator can re-submit the concerned file to the platform based on this feedback. Once they are connected to their instance of Seshat, campaign managers can access ongoing annotation campaigns or create new ones. Campaign managers are able to add annotators, assign annotation tasks and track progress. Annotators see a list of assigned tasks. The first step for them is to download the sound file with its corresponding auto-generated template TextGrid. In the current implementation, the annotation work has to be done locally with Praat. An upcoming version will make use of web tools like Praat on the web BIBREF10. Once the task is completed, the TextGrid file is to be uploaded to Seshat via the web interface. We used the TextGrid format because of the wide acceptance of the Praat software in the speech science community (e.g., language acquisition research, clinical linguistics, phonetics and phonology). The Textgrid Checking Scheme, which encompasses rules on the tiers' naming, file structure, and the content of the annotations, is associated with a specific campaign and defined at the creation of the campaign. The Seshat back-end will automatically check that the submitted TextGrid file conforms to the Annotation Campaign's Textgrid Checking Scheme. Seshat allows the campaign manager to create two types of tasks: single annotator, and double annotator. Regarding the first task, one audio file is attributed to one annotator. Once the annotation is completed, Seshat automatically checks the conformity of the annotation, and only declares a task completed if the conformity check is passed. Regarding the second task, one audio file is attributed to two annotators. The two annotators annotate the same file independently, then the two versions are merged and the annotators are guided through a compare and review process to agree on one final version. We summarise in Figure FIGREF7 the different steps of the double-annotator task. At each step during merging, the two annotators are provided feedback to focus on where the disagreements are. This process also results in the computation of an inter-annotator agreement for each file. The double annotator task can be used to train new annotators alongside experts. Annotating speech data is a joint task of segmentation and categorisation of audio events. That is why we adopted the $\gamma $ measure BIBREF8 to evaluate the inter- or intra-annotator agreement in each individual tier. Campaign managers can customise the distance used by $\gamma $ by inserting a custom distance along with their own parser (see the short snippet of code for a parser of French Phonetics with the SAMPA alphabet in Algorithm ). <<</Overview of Seshat>>> <<<Development>>> <<<Engineering choices>>> Our utmost priority when building Seshat was to make it as easy as possible for others to deploy, use, administer and eventually contribute to. To do so, we chose the most common frameworks that are free and open-source, all of which are detailed in the following sections. 
Additionally, to match the current trend in web development, we decided to use the so-called "web-app" architecture for Seshat, i.e., we separated the application into two distinct entities: a front-end, running in the browser, and a back-end, serving data to the front-end and interacting with the database. <<<Back-end Choices>>> The back-end system runs on a server. It holds and updates the campaign databases and runs the annotation checking and inter-rater agreement evaluation services. We chose Python, given its widespread use in the scientific community, with a wide array of speech and linguistic packages. Moreover, its usage on the back-end side will allow the future integration of powerful speech processing tools like Pyannote BIBREF16 to semi-automatize annotations. We thus went for Python 3.6 for Seshat's server back-end. We used the Flask-Smorest extension (which is based on Flask) to clearly and thoroughly document our API, which can be exported to the popular OpenAPI 3.0.2 RESTful API description format. The files and server data are stored in a MongoDB database, chosen for its flexible document model and general ease of use. We used the Object-Relational Mapping (ORM) MongoEngine to define our database schemas and interact with that database. MongoDB's GridFS system also allowed us to store annotation files (which are usually very light-weight) directly in the database, instead of going through the file system. <<</Back-end Choices>>> <<<Front-end Choices>>> The front-end handles all of the interactions of the users (campaign manager or annotator) with the databases. It is implemented as an App within their browser. We decided to base Seshat's front-end on the Angular TypeScript framework. Despite its steep learning curve, it enforces strict design patterns that guarantee that others can make additions to our code without jeopardising the stability of the App. Angular TypeScript has wide community support in the web development industry and is backed by Google and Microsoft. Moreover, the fact that it is based on TypeScript alleviates the numerous shortcomings of JavaScript, ensuring our implementation's readability and stability. <<</Front-end Choices>>> <<</Engineering choices>>> <<<UX/UI Choices>>> The interface and the features we selected for our implementation are the result of a year-long iterative process involving a team of annotators, two campaign managers and software engineers. We followed some guiding principles from the recent Material design language. Our goal while designing our interface (with the help of a professional designer) was to make it fully usable by non-technical people. We also put some extra care into the annotators' interface to give them a clear sense of what is to be done, how they should follow the annotation protocol, and how to correct potential errors in their annotations (see Figure FIGREF21). The goal was to reduce the number of actions annotators have to perform and enable them to focus only on the annotation content. <<</UX/UI Choices>>> <<</Development>>> <<<Using Seshat>>> <<<Installation and Setup>>> Setting up a modern fully-fledged web service is an arduous task, usually requiring a seasoned system administrator as well as sometimes having very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). 
For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation. Importing an audio corpus that you are willing to annotate is as easy as dropping files into a default `corpora/` folder. It is possible to either drop a folder containing audio files (with no constraints on the folder's structure), or a CSV file listing audio filenames along with their durations (in case the files are sensitive and you're not willing to risk them being hosted on the server). It is then possible to review the automatically imported files via the web interface. <<</Installation and Setup>>> <<<Launching and monitoring an annotation campaign>>> The campaign manager can easily define and monitor an annotation campaign. As shown in Figure FIGREF33, the online form enables them to choose corpora, and to pre-define and pre-configure the annotation scheme (tiers and parsers). There are 2 types of tiers already implemented by default: one with no check at all, and one with pre-defined categories. For the latter, these categories are pre-defined when the campaign is created. Only campaign managers can access and build new campaigns. If campaign managers have several campaigns, they can easily switch between them via the menu bar or get a full overview with the dashboard (see Figure FIGREF26). The campaign managers can visualise the progress of the assigned tasks at the campaign level or, more precisely, at the task level. They can retrieve all the intermediate files that have been created for each task. For instance, the campaign manager can examine qualitatively and quantitatively what the annotation differences are before the merge phases of the double annotator task. <<</Launching and monitoring an annotation campaign>>> <<<Scripting API>>> For those willing to interact with Seshat using code, it is possible to do so using either its RESTful API or its command-line interface (CLI). The API endpoints that can be called are all listed in a simple interface, and calls can be made from any programming language able to make HTTP requests. The CLI can be used via your terminal, and can therefore be scripted in Bash. A typical usage of these features would be to assign annotation tasks from a large speech corpus (spoken by several speakers) to a large pool of annotators, all the while making sure each annotator has a similar number of tasks, with each speaker being evenly distributed among annotators as well. This would be tedious to do manually via the user interface, but easy to program in any scripting language. <<</Scripting API>>> <<<Annotation Parser Customisation>>> We aimed at a reasonable trade-off between simplicity and flexibility for the TextGrid annotation checking component. However, we understand (from our own experience in particular) that sometimes annotations can follow a very specific and complex standard (for instance, parsing SAMPA phoneme strings). To allow users to define their own annotation standards, we added the possibility for users to define an annotation parser, via a simple package-based extension system (taking inspiration from pyannote's extension system). Anyone willing to create a new annotation parser has to be able to program in Python and have a minimal understanding of its packaging system. 
As presented in our example French SAMPA Parser (Algorithm ), implementing a custom annotation parser only requires the overload of two methods from Seshat's BaseCustomParser class: check-annotation: takes an annotation string as input and raises an error if and only if the annotation is deemed to be invalid. It doesn't return anything. distance: takes two annotations as input and should return a float corresponding to the distance between these two annotations. <<</Annotation Parser Customisation>>> <<<Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>> It is necessary to have a measure of confidence to obtain high-quality datasets and therefore to draw valid conclusions from annotations. Annotation tasks of audio and speech data usually have some specificities. The items to annotate have to be both segmented in time and categorised. The segments can be hierarchically defined or overlapping. In addition, the audio stream may require only sparse annotations (especially in-the-wild recordings which contain a lot of non-speech segments). To evaluate speech annotations, the measure needs to take these characteristics into account. That is why we decided to re-implement and compute the $\gamma $ measure (see mathet2015unified for its design and the advantages of this measure over previous agreement measures). First, the $\gamma $ software aligns (tier-wise) the annotations of the different annotators. To align the two sets of annotations, the $\gamma $ software measures the distance between all the individual units. The difference of position of two annotated units $u$ and $v$ is measured with the positional distance: If the tiers are categorical, the distance for the content of the annotated units $u$ and $v$ is defined as: This distance can be over-written by the custom parser as mentioned above. These two distances are summed with equal weights to obtain the distance between all annotated units from the 2 annotators. Then, it is possible to obtain the disorder $\delta (a)$ of a specific alignment $a$ by summing the distances of all the aligned units in $a$. All possible alignments $a$ are considered and the one that minimises the disorder $\delta (a)$ is kept. To get the value of $\gamma $, the disorder is chance-corrected to obtain an expected disorder. It is obtained by randomly re-sampling the annotations of the annotators. This means that real annotations are drawn from the annotators, and one position in the audio is randomly chosen. The annotation is split at this random position and the two parts are permuted. It is then possible to obtain an approximation of the expected disorder $\delta _e$. The final agreement measure is defined as: This $\gamma $ measure is automatically computed by the back-end server for the double-annotator tasks. The campaign manager can retrieve these measures in Seshat by downloading a simple CSV file. <<</Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>> <<</Using Seshat>>> <<<Use cases>>> We present two use cases on which Seshat was developed: clinical interviews, and daylong child-centered recordings. <<<Clinical interviews>>> Seshat was initially developed to study the impact of Huntington's Disease BIBREF17 on speech and language production. One hundred and fifty-two interviews between a neuropsychologist and a patient with Huntington's Disease (HD) were recorded between June 2018 and November 2019. 
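The definition announced above ("The final agreement measure is defined as:") is not reproduced in this excerpt. Based on the cited mathet2015unified reference, the chance-corrected measure presumably takes the standard form sketched below; this is a reconstruction from that reference, not a quotation of the present paper's equation:

$\gamma = 1 - \delta (a^{*}) / \delta _e$, where $a^{*}$ is the alignment that minimises the disorder $\delta (a)$ and $\delta _e$ is the expected disorder obtained by re-sampling,

so that $\gamma = 1$ corresponds to perfect agreement while values close to 0 indicate chance-level agreement.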
The campaign manager created a campaign with multiple tiers to annotate the turn takings and the speech/non-speech boundaries of the utterances of the patient. For both tasks, the annotations did not need to cover the audio completely (the sparsity property mentioned above). For the Turn-taking annotations, there are 3 pre-defined tiers, each one with a single class ('Patient', 'Non-Patient', and 'Noise'), which results in possible overlap between these classes. For the Utterance annotations, there is only one pre-defined class ('Utterance'). To date, a total of 67 files have been fully annotated with the help of Seshat by a cohort of 18 speech pathologist students (see Figure FIGREF33). Among these, 16 have been done by 2 different annotators independently with the Double-annotator task. The results are summarised in Table TABREF34. Even though there are more categories for Turn-Takings than for Utterance (gut2004measuring reported that the more categories there are, the more difficult the speech annotation task is), the mean $\gamma $ for the Turn-Takings ($\gamma = 0.64$) is slightly higher than the one for Utterance ($\gamma = 0.61$). The range of values for the Turn-Takings is also smaller than for the Utterance. Indeed, the speech pathologists reported difficulty in annotating the boundaries of utterances in spontaneous speech, with several ambiguous cases due to pauses. These results will help us to redefine the protocol and be more precise in the given instructions. <<</Clinical interviews>>> <<<In-the-wild child-centered recordings>>> The Seshat software is also currently used to annotate audio files in a study of day-long audio-recordings captured by two devices (LENA BIBREF18, and a BabyCloud baby-logger device) worn by young children growing up in remote Papua New Guinea. The project aims at establishing language input and outcomes in this seldom-studied population. To establish reliability levels, 20 1-min files were double-annotated by 2 speech pathology students. Among the tasks given to the annotators were: (1) locating the portions of Speech (Speech activity), (2) locating the speech produced by an adult that is directed to a child or not (Adult-Directed Speech versus Child-Directed Speech). As in the previous example, the annotations do not need to cover the full audio file. The Speech Activity task has only 1 class ('Speech') and the Addressee task has 2 classes ('ADS', 'CDS'). These recordings have been made in naturalistic and noisy conditions; moreover, the annotators do not understand the language. Probably as a result of these challenges, agreement between annotators is lower than in the Clinical interviews use case. This information is nonetheless valuable to the researchers, as it can help them appropriately lower their confidence in the ensuing speech quantity estimates. <<</In-the-wild child-centered recordings>>> <<</Use cases>>> <<<Conclusion and Future work>>> Seshat is a new tool for the management of audio annotation efforts. Seshat enables users to define their own campaigns of annotations. Based on this configuration, Seshat automatically enforces the format of the annotations returned by the annotators. Besides, we also add the capability to finely tailor the parsing of the annotations. Finally, Seshat provides automatic routines to compute the inter-rater agreements that are specifically designed for audio annotations. Seshat lays some foundations for more advanced features, either for the interface or the annotation capabilities. 
In future work, we plan to implement automatic task assignment and to integrate a diarization processing step to reduce human effort. Another planned feature is to add the possibility for the campaign manager to design more complex annotation workflows such as, for instance, dependencies between tiers or more intermediate annotation steps. <<</Conclusion and Future work>>> <<</Title>>>
{ "references": [ "Conclusion and Future work, Introduction" ], "type": "disordered_section" }
2004.01980
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Hooks in the Headline: Learning to Generate Headlines with Controlled Styles <<<Abstract>>> Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers. With no style-specific article-headline pair (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduced a novel parameter sharing scheme to further disentangle the style from the text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of our model generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references. <<</Abstract>>> <<<Introduction>>> Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorableness. However, currently even the best Headline Generation (HG) system can only fulfill the above requirement yet performs poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.” To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others. SHG is a highly skilled creative process, and usually only possessed by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise of a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style. In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. 
Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2. The main contributions of our paper are listed below: To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data. Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones. Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box. <<</Introduction>>> <<<Related Work>>> Our work is related to summarization and text style transfer. <<<Headline Generation as Summarization>>> Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27. Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers' comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. 
Our model does not have this limitation, thus enabling transferring to many more styles. <<</Headline Generation as Summarization>>> <<<Text Style Transfer>>> Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; however, in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem. <<</Text Style Transfer>>> <<</Related Work>>> <<<Methods>>> <<<Problem Formulation>>> The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises of sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$. Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$. <<</Problem Formulation>>> <<<Seq2Seq Model Architecture>>> For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG. <<</Seq2Seq Model Architecture>>> <<<Multitask Training Scheme>>> To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10). <<<Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. 
The loss function of this task is where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows: where $L$ is the sequence length. <<</Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> <<<DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is the corrupted version of $\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\mathcal {L}_T$: where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes where $\lambda $ is a hyper-parameter. <<</DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> <<</Multitask Training Scheme>>> <<<Parameter-Sharing Scheme>>> More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below. <<<Type 1. Style Layer Normalization>>> Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. 
This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$: where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data. Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers. <<</Type 1. Style Layer Normalization>>> <<<Type 2. Style-Guided Encoder Attention>>> Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We insert this thinking into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows: where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the scaled dot-product matrix for affine transformation; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the dot-product matrix $\mathbf {W_q^s}$ of the query for different styles, so that $\mathbf {Q}$ can be different to induce diverse attention patterns. <<</Type 2. Style-Guided Encoder Attention>>> <<</Parameter-Sharing Scheme>>> <<</Methods>>> <<<Experiments>>> <<<Datasets>>> We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence length in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively. <<<Source Dataset>>> The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set. We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treat the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstracts-headlines pairs. We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We use the short summaries in the original dataset as the news abstracts and automatically parsed the headlines for each news from the dumped news web pages, and in total collected 90,236 news abstract-headline pairs. <<</Source Dataset>>> <<<Three Target Style Corpora>>> <<<Humor and Romance>>> For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets. 
<<</Humor and Romance>>> <<<Clickbait>>> We also tried to learn the writing style from the click-baity headlines since they have shown superior attraction to readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset. We collected 500K headlines for our use. Some examples from each style corpus are listed in Table TABREF32. <<</Clickbait>>> <<</Three Target Style Corpora>>> <<</Datasets>>> <<<Baselines>>> We compared the proposed TitleStylist against the following five strong baseline approaches. <<<Neural Headline Generation (NHG)>>> We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data. <<</Neural Headline Generation (NHG)>>> <<<Gigaword-MASS>>> We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles. <<</Gigaword-MASS>>> <<<Neural Story Teller (NST)>>> It breaks down the task into two steps, which first generates headlines from the aforementioned NHG model, then applies style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can refer to the official website. <<</Neural Story Teller (NST)>>> <<<Fine-Tuned>>> We first train the NHG model as mentioned above, then further fine-tuned it on the target style corpus via DAE training. <<</Fine-Tuned>>> <<<Multitask>>> We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and trained the model on both the summarization and DAE tasks. The model architecture is the same as NHG. <<</Multitask>>> <<</Baselines>>> <<<Evaluation Metrics>>> To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation. <<<Setup of Human Evaluation>>> We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices. 
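The two human-evaluation scores just defined reduce to simple aggregations; a small illustrative sketch (function and variable names are invented for illustration, not taken from the paper):

import statistics

def likert_score(ratings_per_annotator):
    # ratings_per_annotator: one list of 1-10 integer ratings per annotator;
    # the final score is the average over all annotators and items
    return statistics.mean(r for ratings in ratings_per_annotator for r in ratings)

def style_strength(choices, system_name):
    # choices: the system picked as most stylistic (e.g. most humorous) per item;
    # the style strength score is the proportion of times a system is chosen
    return choices.count(system_name) / len(choices)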
<<</Setup of Human Evaluation>>> <<<Setup of Automatic Evaluation>>> Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation results are necessary evidence that complements the human evaluation of model effectiveness. <<<Summarization Quality>>> We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit. <<</Summarization Quality>>> <<<Language Fluency>>> We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs. <<</Language Fluency>>> <<</Setup of Automatic Evaluation>>> <<</Evaluation Metrics>>> <<<Experimental Details>>> We used the fairseq code base BIBREF52. During training, we use the Adam optimizer with an initial learning rate of $5\times 10^{-4}$, and the batch size is set as 3072 tokens for each GPU with the parameter update frequency set as 4. For the random corruption in DAE training, we follow the standard practice of randomly deleting or blanking words with a uniform probability of $0.2$ and randomly shuffling the word order within 5 tokens. All datasets are lower-cased. $\lambda $ is set as 0.5 in experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, and the sampling strategy follows the uniform distribution with the probability being equal to $\lambda $. <<</Experimental Details>>> <<</Experiments>>> <<<Results and Discussion>>> <<<Human Evaluation Results>>> The human evaluation provides a comprehensive measurement of the performance. We conduct experiments on four criteria: relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criterion in Table TABREF57. Note that in the automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform worse than the other methods (in Section SECREF58); therefore we removed them from the human evaluation to save unnecessary work for the human raters. <<<Relevance>>> We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity. <<</Relevance>>> <<<Attraction>>> In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1.
(2) Our TitleStylist can generate more attractive headlines than the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles improves attraction, and that specializing some of the model parameters for different styles can further enhance it. (3) Adapting the model to the “Clickbait” style creates the most attractive headlines, even outweighing the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. It should be noted that although we introduced the “Clickbait” style into our summarization system, we still made sure to generate relevant headlines rather than overly exaggerated ones, which can be verified by our relevance scores. <<</Attraction>>> <<<Fluency>>> The human-annotated fluency scores in Table TABREF51 verified that the headlines generated by our TitleStylist are comparable or superior to the human-written headlines in terms of readability. <<</Fluency>>> <<<Style Strength>>> We also validated that our TitleStylist carries style more strongly than the Multitask and NHG baselines by summarizing the percentage of human choices for the most humorous or romantic headlines in Table TABREF57. <<</Style Strength>>> <<</Human Evaluation Results>>> <<<Automatic Evaluation Results>>> Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complementary evidence that the model has an acceptable level of summarization ability. Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body. From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on a dataset more than 20 times larger. Both the NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps, summarization and style transfer, where the latter step is detached from the summarization task, which prevents the model from maintaining its summarization capability. In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data.
However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which sheds light on potential future work in summarization that incorporates unsupervised learning as augmentation. From Table TABREF59, we find that TitleStylist-F achieves the best summarization performance. This implies that, compared with the Multitask baseline where the two tasks share all parameters, specialization of the layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization. It is noteworthy that the summarization scores for TitleStylist are lower than those of TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch focuses more on bringing stylistic linguistic patterns into the generated summaries, so the outputs deviate from pure summarization to some degree. However, their relevance remains close to the baseline NHG, which is the starting point we want to improve on. Later in the next section, we further validate that these headlines are faithful to the news article through human evaluation. We also report the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table TABREF59. All outputs from the baselines NHG and Multitask and our proposed TitleStylist show PPL similar to that of the test set used in the fine-tuning stage (PPL 42.5), indicating that they are all fluent expressions for news headlines. <<</Automatic Evaluation Results>>> <<<Extension to Multi-Style>>> We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headline data and the DAE task on the three target style corpora, making the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and sharing the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to those of TitleStylist for all three styles. Besides, we conducted another human study to determine which of the two models produces the more attractive headline, allowing the human annotators to choose both options if they deem them equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive with TitleStylist. TitleStylist-Versatile thus generates headlines in multiple styles at once, which is a novel and efficient feature. <<</Extension to Multi-Style>>> <<</Results and Discussion>>> <<<Conclusion>>> We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed a parameter-sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated that our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Methods, Experiments" ], "type": "disordered_section" }
1911.03597
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Zero-Shot Paraphrase Generation with Multilingual Language Models <<<Abstract>>> Leveraging multilingual parallel texts to automatically generate paraphrases has drawn much attention as size of high-quality paraphrase corpus is limited. Round-trip translation, also known as the pivoting method, is a typical approach to this end. However, we notice that the pivoting process involves multiple machine translation models and is likely to incur semantic drift during the two-step translations. In this paper, inspired by the Transformer-based language models, we propose a simple and unified paraphrasing model, which is purely trained on multilingual parallel data and can conduct zero-shot paraphrase generation in one step. Compared with the pivoting approach, paraphrases generated by our model is more semantically similar to the input sentence. Moreover, since our model shares the same architecture as GPT (Radford et al., 2018), we are able to pre-train the model on large-scale unparallel corpus, which further improves the fluency of the output sentences. In addition, we introduce the mechanism of denoising auto-encoder (DAE) to improve diversity and robustness of the model. Experimental results show that our model surpasses the pivoting method in terms of relevance, diversity, fluency and efficiency. <<</Abstract>>> <<<Introduction>>> Paraphrasing is to express the same meaning using different expressions. Paraphrase generation plays an important role in various natural language processing (NLP) tasks such as response diversification in dialogue system, query reformulation in information retrieval, and data augmentation in machine translation. Recently, models based on Seq2Seq learning BIBREF1 have achieved the state-of-the-art results on paraphrase generation. Most of these models BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 focus on training the paraphrasing models based on a paraphrase corpus, which contains a number of pairs of paraphrases. However, high-quality paraphrases are usually difficult to acquire in practice, which becomes the major limitation of these methods. Therefore, we focus on zero-shot paraphrase generation approach in this paper, which aims to generate paraphrases without requiring a paraphrase corpus. A natural choice is to leverage the bilingual or multilingual parallel data used in machine translation, which are of great quantity and quality. The basic assumption is that if two sentences in one language (e.g., English) have the same translation in another language (e.g., French), they are assumed to have the same meaning, i.e., they are paraphrases of each other. Therefore, one typical solution for paraphrasing in one language is to pivot over a translation in another language. Specifically, it is implemented as the round-trip translation, where the input sentence is translated into a foreign sentence, then back-translated into a sentence in the same language as input BIBREF7. The process is shown in Figure FIGREF1. Apparently, two machine translation systems (English$\rightarrow $French and French$\leftarrow $English) are needed to conduct the generation of a paraphrase. 
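As a concrete picture of the round-trip (pivoting) baseline described above, a minimal sketch follows; the translate() function stands in for any pre-trained translation system and is purely hypothetical:

def round_trip_paraphrase(sentence, translate, src_lang="en", pivot_lang="fr", n_pivots=1):
    # Pivot the sentence through a foreign language and back; each sampled
    # pivot Z yields one candidate paraphrase Y. In practice only a handful
    # of pivots are explored, which is one source of semantic drift.
    paraphrases = []
    for _ in range(n_pivots):
        pivot = translate(sentence, src=src_lang, tgt=pivot_lang)  # X -> Z
        paraphrases.append(translate(pivot, src=pivot_lang, tgt=src_lang))  # Z -> Y
    return paraphrases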
Although the pivoting approach works in general, there are several intrinsic defects. First, the round-trip system can hardly explore all the paths of paraphrasing, since it is pivoted through the finite intermedia outputs of a translation system. More formally, let $Z$ denote the meaning representation of a sentence $X$, and finding paraphrases of $X$ can be treated as sampling another sentence $Y$ conditioning on the representation $Z$. Ideally, paraphrases should be generated by following $P(Y|X) = \int _{Z} P(Y|Z)P(Z|X)dZ$, which is marginalized over all possible values of $Z$. However, in the round-trip translation, only one or several $Z$s are sampled from the machine translation system $P(Z|X)$, which can lead to an inaccurate approximation of the whole distribution and is prone to the problem of semantic drift due to the sampling variances. Second, the results are determined by the pre-existing translation systems, and it is difficult to optimize the pipeline end-to-end. Last, the system is not efficient especially at the inference stage, because it needs two rounds of translation decoding. To address these issues, we propose a single-step zero-shot paraphrase generation model, which can be trained on machine translation corpora in an end-to-end fashion. Unlike the pivoting approach, our proposed model does not involve explicit translation between multiple languages. Instead, it directly learns the paraphrasing distribution $P(Y|X)$ from the parallel data sampled from $P(Z|X)$ and $P(Y|Z)$. Specifically, we build a Transformer-based BIBREF8 language model, which is trained on the concatenated bilingual parallel sentences with language indicators. At inference stage, given a input sentence in a particular language, the model is guided to generate sentences in the same language, which are deemed as paraphrases of the input. Our model is simple and compact, and can empirically reduce the risk of semantic drift to a large extent. Moreover, we can initialize our model with generative pre-training (GPT) BIBREF0 on monolingual data, which can benefit the generation in low-resource languages. Finally, we borrow the idea of denoising auto-encoder (DAE) to further enhance robustness in paraphrase generation. We conduct experiments on zero-shot paraphrase generation task, and find that the proposed model significantly outperforms the pivoting approach in terms of both automatic and human evaluations. Meanwhile, the training and inference cost are largely reduced compared to the pivot-based methods which involves multiple systems. <<</Introduction>>> <<<Methodology>>> <<<Transformer-based Language Model>>> Transformer-based language model (TLM) is a neural language model constructed with a stack of Transformer decoder layers BIBREF8. Given a sequence of tokens, TLM is trained with maximizing the likelihood: where $X=[x_1,x_2,\ldots ,x_n]$ is a sentence in a language (e.g., English), and $\theta $ denotes the parameters of the model. Each Transformer layer is composed of multi-head self-attention, layer normalization and a feed-forward network. We refer reader to the original paper for details of each component. Formally, the decoding probability is given by where $x_i$ denotes the token embedding, $p_i$ denote the positional embedding and $h_i$ denotes the output states of the $i$-th token, and $W_e$ and $W_o$ are the input and output embedding matrices. 
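The TLM equations referred to above were lost in extraction; a plausible reconstruction from the quantities defined in the text (the layer indexing and the exact form of the input combination are assumptions) is:
\[
\mathcal {L}(\theta ) = \sum _{i=1}^{n} \log P(x_i | x_{<i}; \theta ), \qquad P(x_{i+1} | x_{\le i}) = \mathrm {softmax}(W_o h_i),
\]
with the input at position $i$ formed from the token embedding (via $W_e$) plus the positional embedding $p_i$, and $h_i$ the top-layer Transformer state at position $i$.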
Although TLM is normally employed to model monolingual sequences, there is no barrier to utilize TLM to model sequences in multiple languages. In this paper, inspired by BIBREF9, we concatenate pairs of sentences from bilingual parallel corpora (e.g., English$\rightarrow $French) as training instances to the model. Let $X$ and $Y$ denote the parallel sentences in two different languages, the training objective becomes This bilingual language model can be regarded as the decoder-only model compared to the traditional encoder-decoder model. It has been proved to work effectively on monolingual text-to-text generation tasks such as summarization BIBREF10. The advantages of such architecture include less model parameters, easier optimization and potential better performance for longer sequences. Furthermore, it naturally integrates with language models pre-training on monolingual corpus. For each input sequence of concatenated sentences, we add special tokens $\langle $bos$\rangle $ and $\langle $eos$\rangle $ at the beginning and the end, and $\langle $delim$\rangle $ in between the sentences. Moreover, at the beginning of each sentence, we add a special token as its language identifier, for instance, $\langle $en$\rangle $ for English, $\langle $fr$\rangle $ for French. One example of English$\rightarrow $French training sequence is “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $ $\langle $fr$\rangle $ chat assis sur le tapis $\langle $eos$\rangle $". At inference stage, the model predicts the next word as the conventional auto-regressive model: <<</Transformer-based Language Model>>> <<<Zero-shot Paraphrase Generation>>> We train the bilingual language model on multiple bilingual corpora, for example, English$\leftrightarrow $French and German$\leftrightarrow $Chinese. Once the language model has been trained, we can conduct zero-shot paraphrase generation based on the model. Specifically, given an input sentence that is fed into the language model, we set the output language identifier the same as input, and then simply conduct decoding to generate paraphrases of the input sentence. Figure FIGREF2 illustrates the training and decoding process of our model. In the training stage, the model is trained to sequentially generate the input sentence and its translation in a specific language. Training is conducted in the way of teacher-forcing. In the decoding stage, after an English sentence “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $" is fed to the model, we intentionally set the output language identifier as “$\langle $en$\rangle $", in order to guide the model to continue to generate English words. At the same time, since the model has been trained on translation corpus, it implicitly learns to keep the semantic meaning of the output sentence the same as the input. Accordingly, the model will probably generate the paraphrases of the input sentence, such as “the cat sitting on the carpet $\langle $eos$\rangle $". It should be noted our model can obviously be trained on parallel paraphrase data without any modification. But in this paper, we will mainly focus on the research and evaluation in the zero-shot learning setting. In the preliminary experiments of zero-shot paraphrasing, we find the model does not perform consistently well and sometimes fails to generate the words in the correct language as indicated by the language identifier. 
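Before turning to that problem, here is a small sketch of how a concatenated bilingual training sequence in the format described above could be assembled; the function name and the assumption that the inputs are already tokenized are mine, not the paper's:

def build_training_sequence(src_tokens, tgt_tokens, src_lang="en", tgt_lang="fr"):
    # e.g. ["cat", "sat", "on", "the", "mat"], ["chat", "assis", "sur", "le", "tapis"]
    # -> ["<bos>", "<en>", "cat", ..., "<delim>", "<fr>", "chat", ..., "<eos>"]
    return (["<bos>", f"<{src_lang}>"] + src_tokens
            + ["<delim>", f"<{tgt_lang}>"] + tgt_tokens
            + ["<eos>"])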
Similar phenomenon has been observed in the research of zero-shot neural machine translation BIBREF11, BIBREF12, BIBREF13, which is referred as the degeneracy problem by BIBREF13. To address these problems in zero-shot paraphrase generation, we propose several techniques to improve the quality and diversity of the model as follows. <<<Language Embeddings>>> The language identifier prior to the sentence does not always guarantee the language of the sequences generated by the model. In order to keep the language consistency, we introduce language embeddings, where each language is assigned a specific vector representation. Supposing that the language embedding for the $i$-th token in a sentence is $a_i$, we concatenate the language embedding with the Transformer output states and feed it to the softmax layer for predicting each token: We empirically demonstrate that the language embedding added to each tokens can effectively guide the model to generate sentences in the required language. Note that we still let the model to learn the output distribution for each language rather than simply restricting the vocabularies of output space. This offers flexibility to handle coding switching cases commonly seen in real-world data, e.g., English words could also appear in French sentences. <<</Language Embeddings>>> <<<Pre-Training on Monolingual Corpora>>> Language model pre-training has shown its effectiveness in language generation tasks such as machine translation, text summarization and generative question answering BIBREF14, BIBREF15, BIBREF16. It is particularly helpful to the low/zero-resource tasks since the knowledge learned from large-scale monolingual corpus can be transferred to downstream tasks via the pre-training-then-fine-tuning approach. Since our model for paraphrase generation shares the same architecture as the language model, we are able to pre-train the model on massive monolingual data. Pre-training on monolingual data is conducted in the same way as training on parallel data, except that each training example contains only one sentence with the beginning/end of sequence tokens and the language identifier. The language embeddings are also employed. The pre-training objective is the same as Equation (DISPLAY_FORM4). In our experiments, we first pre-train the model on monolingual corpora of multiple languages respectively, and then fine-tune the model on parallel corpora. <<</Pre-Training on Monolingual Corpora>>> <<<Denoising Auto-Encoder>>> We adopt the idea of denoising auto-encoder (DAE) to further improve the robustness of our paraphrasing model. DAE is originally proposed to learn intermediate representations that are robust to partial corruption of the inputs in training auto-encoders BIBREF17. Specifically, the initial input $X$ is first partially corrupted as $\tilde{X}$, which can be treated as sampling from a noise distribution $\tilde{X}\sim {q(\tilde{X}|X)}$. Then, an auto-encoder is trained to recover the original $X$ from the noisy input $\tilde{X}$ by minimizing the reconstruction error. In the applications of text generation BIBREF18 and machine translation BIBREF19, DAE has shown to be able to learn representations that are more robust to input noises and also generalize to unseen examples. Inspired by BIBREF19, we directly inject three different types of noises into input sentence that are commonly encountered in real applications. 1) Deletion: We randomly delete 1% tokens from source sentences, for example, “cat sat on the mat $\mapsto $ cat on the mat." 
2) Insertion: We insert a random token into the source sentence at 1% of randomly chosen positions, for example, “cat sat on the mat $\mapsto $ cat sat on red the mat." 3) Reordering: We randomly swap 1% of the tokens in the source sentence, keeping the distance between swapped tokens within 5, for example, “cat sat on the mat $\mapsto $ mat sat on the cat." By introducing such noises into the input sentences while keeping the target sentences clean during training, our model becomes more stable in generating paraphrases and generalizes better to sentences unseen in the training corpus. The training objective with DAE is then to maximize the likelihood of the clean target sentence given the corrupted input. Once the model is trained, we generate paraphrases of a given sentence based on $P(Y|X;\theta )$. <<</Denoising Auto-Encoder>>> <<</Zero-shot Paraphrase Generation>>> <<</Methodology>>> <<<Experiments>>> <<<Datasets>>> We adopt the mixture of two multilingual translation corpora as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words for each language. OpenSubtitles is a corpus consisting of movie and TV subtitles, which contains 2.6B sentences over 60 languages. We select four shared languages of the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Sentences are tokenized by Wordpiece as in BERT. A multilingual vocabulary of 50K tokens is used. We randomly sample 10,000 sentences from each language pair for validation and another 10,000 for testing. The remaining data are used for training. For monolingual pre-training, we use the English Wikipedia corpus, which contains 2,500M words. <<</Datasets>>> <<<Experimental Settings>>> We implement our model in TensorFlow BIBREF22. The size of our Transformer model is identical to BERT-base BIBREF23. The model consists of 12 layers of Transformer blocks. The dimensions of the token embedding, position embedding and Transformer hidden states are 768, while that of the states in the position-wise feed-forward networks is 3072. The number of attention heads is 12. Models are trained using Adam optimization BIBREF24 with a learning rate of up to $1e-4$, $\beta _1=0.9$, $\beta _2=0.999$ and $L2$ weight decay of 0.01. We use a top-k truncated random sampling strategy for inference, which samples only from the k candidate words with the highest probabilities. Throughout our experiments, we train and evaluate two models for paraphrase generation: the bilingual model and the multilingual model. The bilingual models are trained only with English$\leftrightarrow $Chinese, while the multilingual models are trained with all the data among the four languages. The round-trip translation baseline is based on the Transformer-based neural translation model. <<</Experimental Settings>>> <<<Automatic Evaluation>>> We evaluate the relevance between the input and the generated paraphrase as well as the diversity among multiple generated paraphrases from the same input. For relevance, we use the cosine similarity between the sentential representations BIBREF25. Specifically, we use the Glove-840B embeddings BIBREF26 for word representation and Vector Extrema BIBREF25 for sentential representation. For generation diversity, we employ two evaluation metrics: Distinct-2 and inverse Self-BLEU (defined as: $1-$Self-BLEU) BIBREF27. Larger values of Distinct-2 and inverse Self-BLEU indicate higher diversity of the generation.
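The three corruption operations described at the start of this passage translate directly into code; a rough sketch is given below (the 1% rates and the swap-distance limit of 5 come from the text, while the vocabulary used for random insertion and all tie-handling details are assumptions):

import random

def corrupt(tokens, p=0.01, max_swap_dist=5, vocab=("the", "red", "cat")):
    out = list(tokens)
    # 1) deletion: drop roughly 1% of the tokens
    out = [t for t in out if random.random() > p]
    # 2) insertion: insert a random token at roughly 1% of the positions
    i = 0
    while i < len(out):
        if random.random() < p:
            out.insert(i, random.choice(vocab))
            i += 1
        i += 1
    # 3) reordering: swap roughly 1% of the tokens with a token at most 5 positions away
    for i in range(len(out)):
        if random.random() < p:
            j = min(len(out) - 1, i + random.randint(1, max_swap_dist))
            out[i], out[j] = out[j], out[i]
    return out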
For each model, we draw curves in Figure FIGREF15 with the aforementioned metrics as coordinates, and each data point is obtained at a specific sampling temperature. Since a good paraphrasing model should generate both relevant and diverse paraphrases, a model whose curve lies towards the upper-right corner is regarded as performing well. <<<Comparison with Baseline>>> First, we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), both the bilingual and the multilingual model are better than the baseline in terms of relevance and diversity in most cases. In other words, with the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrases that are more semantically similar to the input sentence. Note that in Figure FIGREF15 (a), there is a crossing point between the curve of the bilingual model and the baseline curve when relevance is around 0.71. We specifically investigated generated paraphrases around this point and found that the baseline actually achieves better relevance when Distinct-2 is at a high level ($>$0.3). This means our bilingual model drifts semantically faster than the baseline model as the Distinct-2 diversity increases. Round-trip translation performs two rounds of supervised translation, while zero-shot paraphrasing performs a single round of unsupervised `translation' (paraphrasing). We suspect that the unsupervised paraphrasing can be more sensitive to the decoding strategy. It also implies that the latent, language-agnostic representation may not be well learned in our bilingual model. Our multilingual model, on the other hand, alleviates this insufficiency. We further verify and analyze this as follows. <<</Comparison with Baseline>>> <<<Multilingual Models>>> As mentioned above, our bilingual model can be unstable in some cases due to the lack of a well-learned language-agnostic semantic representation. A natural remedy is to introduce a multilingual corpus, which covers various translation directions. Training over a multilingual corpus forces the model to decouple the language type from the semantic representation. Empirical results show that our multilingual model performs significantly better than the bilingual model. The red and blue curves in Figure FIGREF15 (a)(b) demonstrate a great improvement of our multilingual model over the bilingual model. In addition, the multilingual model also significantly outperforms the baseline in the settings with reasonable relevance scores. <<</Multilingual Models>>> <<<Monolingual Pre-Training>>> As shown in Figure FIGREF15 (a)(b), the model with language model pre-training performs almost identically to its counterpart without pre-training. However, evaluations of fluency uncover the value of pre-training. We evaluate a group of models over our test set in terms of fluency, using an n-gram language model trained on 14k public-domain books. As depicted in Table TABREF25, models with language model pre-training consistently achieve greater log-probabilities than the model without pre-training. In other words, language model pre-training brings better fluency. <<</Monolingual Pre-Training>>> <<</Automatic Evaluation>>> <<<Human Evaluation>>> 200 sentences are sampled from our test set for human evaluation. The human evaluation guidance generally follows that of BIBREF5 but with a compressed scoring range from [1, 5] to [1, 4]. We recruit five human annotators to evaluate the models in terms of semantic relevance and fluency.
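For reference, the two diversity metrics plotted in these curves can be sketched roughly as follows; this uses the usual definitions (unique-bigram ratio, and one minus the average BLEU of each output against the remaining outputs) together with NLTK's sentence_bleu, so it should be read as an approximation rather than the paper's exact evaluation code:

from nltk.translate.bleu_score import sentence_bleu

def distinct_2(sentences):
    # sentences: list of token lists; ratio of unique bigrams to total bigrams
    bigrams = [tuple(s[i:i + 2]) for s in sentences for i in range(len(s) - 1)]
    return len(set(bigrams)) / max(len(bigrams), 1)

def inverse_self_bleu(sentences):
    # 1 - Self-BLEU: each generation is scored against all the other generations
    scores = [sentence_bleu([r for j, r in enumerate(sentences) if j != i], s)
              for i, s in enumerate(sentences)]
    return 1.0 - sum(scores) / len(scores)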
A test example consists of one input sentence, one generated sentence from the baseline model and one generated sentence from our model. We randomly permute each pair of generated sentences to reduce annotators' bias toward a particular model. Each example is evaluated by two annotators. As shown in Table TABREF28, our method significantly outperforms the baseline in both relevance and fluency. We further calculate the agreement (Cohen's kappa) between the two annotators. Both round-trip translation and our method perform well in terms of fluency. However, the large relevance gap between the two systems drew our attention. We investigated the test set in detail and found that the round-trip approach indeed generates more noise, as shown in the case studies. <<</Human Evaluation>>> <<<Case Studies>>> We further study some generated cases from different models. All results in Table TABREF30 are generated over our test set using random sampling. For both the baseline and the multilingual model, we tune the sampling temperatures to hold Distinct-2 and inverse Self-BLEU at 0.31 and 0.47, respectively. In the case studies, we find that our method usually generates sentences with better relevance to the source inputs, while the round-trip translation method can sometimes run into serious semantic drift. In the second case, our model demonstrates a desirable feature: it keeps the meaning and even the proper noun $guide$ unchanged while modifying the source sentence by both changing and reordering words. This feature may be introduced by the DAE perturbation strategies, which improve the model's robustness and diversity simultaneously. These results provide evidence that our method outperforms the baseline in both relevance and diversity. <<</Case Studies>>> <<</Experiments>>> <<<Related Work>>> Generating paraphrases based on deep neural networks, especially Seq2Seq models, has become the mainstream approach. A majority of neural paraphrasing models try to improve generation quality and diversity with high-quality paraphrase corpora. BIBREF2 starts the deep learning line of paraphrase generation by introducing a stacked residual LSTM network. A word constraint model proposed by BIBREF3 improves both generation quality and diversity. BIBREF4 adopts a variational auto-encoder to further improve generation diversity. BIBREF5 utilize neural reinforcement learning and adversarial training to promote generation quality. BIBREF6 decompose paraphrase generation into phrase-level and sentence-level generation. Several works have tried to generate paraphrases from monolingual non-parallel or translation corpora. BIBREF28 exploits a Markov Network model to extract paraphrase tables from a monolingual corpus. BIBREF29, BIBREF30 and BIBREF31 create paraphrase corpora by clustering and aligning paraphrases from crawled articles or headlines. With parallel translation corpora, pivoting approaches such as round-trip translation BIBREF7 and back-translation BIBREF32 have been explored. However, to the best of our knowledge, none of these paraphrase generation models has been trained directly on parallel translation corpora as a single-round end-to-end model. <<</Related Work>>> <<<Conclusions>>> In this work, we have proposed a Transformer-based model for zero-shot paraphrase generation, which can leverage huge amount of off-the-shelf translation corpora. Moreover, we improve generation fluency of our model with language model pre-training.
Empirical results from both automatic and human evaluation demonstrate that our model surpasses the conventional pivoting approaches in terms of relevance, diversity, fluency and efficiency. Nevertheless, there are some interesting directions to be explored, for instance, how to obtain a better latent semantic representation with multi-modal data and how to further improve generation diversity without sacrificing relevance. We plan to tackle these challenging yet valuable problems in the future. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Introduction, Methodology" ], "type": "disordered_section" }
1911.03597
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Zero-Shot Paraphrase Generation with Multilingual Language Models <<<Abstract>>> Leveraging multilingual parallel texts to automatically generate paraphrases has drawn much attention as size of high-quality paraphrase corpus is limited. Round-trip translation, also known as the pivoting method, is a typical approach to this end. However, we notice that the pivoting process involves multiple machine translation models and is likely to incur semantic drift during the two-step translations. In this paper, inspired by the Transformer-based language models, we propose a simple and unified paraphrasing model, which is purely trained on multilingual parallel data and can conduct zero-shot paraphrase generation in one step. Compared with the pivoting approach, paraphrases generated by our model is more semantically similar to the input sentence. Moreover, since our model shares the same architecture as GPT (Radford et al., 2018), we are able to pre-train the model on large-scale unparallel corpus, which further improves the fluency of the output sentences. In addition, we introduce the mechanism of denoising auto-encoder (DAE) to improve diversity and robustness of the model. Experimental results show that our model surpasses the pivoting method in terms of relevance, diversity, fluency and efficiency. <<</Abstract>>> <<<Introduction>>> Paraphrasing is to express the same meaning using different expressions. Paraphrase generation plays an important role in various natural language processing (NLP) tasks such as response diversification in dialogue system, query reformulation in information retrieval, and data augmentation in machine translation. Recently, models based on Seq2Seq learning BIBREF1 have achieved the state-of-the-art results on paraphrase generation. Most of these models BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 focus on training the paraphrasing models based on a paraphrase corpus, which contains a number of pairs of paraphrases. However, high-quality paraphrases are usually difficult to acquire in practice, which becomes the major limitation of these methods. Therefore, we focus on zero-shot paraphrase generation approach in this paper, which aims to generate paraphrases without requiring a paraphrase corpus. A natural choice is to leverage the bilingual or multilingual parallel data used in machine translation, which are of great quantity and quality. The basic assumption is that if two sentences in one language (e.g., English) have the same translation in another language (e.g., French), they are assumed to have the same meaning, i.e., they are paraphrases of each other. Therefore, one typical solution for paraphrasing in one language is to pivot over a translation in another language. Specifically, it is implemented as the round-trip translation, where the input sentence is translated into a foreign sentence, then back-translated into a sentence in the same language as input BIBREF7. The process is shown in Figure FIGREF1. Apparently, two machine translation systems (English$\rightarrow $French and French$\leftarrow $English) are needed to conduct the generation of a paraphrase. 
Although the pivoting approach works in general, there are several intrinsic defects. First, the round-trip system can hardly explore all the paths of paraphrasing, since it is pivoted through the finite intermedia outputs of a translation system. More formally, let $Z$ denote the meaning representation of a sentence $X$, and finding paraphrases of $X$ can be treated as sampling another sentence $Y$ conditioning on the representation $Z$. Ideally, paraphrases should be generated by following $P(Y|X) = \int _{Z} P(Y|Z)P(Z|X)dZ$, which is marginalized over all possible values of $Z$. However, in the round-trip translation, only one or several $Z$s are sampled from the machine translation system $P(Z|X)$, which can lead to an inaccurate approximation of the whole distribution and is prone to the problem of semantic drift due to the sampling variances. Second, the results are determined by the pre-existing translation systems, and it is difficult to optimize the pipeline end-to-end. Last, the system is not efficient especially at the inference stage, because it needs two rounds of translation decoding. To address these issues, we propose a single-step zero-shot paraphrase generation model, which can be trained on machine translation corpora in an end-to-end fashion. Unlike the pivoting approach, our proposed model does not involve explicit translation between multiple languages. Instead, it directly learns the paraphrasing distribution $P(Y|X)$ from the parallel data sampled from $P(Z|X)$ and $P(Y|Z)$. Specifically, we build a Transformer-based BIBREF8 language model, which is trained on the concatenated bilingual parallel sentences with language indicators. At inference stage, given a input sentence in a particular language, the model is guided to generate sentences in the same language, which are deemed as paraphrases of the input. Our model is simple and compact, and can empirically reduce the risk of semantic drift to a large extent. Moreover, we can initialize our model with generative pre-training (GPT) BIBREF0 on monolingual data, which can benefit the generation in low-resource languages. Finally, we borrow the idea of denoising auto-encoder (DAE) to further enhance robustness in paraphrase generation. We conduct experiments on zero-shot paraphrase generation task, and find that the proposed model significantly outperforms the pivoting approach in terms of both automatic and human evaluations. Meanwhile, the training and inference cost are largely reduced compared to the pivot-based methods which involves multiple systems. <<</Introduction>>> <<<Methodology>>> <<<Transformer-based Language Model>>> Transformer-based language model (TLM) is a neural language model constructed with a stack of Transformer decoder layers BIBREF8. Given a sequence of tokens, TLM is trained with maximizing the likelihood: where $X=[x_1,x_2,\ldots ,x_n]$ is a sentence in a language (e.g., English), and $\theta $ denotes the parameters of the model. Each Transformer layer is composed of multi-head self-attention, layer normalization and a feed-forward network. We refer reader to the original paper for details of each component. Formally, the decoding probability is given by where $x_i$ denotes the token embedding, $p_i$ denote the positional embedding and $h_i$ denotes the output states of the $i$-th token, and $W_e$ and $W_o$ are the input and output embedding matrices. 
Although TLM is normally employed to model monolingual sequences, there is no barrier to utilize TLM to model sequences in multiple languages. In this paper, inspired by BIBREF9, we concatenate pairs of sentences from bilingual parallel corpora (e.g., English$\rightarrow $French) as training instances to the model. Let $X$ and $Y$ denote the parallel sentences in two different languages, the training objective becomes This bilingual language model can be regarded as the decoder-only model compared to the traditional encoder-decoder model. It has been proved to work effectively on monolingual text-to-text generation tasks such as summarization BIBREF10. The advantages of such architecture include less model parameters, easier optimization and potential better performance for longer sequences. Furthermore, it naturally integrates with language models pre-training on monolingual corpus. For each input sequence of concatenated sentences, we add special tokens $\langle $bos$\rangle $ and $\langle $eos$\rangle $ at the beginning and the end, and $\langle $delim$\rangle $ in between the sentences. Moreover, at the beginning of each sentence, we add a special token as its language identifier, for instance, $\langle $en$\rangle $ for English, $\langle $fr$\rangle $ for French. One example of English$\rightarrow $French training sequence is “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $ $\langle $fr$\rangle $ chat assis sur le tapis $\langle $eos$\rangle $". At inference stage, the model predicts the next word as the conventional auto-regressive model: <<</Transformer-based Language Model>>> <<<Zero-shot Paraphrase Generation>>> We train the bilingual language model on multiple bilingual corpora, for example, English$\leftrightarrow $French and German$\leftrightarrow $Chinese. Once the language model has been trained, we can conduct zero-shot paraphrase generation based on the model. Specifically, given an input sentence that is fed into the language model, we set the output language identifier the same as input, and then simply conduct decoding to generate paraphrases of the input sentence. Figure FIGREF2 illustrates the training and decoding process of our model. In the training stage, the model is trained to sequentially generate the input sentence and its translation in a specific language. Training is conducted in the way of teacher-forcing. In the decoding stage, after an English sentence “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $" is fed to the model, we intentionally set the output language identifier as “$\langle $en$\rangle $", in order to guide the model to continue to generate English words. At the same time, since the model has been trained on translation corpus, it implicitly learns to keep the semantic meaning of the output sentence the same as the input. Accordingly, the model will probably generate the paraphrases of the input sentence, such as “the cat sitting on the carpet $\langle $eos$\rangle $". It should be noted our model can obviously be trained on parallel paraphrase data without any modification. But in this paper, we will mainly focus on the research and evaluation in the zero-shot learning setting. In the preliminary experiments of zero-shot paraphrasing, we find the model does not perform consistently well and sometimes fails to generate the words in the correct language as indicated by the language identifier. 
Similar phenomenon has been observed in the research of zero-shot neural machine translation BIBREF11, BIBREF12, BIBREF13, which is referred as the degeneracy problem by BIBREF13. To address these problems in zero-shot paraphrase generation, we propose several techniques to improve the quality and diversity of the model as follows. <<<Language Embeddings>>> The language identifier prior to the sentence does not always guarantee the language of the sequences generated by the model. In order to keep the language consistency, we introduce language embeddings, where each language is assigned a specific vector representation. Supposing that the language embedding for the $i$-th token in a sentence is $a_i$, we concatenate the language embedding with the Transformer output states and feed it to the softmax layer for predicting each token: We empirically demonstrate that the language embedding added to each tokens can effectively guide the model to generate sentences in the required language. Note that we still let the model to learn the output distribution for each language rather than simply restricting the vocabularies of output space. This offers flexibility to handle coding switching cases commonly seen in real-world data, e.g., English words could also appear in French sentences. <<</Language Embeddings>>> <<<Pre-Training on Monolingual Corpora>>> Language model pre-training has shown its effectiveness in language generation tasks such as machine translation, text summarization and generative question answering BIBREF14, BIBREF15, BIBREF16. It is particularly helpful to the low/zero-resource tasks since the knowledge learned from large-scale monolingual corpus can be transferred to downstream tasks via the pre-training-then-fine-tuning approach. Since our model for paraphrase generation shares the same architecture as the language model, we are able to pre-train the model on massive monolingual data. Pre-training on monolingual data is conducted in the same way as training on parallel data, except that each training example contains only one sentence with the beginning/end of sequence tokens and the language identifier. The language embeddings are also employed. The pre-training objective is the same as Equation (DISPLAY_FORM4). In our experiments, we first pre-train the model on monolingual corpora of multiple languages respectively, and then fine-tune the model on parallel corpora. <<</Pre-Training on Monolingual Corpora>>> <<<Denoising Auto-Encoder>>> We adopt the idea of denoising auto-encoder (DAE) to further improve the robustness of our paraphrasing model. DAE is originally proposed to learn intermediate representations that are robust to partial corruption of the inputs in training auto-encoders BIBREF17. Specifically, the initial input $X$ is first partially corrupted as $\tilde{X}$, which can be treated as sampling from a noise distribution $\tilde{X}\sim {q(\tilde{X}|X)}$. Then, an auto-encoder is trained to recover the original $X$ from the noisy input $\tilde{X}$ by minimizing the reconstruction error. In the applications of text generation BIBREF18 and machine translation BIBREF19, DAE has shown to be able to learn representations that are more robust to input noises and also generalize to unseen examples. Inspired by BIBREF19, we directly inject three different types of noises into input sentence that are commonly encountered in real applications. 1) Deletion: We randomly delete 1% tokens from source sentences, for example, “cat sat on the mat $\mapsto $ cat on the mat." 
2) Insertion: We insert a random token into source sentences in 1% random positions, for example, “cat sat on the mat $\mapsto $ cat sat on red the mat." 3) Reordering: We randomly swap 1% tokens in source sentences, and keep the distance between tokens being swapped within 5. “cat sat on the mat $\mapsto $ mat sat on the cat." By introducing such noises into the input sentences while keeping the target sentences clean in training, our model can be more stable in generating paraphrases and generalisable to unseen sentences in the training corpus. The training objective with DAE becomes Once the model is trained, we generate paraphrases of a given sentence based on $P(Y|X;\theta )$. <<</Denoising Auto-Encoder>>> <<</Zero-shot Paraphrase Generation>>> <<</Methodology>>> <<<Experiments>>> <<<Datasets>>> We adopt the mixture of two multilingual translation corpus as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words for each language. OpenSubtitles is a corpus consisting of movie and TV subtitles, which contains 2.6B sentences over 60 languages. We select four shared languages of the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Sentences are tokenized by Wordpiece as in BERT. A multilingual vocabulary of 50K tokens is used. For validation and testing, we randomly sample 10000 sentences respectively from each language pair. The rest data are used for training. For monolingual pre-training, we use English Wikipedia corpus, which contains 2,500M words. <<</Datasets>>> <<<Experimental Settings>>> We implement our model in Tensorflow BIBREF22. The size of our Transformer model is identical to BERT-base BIBREF23. The model is constituted by 12 layers of Transformer blocks. Number of dimension of token embedding, position embedding and transformer hidden state are 768, while that of states in position-wise feed-forward networks are 3072. The number of attention heads is 12. Models are train using Adam optimization BIBREF24 with a learning rate up to $1e-4$, $\beta _1=0.9$, $\beta _2=0.999$ and $L2$ weight decay of 0.01. We use top-k truncated random sampling strategy for inference that only sample from k candidate words with highest probabilities. Throughout our experiments, we train and evaluate two models for paraphrase generation: the bilingual model and the multilingual model. The bilingual models are trained only with English$\leftrightarrow $Chinese, while the multilingual models are trained with all the data between the four languages. The round-trip translation baseline is based on the Transformer-based neural translation model. <<</Experimental Settings>>> <<<Automatic Evaluation>>> We evaluate the relevance between input and generated paraphrase as well as the diversity among multiple generated paraphrases from the same input. For relevance, we use the cosine similarity between the sentential representations BIBREF25. Specifically, we use the Glove-840B embeddings BIBREF26 for word representation and Vector Extrema BIBREF25 for sentential representation. For generation diversity, we employ two evaluation metrics: Distinct-2 and inverse Self-BLEU (defined as: $1-$Self-BLEU) BIBREF27. Larger values of Distinct-2 and inverse Self-BLEU indicate higher diversity of the generation. 
For each model, we draw curves in Figure FIGREF15 with the aforementioned metrics as coordinates, and each data-point is obtained at a specific sampling temperature. Since a good paraphrasing model should generate both relevant and diverse paraphrases, the model with curve lying towards the up-right corner is regarded as with good performance. <<<Comparison with Baseline>>> First we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), either the bilingual or the multilingual model is better than the baseline in terms of relevance and diversity in most cases. In other words, with the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrase with more semantically similarity to the input sentence. Note that in Figure FIGREF15 (a), there is a cross point between the curve of the bilingual model and the baseline curve when relevance is around 0.71. We particularly investigate generated paraphrases around this point and find that the baseline actually achieves better relevance when Distinct-2 is at a high level ($>$0.3). It means our bilingual model is semantically drifting faster than the baseline model as the Distinct-2 diversity increases. The round-trip translation performs two-round of supervised translations, while the zero-shot paraphrasing performs single-round unsupervised `translation' (paraphrasing). We suspect that the unsupervised paraphrasing can be more sensitive to the decoding strategy. It also implies the latent, language-agnostic representation may be not well learned in our bilingual model. While on the other hand, our multilingual model alleviate this insufficiency. We further verify and analyze it as follows. <<</Comparison with Baseline>>> <<<Multilingual Models>>> As mentioned above, our bilingual model can be unstable in some cases due to the lack of a well-learned language-agnostic semantic representation. A natural method is to introduce multilingual corpus, which consists of various translation directions. Training over multilingual corpus forces the model to decouple the language type and semantic representation. Empirical results shows that our multilingual model performs significantly better than the bilingual model. The red and blue curves in Figure FIGREF15 (a)(b) demonstrates a great improvement of our multilingual model over the bilingual model. In addition, the multilingual model also significantly outperforms the baseline in the setting with the reasonable relevance scores. <<</Multilingual Models>>> <<<Monolingual Pre-Training>>> As shown in Figure FIGREF15 (a)(b), the model with language model pre-training almost performs equally to its contemporary without pre-training. However, evaluations on fluency uncover the value of pre-training. We evaluate a group of models over our test set in terms of fluency, using a n-grams language model trained on 14k public domain books. As depicted in Table TABREF25, models with language model pre-training stably achieves greater log-probabilities than the model without pre-training. Namely, language model pre-training brings better fluency. <<</Monolingual Pre-Training>>> <<</Automatic Evaluation>>> <<<Human Evaluation>>> 200 sentences are sampled from our test set for human evaluation. The human evaluation guidance generally follows that of BIBREF5 but with a compressed scoring range from [1, 5] to [1, 4]. We recruit five human annotators to evaluate models in semantic relevance and fluency. 
A test example consists of one input sentence, one generated sentence from the baseline model and one generated sentence from our model. We randomly permute a pair of generated sentences to reduce annotators' bias towards a certain model. Each example is evaluated by two annotators. As shown in Table TABREF28, our method outperforms the baseline in both relevance and fluency significantly. We further calculate agreement (Cohen's kappa) between the two annotators. Both round-trip translation and our method perform well in terms of fluency. But the huge gap in relevance between the two systems drew our attention. We investigate the test set in detail and find that the round-trip approach indeed generates more noise, as shown in the case studies. <<</Human Evaluation>>> <<<Case Studies>>> We further study some generated cases from different models. All results in Table TABREF30 are generated over our test set using random sampling. For both the baseline and the multilingual model, we tune their sampling temperatures to control the Distinct-2 and the inverse Self-BLEU at 0.31 and 0.47 respectively. In the case studies, we find that our method usually generates sentences with better relevance to the source inputs, while the round-trip translation method can sometimes run into serious semantic drift. In the second case, our model demonstrates a desirable property: it keeps the meaning and even the proper noun $guide$ unchanged while modifying the source sentence by both changing and reordering words. This property may be introduced by the DAE perturbation strategies, which improve the model's robustness and diversity simultaneously. These results evidence that our method outperforms the baseline in both relevance and diversity. <<</Case Studies>>> <<</Experiments>>> <<<Related Work>>> Generating paraphrases based on deep neural networks, especially Seq2Seq models, has become the mainstream approach. A majority of neural paraphrasing models have tried to improve generation quality and diversity with high-quality paraphrase corpora. BIBREF2 starts a deep learning line of paraphrase generation by introducing a stacked residual LSTM network. A word constraint model proposed by BIBREF3 improves both generation quality and diversity. BIBREF4 adopts a variational auto-encoder to further improve generation diversity. BIBREF5 utilize neural reinforcement learning and adversarial training to promote generation quality. BIBREF6 decompose paraphrase generation into phrase-level and sentence-level. Several works have tried to generate paraphrases from monolingual non-parallel or translation corpora. BIBREF28 exploits a Markov Network model to extract paraphrase tables from a monolingual corpus. BIBREF29, BIBREF30 and BIBREF31 create paraphrase corpora through clustering and aligning paraphrases from crawled articles or headlines. With parallel translation corpora, pivoting approaches such as round-trip translation BIBREF7 and back-translation BIBREF32 have been explored. However, to the best of our knowledge, none of these paraphrase generation models has been trained directly from parallel translation corpora as a single-round end-to-end model. <<</Related Work>>> <<<Conclusions>>> In this work, we have proposed a Transformer-based model for zero-shot paraphrase generation, which can leverage a huge amount of off-the-shelf translation corpora. Moreover, we improve the generation fluency of our model with language model pre-training.
Empirical results from both automatic and human evaluation demonstrate that our model surpasses the conventional pivoting approaches in terms of relevance, diversity, fluency and efficiency. Nevertheless, there are some interesting directions to be explored, for instance, how to obtain a better latent semantic representation with multi-modal data and how to further improve generation diversity without sacrificing relevance. We plan to tackle these challenging yet valuable problems in the future. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Abstract, Methodology" ], "type": "disordered_section" }
2003.08132
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Gender Representation in Open Source Speech Resources <<<Abstract>>> With the rise of artificial intelligence (AI) and the growing use of deep-learning architectures, the question of ethics, transparency and fairness of AI systems has become a central concern within the research community. We address transparency and fairness in spoken language systems by proposing a study about gender representation in speech resources available through the Open Speech and Language Resource platform. We show that finding gender information in open source corpora is not straightforward and that gender balance depends on other corpus characteristics (elicited/non elicited speech, low/high resource language, speech task targeted). The paper ends with recommendations about metadata and gender information for researchers in order to assure better transparency of the speech systems built using such corpora. <<</Abstract>>> <<<>>> 1.1em <<</>>> <<<Introduction>>> The ever growing use of machine learning has put data at the center of the industrial and research spheres. Indeed, for a system to learn how to associate an input X to an output Y, many paired examples are needed to learn this mapping process. This need for data coupled with the improvement in computing power and algorithm efficiency has led to the era of big data. But data is not only needed in mass, but also with a certain level of quality. In this paper we argue that one of the main quality of data is its transparency. In recent years, concerns have been raised about the biases existing in the systems. A well-known case in Natural Language Processing (NLP) is the example of word embeddings, with the studies of bolukbasi2016man and caliskan2017semantics which showed that data are socially constructed and hence encapsulate a handful of social representations and power structures, such as gender stereotypes. Gender-bias has also been found in machine translation tasks BIBREF0, as well as facial recognition BIBREF1 and is now at the center of research debates. In previous work, we investigated the impact of gender imbalance in training data on the performance of an automatic speech recognition (ASR) system, showing that the under-representation of women led to a performance bias of the system for female speakers BIBREF2. In this paper, we survey the gender representation within an open platform gathering speech and language resources to develop speech processing tools. The aim of this survey is twofold: firstly, we investigate the gender balance within speech corpora in terms of speaker representation but also in terms of speech time available for each gender category. Secondly we propose a reflection about general practices when releasing resources, basing ourselves on some recommendations from previous work. Contributions. 
The contributions of our work are the following: an exploration of 66 different speech corpora in terms of gender, showing that gender balance is achieved in terms of speakers in elicited corpora, but that it is not the case for non-elicited speech, nor for the speech time allocated to each gender category an assessment of the global lack of meta-data within free open source corpora, alongside recommendations and guidelines for resources descriptions, based on previous work <<</Introduction>>> <<<OpenSLR>>> Open Speech Language Resources (OpenSLR) is a platform created by Daniel Povey. It provides a central hub to gather open speech and language resources, allowing them to be accessed and downloaded freely. OpenSLR currently hosts 83 resources. These resources consist of speech recordings with transcriptions but also of softwares as well as lexicons and textual data for language modeling. As resources are costly to produce, they are most of the time a paying service. Therefore it is hard to study gender representation at scale. We thus focus on the corpora available on OpenSLR due to their free access and to the fact that OpenSLR is explicitly made to help develop speech systems (mostly ASR but also text-to-speech (TTS) systems). In our work, we focus on speech data only. Out of the 83 resources gathered on the platform, we recorded 53 speech resources. We did not take into account multiple releases of the same corpora but only kept the last version (e.g. TED LIUM BIBREF3) and we also removed subsets of bigger corpora (e.g. LibriTTS corpus BIBREF4). We make the distinction between a resource and a corpus, as each resource can contain several languages (e.g. Vystadial korvas2014) or several accent/dialect of a same language (e.g. the crowdsourced high-quality UK and Ireland English Dialect speech data set googleuken2019). In our terminology, we define a corpus as monolingual and monodialectal, so resources containing different dialects or languages will be considered as containing different corpora. We ended up with 66 corpora, in 33 different languages with 51 dialect/accent variations. The variety is also great in terms of speech types (elicited and read speech, broadcast news, TEDTalks, meetings, phonecalls, audiobooks, etc.), which is not suprising, given the many different actors who contributed to this platform. We consider this sample to be of reasonable size to tackle the question of gender representation in speech corpora. OpenSLR also constitutes a good indicator of general practice as it does not expect a defined format nor does have explicit requirements about data structures, hence attesting of what metadata resources creators consider important to share when releasing resources for free on the Web. <<</OpenSLR>>> <<<Methodology>>> In order to study gender representation within speech resources, let us start by defining what gender is. In this work, we consider gender as a binary category (male and female speakers). Nevertheless, we are aware that gender as an identity also exists outside of these two categories, but we did not find any mention of non-binary speakers within the corpora surveyed in our study. Following work by doukhan2018open, we wanted to explore the corpora looking at the number of speakers of each gender category as well as their speech duration, considering both variables as good features to account for gender representation. After the download, we manually extracted information about gender representation in each corpus. 
<<<Speaker Information and Lack of Meta-Data>>> The first difficulty we came across was the general absence of information. As gender in technology is a relatively recent research interest, most of the time gender demographics are not made available by the resources creators. So, on top of the further-mentioned general corpus characteristics (see Section SECREF11), we also report in our final table where the gender information was found and whether it was provided in the first place or not. The provided attribute corresponds to whether gender info was given somewhere, and the found_in attribute corresponds to where we extracted the gender demographics from. The different modalities are paper, if a paper was explicitly cited along the resource, metadata if a metadata file was included, indexed if the gender was explicitly indexed within data or if data was structured in terms of gender and manually if the gender information are the results of a manual research made by ourselves, trying to either find a paper describing the resources, or by relying on regularities that seems like speaker ID and listening to the recordings. We acknowledge that this last method has some methodological shortcomings: we relied on our perceptual stereotypes to distinguish male from female speakers, most of the time for languages we have no knowledge of, but considering the global lack of data, we used it when corpora were small enough in order to increase our sample size. <<</Speaker Information and Lack of Meta-Data>>> <<<Speech Time Information and Data Consistency>>> The second difficulty regards the fact that speech time information are not standardised, making impossible to obtain speech time for individual speakers or gender categories. When speech time information is provided, the statistics given do not all refer to the same measurements. Some authors report speech duration in hours e.g. panayotov2015librispeech,hernandez2018ted, some the number of utterances (e.g BIBREF5) or sentences (e.g. googleuken2019), the definition of these two terms never being clearly defined. We gathered all information available, meaning that our final table contains some empty cells, and we found that there was no consistency between speech duration and number of utterances, excluding the possibility to approximate one by the other. As a result, we decided to rely on the size of the corpora as a (rough) approximation of the amount of speech data available, the text files representing a small proportion of the resources size. This method however has drawbacks as not all corpora used the same file format, nor the same sampling rate. Sampling rate has been provided as well in the final table, but we decided to rely on qualitative categories, a corpus being considered small if its size is under 5GB, medium if it is between 5 and 50GB and large if above. <<</Speech Time Information and Data Consistency>>> <<<Corpora Characteristics>>> The final result consists of a table reporting all the characteristics of the corpora. 
The chosen features are the following: the resource identifier (id) as defined on OpenSLR the language (lang) the dialect or accent if specified (dial) the total number of speakers as well as the number of male and female speakers (#spk, #spk_m, #spk_f) the total number of utterances as well as the total number of utterances for male and female speakers (#utt, #utt_m, #utt_f) the total duration, or speech time, as well as the duration for male and female speakers (dur, dur_m, dur_f) the size of the resource in gigabytes (sizeGB) as well as a qualitative label (size, taking its value between “big", “medium", “small") the sampling rate (sampling) the speech task targeted for the resource (task) is it elicited speech or not: we define as non-elicited speech data which would have existed without the creation of the resources (e.g TedTalks, audiobooks, etc.), other speech data are considered as elicited the language status (lang_status): a language is considered either as high- or low-resourced. The language status is defined from a technological point of view (i.e. are there resources or NLP systems available for this language?). It is fixed at the language granularity (hence the name), regardless of the dialect or accent (if provided). the year of the release (year) the authors of the resource (producer) <<</Corpora Characteristics>>> <<</Methodology>>> <<<Analysis>>> <<<Gender Information Availability>>> Before diving into the gender analysis, we report the number of corpora for which gender information was provided. Indeed, 36.4% of the corpora do not give any gender information regarding the speakers. Moreover, almost 20% of the corpora do not provide any speaker information whatsoever. Table sums up the number of corpora for which speaker's gender information was provided and if it was, where it was found. We first looked at the metadata file if available. If no metadata was provided, we searched whether gender was indexed within the data structure. At last, if we still could not find anything, we looked for a paper describing the data set. This search pipeline results in ordered levels for our found_in category, meaning papers might also be available for corpora with the “metadata" or “indexed" modalities. When gender information was given it was most of the time in terms of number of speakers in each gender categories, as only five corpora provide speech time for each category. Table reports what type of information was provided in terms of gender, in the subset of the 42 corpora containing gender information. We observe that gender information is easier to find when it regards the number of speakers, than when it accounts for the quantity of data available for each gender group. Due to this lack of data, we did not study the speech time per gender category as intended, but we relied on utterance count when available. It is worth noticing however, that we did not find any consistency between speech time and number of utterances, so such results must be taken with caution. Out of the 42 corpora providing gender information, 41 reported speaker counts for each gender category. We manually gathered speaker gender information for 7 more corpora, as explained in the previous section, reaching a final sample size of 47 corpora. 
<<</Gender Information Availability>>> <<<Gender Distribution Among Speakers>>> <<<Elicited vs Non-Elicited Data>>> Generally, when gender demographics are provided, we observe the following distribution: out of the 6,072 speakers, 3,050 are women and 3,022 are men, so parity is almost achieved. We then look at whether data was elicited or not, non-elicited speech being speech that would have existed without the corpus creation such as TEDTalks, interviews, radio broadcast and so on. We assume that if data was not elicited, gender imbalance might emerge. Indeed, non-elicited data often comes from the media, and it has been shown, that women are under-represented in this type of data BIBREF6. This disparity of gender representation in French media BIBREF7, BIBREF8 precisely led us to the present survey. Our expectations are reinforced by examples such as the resource of Spanish TEDTalks, which states in its description regarding the speakers that “most of them are men" mena2019. We report results in Table . In both cases (respectively elicited and non-elicited speech), gender difference is relatively small (respectively 5.6 percentage points and 5.8 points), far from the 30 percentage points difference observed in BIBREF2. A possible explanation is that either elicited or not, corpora are the result of a controlled process, so gender disparity will be reduced as much as possible by the corpus authors. However, we notice that, apart from Librispeech BIBREF9, all the non-elicited corpora are small corpora. When removing Librispeech from the analysis, we observe a 1/3-2/3 female to male ratio, coherent with our previous findings. This can be explained by the care put by the creators of the Librispeech data set to "[ensure] a gender balance at the speaker level and in terms of the amount of data available for each gender" BIBREF9, while general gender disparity is observed in smaller corpora. What emerges from these results is that when data sets are not elicited or carefully balanced, gender disparity creeps in. This gender imbalance is not observed at the scale of the entire OpenSLR platform, due to the fact that most of the corpora are elicited (89.1%). Hence, the existence of such gender gap is prevented by a careful control during the data set creation process. <<</Elicited vs Non-Elicited Data>>> <<<High-resource vs Low-resource Languages>>> In the elicited corpora made available on OpenSLR, some are of low-resource languages other high-resource languages (mostly regional variation of high-resources languages). When looking at gender in these elicited corpora, we do not observe a difference depending on the language status. However, we can notice that high-resource corpora contain twice as many speakers, all low-resource language corpora being small corpora. <<</High-resource vs Low-resource Languages>>> <<<“How Can I Help?": Spoken Language Tasks>>> Speech corpora are built in order to train systems, most of the time ASR or TTS ones. We carry out our gender analysis taking into account the task addressed and obtain the results reported in Table . We observe that if gender representation is almost balanced within ASR corpora, women are better represented in TTS-oriented data sets. This can be related to the UN report of recommendation for gender-equal digital education stating that nowadays, most of the vocal assistants are given female voices which raises educational and societal problems BIBREF10. 
This gendered design of vocal assistants is sometimes justified by relying on gender stereotypes such as “female voices are perceived as more helpful, sympathetic or pleasant." Since TTS systems are often used to create such assistants, we can assume that using female voices has become general practice to ensure the adoption of the system by the users. This claim can however be nuanced by nass2005wired who showed that other factors might be worth taking into account to design gendered voices, such as social identification and cultural gender stereotypes. <<</“How Can I Help?": Spoken Language Tasks>>> <<</Gender Distribution Among Speakers>>> <<<Speech Time and Gender>>> Due to a global lack of speech time information, we did not analyse the amount of data available per speaker category. However, utterance counts were often reported, or easily found within the corpora. We gathered utterance counts for a total of 32 corpora. We observe that while gender balance is almost achieved in terms of number of speakers, at the utterance level, male speech is more represented. But this disparity is only the effect of three corpora containing 51,463 and 26,567 korvas2014 and 8376 mena2019 utterances for male speakers, while the mean number of utterances per corpus is respectively 1942 for male speakers and 1983 for female speakers. Removing these three outliers, we observe that the utterance count is balanced between gender categories. It is worth noticing that the high number of utterances in the outliers is surprising considering that these three corpora are small (2.1GB, 2.8GB) and medium (5.2GB). This highlights the problem of the notion of utterance, which is never explicitly defined. Such differences in granularity thus prevent comparison between corpora. <<</Speech Time and Gender>>> <<<Evolution over Time>>> When collecting data, we noticed that the more recent the resources, the easier it was to find gender information, attesting to the emergence of gender in technology as a relevant topic. As pointed out by Kate crawford2017nips in her NeurIPS keynote talk, fairness in AI has recently become a huge part of the research effort in AI and machine learning. As a result, methodology papers have been published, with for example the work of bender2018data, for NLP data and systems, encouraging the community towards rich and explicit data statements. Figure FIGREF34 shows the evolution of gender information availability in the last 10 years. We can see that this peak of interest is also present in our data, with more resources provided with gender information after 2017. <<</Evolution over Time>>> <<</Analysis>>> <<<Recommendations>>> The social impact of big data and the ethical problems raised by NLP systems have already been discussed by previous work. wilkinson2016fair developed principles for scientific data management and stewardship, the FAIR Data Principles, based on four foundational data characteristics that are Findability, Accessibility, Interoperability and Reusability BIBREF11. In our case, findability and accessibility are taken into account by design, resources on OpenSLR being freely accessible. Interoperability and Reusability of data are however not yet achieved. Another attempt to integrate this discussion about data description within the NLP community has been made by COUILLAULT14.424, who proposed an Ethics and Big Data Charter, to help resource creators describe data from a legal and ethical point of view.
hovy2016social highlighted the different social implications of NLP systems, such as exclusion, overgeneralisation and exposure problems. More recently, work by bender2018data proposed the notion of data statement to ensure data transparency. The common point of all these studies is that information is key. The FAIR Principles are a baseline to guarantee the reproducibility of scientific findings. We need data to be described exhaustively in order to acknowledge demographic bias that may exist within our corpora. As pointed out by hovy2016social, language is always situated and so are language resources. This demographic bias in itself will always exist, but by not mentioning it in the data description we might create tools and systems that will have negative impacts on society. The authors presented the notion of exclusion as a demographic misrepresentation leading to exclusion of certain groups in the use of a technology, due to the fact that this technology fail to take them into account during its developing process. This directly relates to our work on ASR performance on women speech, and we can assume that this can be extended to other speaker characteristics, such as accent or age. To prevent such collateral consequences of NLP systems, bender2018data advocated the use of data statement, as a professional and research practice. We hope the present study will encourage researchers and resources creators to describe exhaustively their data sets, following the guidelines proposed by these authors. <<<On the Importance of Meta-Data>>> The first take-away of our survey is that obtaining an exhaustive description of the speakers within speech resources is not straightforward. This lack of meta-data is a problem in itself as it prevents guaranteeing the generalisability of systems or linguistics findings based on these corpora, as pointed out by bender2018data. As they rightly highlighted in their paper, the problem is also an ethical one as we have no way of controlling the existence of representation disparity in data. And this disparity may lead to bias in our systems. We observed that most of the speech resources available contain elicited speech and that on average, researchers are careful as to balance the speakers in terms of gender when crafting data. But this cannot be said about corpora containing non-elicited speech. And apart from Librispeech, we observed a general gender imbalance, which can lead to a performance decrease on female speech BIBREF2. Speech time measurements are not consistent throughout our panel of resources and utterance counts are not reliable. We gathered the size of the corpora as well as the sampling rate in order to estimate the amount of speech time available, but variation in terms of precision, bit-rate, encoding and containers prevent us from reaching reliable results. Yet, speech time information enables us to know the quantity of data available for each category and this directly impacts the systems. This information is now given in papers such as the one describing the latest version of TEDLIUM, as this information is paramount for speaker adaptation. bender2018data proposed to provide the following information alongside corpus releases: curation rationale, language variety, speaker demographic, annotator demographic, speech situation, text characteristics, recording quality and others. Information we can add to their recommendations relates to the duration of the data sets in hours or minutes, globally and per speaker and/or gender category. 
This could allow to quickly check the gender balance in terms of quantity of data available for each category, without relying on an unreliable notion of utterance. This descriptive work is of importance for the future corpora, but should also be made for the data sets already released as they are likely to be used again by the community. <<</On the Importance of Meta-Data>>> <<<Transparency in Evaluation>>> Word Error Rate (WER) is usually computed as the sum of the errors made on the test data set divided by the total number of words. But if such an evaluation allows for an easy comparison of the systems, it fails to acknowledge for their performance variations. In our survey, 13 of the 66 corpora had a paper describing the resources. When the paper reported ASR results, none of them reported gendered evaluation even if gender information about the data was provided. Reporting results for different categories is the most straightforward way to check for performance bias or overfitting behaviours. Providing data statements is a first step towards, but for an open and fair science, the next step should be to also take into account such information in the evaluation process. A recent work in this direction has been made by mitchell2019model who proposed to describe model performance in model cards, thus encouraging a transparent report of model results. <<</Transparency in Evaluation>>> <<</Recommendations>>> <<<Conclusion>>> In our gender survey of the corpora available on the OpenSLR platform, we observe the following trends: parity is globally achieved on the whole, but interactions with other corpus characteristics reveal that gender misrepresentation needs more than just a number of speakers to be identified. In non-elicited data (meaning type of speech that would have existed without the creation of the corpus, such as TEDTalks or radio broadcast), we found that, except in Librispeech where gender balance is controlled, men are more represented than women. It also seems that most of the corpora aimed at developing TTS systems contain mostly female voices, maybe due to the stereotype associating female voice with caring activities. We also observe that gender description of data has been taken into account by the community, with an increased number of corpora provided with gender meta-data in the last two years. Our sample containing only 66 corpora, we acknowledge that our results cannot necessarily be extended to all language resources, however it allows us to open discussion about general corpus description practices, pointing out a lack of meta-data and to actualise the discourse around the social implications of NLP systems. We advocate for a more open science and technology by following guidelines such as the FAIR Data Principle or providing data statements, in order to ensure scientific generalisation and interoperability while preventing social harm. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Methodology, Introduction" ], "type": "disordered_section" }
2003.08132
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Gender Representation in Open Source Speech Resources <<<Abstract>>> With the rise of artificial intelligence (AI) and the growing use of deep-learning architectures, the question of ethics, transparency and fairness of AI systems has become a central concern within the research community. We address transparency and fairness in spoken language systems by proposing a study about gender representation in speech resources available through the Open Speech and Language Resource platform. We show that finding gender information in open source corpora is not straightforward and that gender balance depends on other corpus characteristics (elicited/non elicited speech, low/high resource language, speech task targeted). The paper ends with recommendations about metadata and gender information for researchers in order to assure better transparency of the speech systems built using such corpora. <<</Abstract>>> <<<>>> 1.1em <<</>>> <<<Introduction>>> The ever growing use of machine learning has put data at the center of the industrial and research spheres. Indeed, for a system to learn how to associate an input X to an output Y, many paired examples are needed to learn this mapping process. This need for data coupled with the improvement in computing power and algorithm efficiency has led to the era of big data. But data is not only needed in mass, but also with a certain level of quality. In this paper we argue that one of the main quality of data is its transparency. In recent years, concerns have been raised about the biases existing in the systems. A well-known case in Natural Language Processing (NLP) is the example of word embeddings, with the studies of bolukbasi2016man and caliskan2017semantics which showed that data are socially constructed and hence encapsulate a handful of social representations and power structures, such as gender stereotypes. Gender-bias has also been found in machine translation tasks BIBREF0, as well as facial recognition BIBREF1 and is now at the center of research debates. In previous work, we investigated the impact of gender imbalance in training data on the performance of an automatic speech recognition (ASR) system, showing that the under-representation of women led to a performance bias of the system for female speakers BIBREF2. In this paper, we survey the gender representation within an open platform gathering speech and language resources to develop speech processing tools. The aim of this survey is twofold: firstly, we investigate the gender balance within speech corpora in terms of speaker representation but also in terms of speech time available for each gender category. Secondly we propose a reflection about general practices when releasing resources, basing ourselves on some recommendations from previous work. Contributions. 
The contributions of our work are the following: an exploration of 66 different speech corpora in terms of gender, showing that gender balance is achieved in terms of speakers in elicited corpora, but that it is not the case for non-elicited speech, nor for the speech time allocated to each gender category an assessment of the global lack of meta-data within free open source corpora, alongside recommendations and guidelines for resources descriptions, based on previous work <<</Introduction>>> <<<OpenSLR>>> Open Speech Language Resources (OpenSLR) is a platform created by Daniel Povey. It provides a central hub to gather open speech and language resources, allowing them to be accessed and downloaded freely. OpenSLR currently hosts 83 resources. These resources consist of speech recordings with transcriptions but also of softwares as well as lexicons and textual data for language modeling. As resources are costly to produce, they are most of the time a paying service. Therefore it is hard to study gender representation at scale. We thus focus on the corpora available on OpenSLR due to their free access and to the fact that OpenSLR is explicitly made to help develop speech systems (mostly ASR but also text-to-speech (TTS) systems). In our work, we focus on speech data only. Out of the 83 resources gathered on the platform, we recorded 53 speech resources. We did not take into account multiple releases of the same corpora but only kept the last version (e.g. TED LIUM BIBREF3) and we also removed subsets of bigger corpora (e.g. LibriTTS corpus BIBREF4). We make the distinction between a resource and a corpus, as each resource can contain several languages (e.g. Vystadial korvas2014) or several accent/dialect of a same language (e.g. the crowdsourced high-quality UK and Ireland English Dialect speech data set googleuken2019). In our terminology, we define a corpus as monolingual and monodialectal, so resources containing different dialects or languages will be considered as containing different corpora. We ended up with 66 corpora, in 33 different languages with 51 dialect/accent variations. The variety is also great in terms of speech types (elicited and read speech, broadcast news, TEDTalks, meetings, phonecalls, audiobooks, etc.), which is not suprising, given the many different actors who contributed to this platform. We consider this sample to be of reasonable size to tackle the question of gender representation in speech corpora. OpenSLR also constitutes a good indicator of general practice as it does not expect a defined format nor does have explicit requirements about data structures, hence attesting of what metadata resources creators consider important to share when releasing resources for free on the Web. <<</OpenSLR>>> <<<Methodology>>> In order to study gender representation within speech resources, let us start by defining what gender is. In this work, we consider gender as a binary category (male and female speakers). Nevertheless, we are aware that gender as an identity also exists outside of these two categories, but we did not find any mention of non-binary speakers within the corpora surveyed in our study. Following work by doukhan2018open, we wanted to explore the corpora looking at the number of speakers of each gender category as well as their speech duration, considering both variables as good features to account for gender representation. After the download, we manually extracted information about gender representation in each corpus. 
<<<Speaker Information and Lack of Meta-Data>>> The first difficulty we came across was the general absence of information. As gender in technology is a relatively recent research interest, most of the time gender demographics are not made available by the resources creators. So, on top of the further-mentioned general corpus characteristics (see Section SECREF11), we also report in our final table where the gender information was found and whether it was provided in the first place or not. The provided attribute corresponds to whether gender info was given somewhere, and the found_in attribute corresponds to where we extracted the gender demographics from. The different modalities are paper, if a paper was explicitly cited along the resource, metadata if a metadata file was included, indexed if the gender was explicitly indexed within data or if data was structured in terms of gender and manually if the gender information are the results of a manual research made by ourselves, trying to either find a paper describing the resources, or by relying on regularities that seems like speaker ID and listening to the recordings. We acknowledge that this last method has some methodological shortcomings: we relied on our perceptual stereotypes to distinguish male from female speakers, most of the time for languages we have no knowledge of, but considering the global lack of data, we used it when corpora were small enough in order to increase our sample size. <<</Speaker Information and Lack of Meta-Data>>> <<<Speech Time Information and Data Consistency>>> The second difficulty regards the fact that speech time information are not standardised, making impossible to obtain speech time for individual speakers or gender categories. When speech time information is provided, the statistics given do not all refer to the same measurements. Some authors report speech duration in hours e.g. panayotov2015librispeech,hernandez2018ted, some the number of utterances (e.g BIBREF5) or sentences (e.g. googleuken2019), the definition of these two terms never being clearly defined. We gathered all information available, meaning that our final table contains some empty cells, and we found that there was no consistency between speech duration and number of utterances, excluding the possibility to approximate one by the other. As a result, we decided to rely on the size of the corpora as a (rough) approximation of the amount of speech data available, the text files representing a small proportion of the resources size. This method however has drawbacks as not all corpora used the same file format, nor the same sampling rate. Sampling rate has been provided as well in the final table, but we decided to rely on qualitative categories, a corpus being considered small if its size is under 5GB, medium if it is between 5 and 50GB and large if above. <<</Speech Time Information and Data Consistency>>> <<<Corpora Characteristics>>> The final result consists of a table reporting all the characteristics of the corpora. 
The chosen features are the following: the resource identifier (id) as defined on OpenSLR the language (lang) the dialect or accent if specified (dial) the total number of speakers as well as the number of male and female speakers (#spk, #spk_m, #spk_f) the total number of utterances as well as the total number of utterances for male and female speakers (#utt, #utt_m, #utt_f) the total duration, or speech time, as well as the duration for male and female speakers (dur, dur_m, dur_f) the size of the resource in gigabytes (sizeGB) as well as a qualitative label (size, taking its value between “big", “medium", “small") the sampling rate (sampling) the speech task targeted for the resource (task) is it elicited speech or not: we define as non-elicited speech data which would have existed without the creation of the resources (e.g TedTalks, audiobooks, etc.), other speech data are considered as elicited the language status (lang_status): a language is considered either as high- or low-resourced. The language status is defined from a technological point of view (i.e. are there resources or NLP systems available for this language?). It is fixed at the language granularity (hence the name), regardless of the dialect or accent (if provided). the year of the release (year) the authors of the resource (producer) <<</Corpora Characteristics>>> <<</Methodology>>> <<<Analysis>>> <<<Gender Information Availability>>> Before diving into the gender analysis, we report the number of corpora for which gender information was provided. Indeed, 36.4% of the corpora do not give any gender information regarding the speakers. Moreover, almost 20% of the corpora do not provide any speaker information whatsoever. Table sums up the number of corpora for which speaker's gender information was provided and if it was, where it was found. We first looked at the metadata file if available. If no metadata was provided, we searched whether gender was indexed within the data structure. At last, if we still could not find anything, we looked for a paper describing the data set. This search pipeline results in ordered levels for our found_in category, meaning papers might also be available for corpora with the “metadata" or “indexed" modalities. When gender information was given it was most of the time in terms of number of speakers in each gender categories, as only five corpora provide speech time for each category. Table reports what type of information was provided in terms of gender, in the subset of the 42 corpora containing gender information. We observe that gender information is easier to find when it regards the number of speakers, than when it accounts for the quantity of data available for each gender group. Due to this lack of data, we did not study the speech time per gender category as intended, but we relied on utterance count when available. It is worth noticing however, that we did not find any consistency between speech time and number of utterances, so such results must be taken with caution. Out of the 42 corpora providing gender information, 41 reported speaker counts for each gender category. We manually gathered speaker gender information for 7 more corpora, as explained in the previous section, reaching a final sample size of 47 corpora. 
<<</Gender Information Availability>>> <<<Gender Distribution Among Speakers>>> <<<Elicited vs Non-Elicited Data>>> Generally, when gender demographics are provided, we observe the following distribution: out of the 6,072 speakers, 3,050 are women and 3,022 are men, so parity is almost achieved. We then look at whether data was elicited or not, non-elicited speech being speech that would have existed without the corpus creation such as TEDTalks, interviews, radio broadcast and so on. We assume that if data was not elicited, gender imbalance might emerge. Indeed, non-elicited data often comes from the media, and it has been shown, that women are under-represented in this type of data BIBREF6. This disparity of gender representation in French media BIBREF7, BIBREF8 precisely led us to the present survey. Our expectations are reinforced by examples such as the resource of Spanish TEDTalks, which states in its description regarding the speakers that “most of them are men" mena2019. We report results in Table . In both cases (respectively elicited and non-elicited speech), gender difference is relatively small (respectively 5.6 percentage points and 5.8 points), far from the 30 percentage points difference observed in BIBREF2. A possible explanation is that either elicited or not, corpora are the result of a controlled process, so gender disparity will be reduced as much as possible by the corpus authors. However, we notice that, apart from Librispeech BIBREF9, all the non-elicited corpora are small corpora. When removing Librispeech from the analysis, we observe a 1/3-2/3 female to male ratio, coherent with our previous findings. This can be explained by the care put by the creators of the Librispeech data set to "[ensure] a gender balance at the speaker level and in terms of the amount of data available for each gender" BIBREF9, while general gender disparity is observed in smaller corpora. What emerges from these results is that when data sets are not elicited or carefully balanced, gender disparity creeps in. This gender imbalance is not observed at the scale of the entire OpenSLR platform, due to the fact that most of the corpora are elicited (89.1%). Hence, the existence of such gender gap is prevented by a careful control during the data set creation process. <<</Elicited vs Non-Elicited Data>>> <<<High-resource vs Low-resource Languages>>> In the elicited corpora made available on OpenSLR, some are of low-resource languages other high-resource languages (mostly regional variation of high-resources languages). When looking at gender in these elicited corpora, we do not observe a difference depending on the language status. However, we can notice that high-resource corpora contain twice as many speakers, all low-resource language corpora being small corpora. <<</High-resource vs Low-resource Languages>>> <<<“How Can I Help?": Spoken Language Tasks>>> Speech corpora are built in order to train systems, most of the time ASR or TTS ones. We carry out our gender analysis taking into account the task addressed and obtain the results reported in Table . We observe that if gender representation is almost balanced within ASR corpora, women are better represented in TTS-oriented data sets. This can be related to the UN report of recommendation for gender-equal digital education stating that nowadays, most of the vocal assistants are given female voices which raises educational and societal problems BIBREF10. 
This gendered design of vocal assistants is sometimes justified by relying on gender stereotypes such as “female voices are perceived as more helpful, sympathetic or pleasant." Since TTS systems are often used to create such assistants, we can assume that using female voices has become general practice to ensure the adoption of the system by the users. This claim can however be nuanced by nass2005wired who showed that other factors might be worth taking into account to design gendered voices, such as social identification and cultural gender stereotypes. <<</“How Can I Help?": Spoken Language Tasks>>> <<</Gender Distribution Among Speakers>>> <<<Speech Time and Gender>>> Due to a global lack of speech time information, we did not analyse the amount of data available per speaker category. However, utterance counts were often reported, or easily found within the corpora. We gathered utterance counts for a total of 32 corpora. We observe that while gender balance is almost achieved in terms of number of speakers, at the utterance level, male speech is more represented. But this disparity is only the effect of three corpora containing 51,463 and 26,567 korvas2014 and 8376 mena2019 utterances for male speakers, while the mean number of utterances per corpus is respectively 1942 for male speakers and 1983 for female speakers. Removing these three outliers, we observe that the utterance count is balanced between gender categories. It is worth noticing that the high number of utterances in the outliers is surprising considering that these three corpora are small (2.1GB, 2.8GB) and medium (5.2GB). This highlights the problem of the notion of utterance, which is never explicitly defined. Such differences in granularity thus prevent comparison between corpora. <<</Speech Time and Gender>>> <<<Evolution over Time>>> When collecting data, we noticed that the more recent the resources, the easier it was to find gender information, attesting to the emergence of gender in technology as a relevant topic. As pointed out by Kate crawford2017nips in her NeurIPS keynote talk, fairness in AI has recently become a huge part of the research effort in AI and machine learning. As a result, methodology papers have been published, with for example the work of bender2018data, for NLP data and systems, encouraging the community towards rich and explicit data statements. Figure FIGREF34 shows the evolution of gender information availability in the last 10 years. We can see that this peak of interest is also present in our data, with more resources provided with gender information after 2017. <<</Evolution over Time>>> <<</Analysis>>> <<<Recommendations>>> The social impact of big data and the ethical problems raised by NLP systems have already been discussed by previous work. wilkinson2016fair developed principles for scientific data management and stewardship, the FAIR Data Principles, based on four foundational data characteristics that are Findability, Accessibility, Interoperability and Reusability BIBREF11. In our case, findability and accessibility are taken into account by design, resources on OpenSLR being freely accessible. Interoperability and Reusability of data are however not yet achieved. Another attempt to integrate this discussion about data description within the NLP community has been made by COUILLAULT14.424, who proposed an Ethics and Big Data Charter, to help resource creators describe data from a legal and ethical point of view.
hovy2016social highlighted the different social implications of NLP systems, such as exclusion, overgeneralisation and exposure problems. More recently, work by bender2018data proposed the notion of data statement to ensure data transparency. The common point of all these studies is that information is key. The FAIR Principles are a baseline to guarantee the reproducibility of scientific findings. We need data to be described exhaustively in order to acknowledge demographic bias that may exist within our corpora. As pointed out by hovy2016social, language is always situated and so are language resources. This demographic bias in itself will always exist, but by not mentioning it in the data description we might create tools and systems that will have negative impacts on society. The authors presented the notion of exclusion as a demographic misrepresentation leading to exclusion of certain groups in the use of a technology, due to the fact that this technology fail to take them into account during its developing process. This directly relates to our work on ASR performance on women speech, and we can assume that this can be extended to other speaker characteristics, such as accent or age. To prevent such collateral consequences of NLP systems, bender2018data advocated the use of data statement, as a professional and research practice. We hope the present study will encourage researchers and resources creators to describe exhaustively their data sets, following the guidelines proposed by these authors. <<<On the Importance of Meta-Data>>> The first take-away of our survey is that obtaining an exhaustive description of the speakers within speech resources is not straightforward. This lack of meta-data is a problem in itself as it prevents guaranteeing the generalisability of systems or linguistics findings based on these corpora, as pointed out by bender2018data. As they rightly highlighted in their paper, the problem is also an ethical one as we have no way of controlling the existence of representation disparity in data. And this disparity may lead to bias in our systems. We observed that most of the speech resources available contain elicited speech and that on average, researchers are careful as to balance the speakers in terms of gender when crafting data. But this cannot be said about corpora containing non-elicited speech. And apart from Librispeech, we observed a general gender imbalance, which can lead to a performance decrease on female speech BIBREF2. Speech time measurements are not consistent throughout our panel of resources and utterance counts are not reliable. We gathered the size of the corpora as well as the sampling rate in order to estimate the amount of speech time available, but variation in terms of precision, bit-rate, encoding and containers prevent us from reaching reliable results. Yet, speech time information enables us to know the quantity of data available for each category and this directly impacts the systems. This information is now given in papers such as the one describing the latest version of TEDLIUM, as this information is paramount for speaker adaptation. bender2018data proposed to provide the following information alongside corpus releases: curation rationale, language variety, speaker demographic, annotator demographic, speech situation, text characteristics, recording quality and others. Information we can add to their recommendations relates to the duration of the data sets in hours or minutes, globally and per speaker and/or gender category. 
This could allow to quickly check the gender balance in terms of quantity of data available for each category, without relying on an unreliable notion of utterance. This descriptive work is of importance for the future corpora, but should also be made for the data sets already released as they are likely to be used again by the community. <<</On the Importance of Meta-Data>>> <<<Transparency in Evaluation>>> Word Error Rate (WER) is usually computed as the sum of the errors made on the test data set divided by the total number of words. But if such an evaluation allows for an easy comparison of the systems, it fails to acknowledge for their performance variations. In our survey, 13 of the 66 corpora had a paper describing the resources. When the paper reported ASR results, none of them reported gendered evaluation even if gender information about the data was provided. Reporting results for different categories is the most straightforward way to check for performance bias or overfitting behaviours. Providing data statements is a first step towards, but for an open and fair science, the next step should be to also take into account such information in the evaluation process. A recent work in this direction has been made by mitchell2019model who proposed to describe model performance in model cards, thus encouraging a transparent report of model results. <<</Transparency in Evaluation>>> <<</Recommendations>>> <<<Conclusion>>> In our gender survey of the corpora available on the OpenSLR platform, we observe the following trends: parity is globally achieved on the whole, but interactions with other corpus characteristics reveal that gender misrepresentation needs more than just a number of speakers to be identified. In non-elicited data (meaning type of speech that would have existed without the creation of the corpus, such as TEDTalks or radio broadcast), we found that, except in Librispeech where gender balance is controlled, men are more represented than women. It also seems that most of the corpora aimed at developing TTS systems contain mostly female voices, maybe due to the stereotype associating female voice with caring activities. We also observe that gender description of data has been taken into account by the community, with an increased number of corpora provided with gender meta-data in the last two years. Our sample containing only 66 corpora, we acknowledge that our results cannot necessarily be extended to all language resources, however it allows us to open discussion about general corpus description practices, pointing out a lack of meta-data and to actualise the discourse around the social implications of NLP systems. We advocate for a more open science and technology by following guidelines such as the FAIR Data Principle or providing data statements, in order to ensure scientific generalisation and interoperability while preventing social harm. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Conclusion" ], "type": "disordered_section" }
2001.02380
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> A Neural Approach to Discourse Relation Signal Detection <<<Abstract>>> Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result' has focused on the relative frequencies of signal words within and outside text from each discourse relation. Such approaches do not allow us to quantify the signaling strength of individual instances of a signal on a scale (e.g. more or less discourse-relevant instances of 'and'), to assess the distribution of ambiguity for signals, or to identify words that hinder discourse relation identification in context ('anti-signals' or 'distractors'). In this paper we present a data-driven approach to signal detection using a distantly supervised neural network and develop a metric, {\Delta}s (or 'delta-softmax'), to quantify signaling strength. Ranging between -1 and 1 and relying on recent advances in contextualized words embeddings, the metric represents each word's positive or negative contribution to the identifiability of a relation in specific instances in context. Based on an English corpus annotated for discourse relations using Rhetorical Structure Theory and signal type annotations anchored to specific tokens, our analysis examines the reliability of the metric, the places where it overlaps with and differs from human judgments, and the implications for identifying features that neural models may need in order to perform better on automatic discourse relation classification. <<</Abstract>>> <<<Introduction>>> The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3). . [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.] . [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.] . 
[not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$ The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12. At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15). In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structural Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. 
For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time. Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals. In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'. In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research. <<</Introduction>>> <<<Previous Work>>> <<<Data-driven Approaches>>> A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank. This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. 
as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations. Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance. Finally, we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable. <<</Data-driven Approaches>>> <<<Discourse Relation Signal Annotations>>> Discourse relation signals are broadly classified into two categories: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most signals are anchorable, since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that several relations, such as preparation and background, are often signaled but unanchored: these are high-level discourse relations that capture genre features, such as the question-answer layout of interviews, and are thus rarely anchored to tokens.
The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions. Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used. Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next. <<</Discourse Relation Signal Annotations>>> <<</Previous Work>>> <<<Data>>> <<<Anchored Signals in the GUM Corpus>>> In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. 
Our choice to use a multi-genre RST-annotated corpus rather than PDTB, which also contains discourse relation signal annotation to a large extent, is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data. The signal-annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type. The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels: signal class, denoting the signal's degree of complexity; signal type, indicating the linguistic system to which it belongs; and specific signal, which gives the most fine-grained subtypes of signals within each type. It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels. The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified.
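To make this data model concrete, the following minimal sketch shows one possible in-memory representation of anchored signal annotations; the field names, label strings and token indices are illustrative assumptions for this example and do not reflect the corpus's actual serialization format.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Signal:
    # One signal annotation: class/type/subtype plus anchored token indices.
    signal_class: str                     # e.g. "single", "combined", "unsure"
    signal_type: str                      # e.g. "dm", "semantic", "graphical"
    specific_signal: str                  # e.g. "lexical chain", "semicolon"
    token_indices: List[int] = field(default_factory=list)  # empty if unanchorable

@dataclass
class RelationInstance:
    # A discourse relation with an unbounded number (0-n) of associated signals.
    relation: str                         # e.g. "joint", "purpose"
    satellite_tokens: List[int]
    nucleus_tokens: List[int]
    signals: List[Signal] = field(default_factory=list)

# Hypothetical entry in the spirit of example SECREF7: a joint relation signaled by
# a combined semantic+syntactic signal and by a semicolon (indices are invented).
rel = RelationInstance(
    relation="joint",
    satellite_tokens=list(range(0, 12)),
    nucleus_tokens=list(range(12, 25)),
    signals=[
        Signal("combined", "semantic+syntactic",
               "parallel syntactic construction + lexical chain", [1, 13]),
        Signal("single", "graphical", "semicolon", [11]),
    ],
)
# The same token may belong to several signals, and a signal may anchor to no tokens.
print(len(rel.signals), sum(len(s.token_indices) for s in rel.signals))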
According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below. In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words. In order to get a better sense of how the annotations work, we consider example SECREF7. . [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. – joint [GUM_academic_discrimination] In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations. In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. 
associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper. <<</Anchored Signals in the GUM Corpus>>> <<<A Taxonomy of Anchored Signals>>> From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession, are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features. At the highest level, we have the distinction between anchorable and non-anchorable signals, the latter being signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. However, several further distinctions may be drawn: (1) whether the signal appears before or after the relation in text order; since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token; (2) whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit; this distinction only matters for satellite or nucleus subtrees that consist of more than one unit; and (3) whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure. Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV).
Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain. The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify. <<</A Taxonomy of Anchored Signals>>> <<</Data>>> <<<Automatic Signal Extraction>>> <<<A Contextless Frequentist Approach>>> To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus. More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation. 
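As an illustration of the kind of ratio-of-occurrence tabulation behind Table TABREF27, the minimal sketch below ranks word types by how exclusive they are to a relation's head EDUs, with an optional minimum-frequency cutoff of the sort discussed next; the input format (token list plus relation label per head EDU) and the toy data are assumptions made for this example rather than the exact procedure used to build the table.

from collections import Counter, defaultdict

def distinctive_types(edus, min_freq=1, top_n=5):
    # edus: iterable of (tokens, relation) pairs, one per head EDU.
    # Rank word types per relation by freq(word, relation) / freq(word overall),
    # ignoring words rarer than min_freq in the whole collection.
    total = Counter()
    by_rel = defaultdict(Counter)
    for tokens, rel in edus:
        for tok in tokens:
            tok = tok.lower()
            total[tok] += 1
            by_rel[rel][tok] += 1
    ranked = {}
    for rel, counts in by_rel.items():
        scores = {w: counts[w] / total[w] for w in counts if total[w] >= min_freq}
        ranked[rel] = sorted(scores, key=scores.get, reverse=True)[:top_n]
    return ranked

# Toy data: with no cutoff, rare content words dominate the ranking;
# with min_freq=2, the frequent marker "if" surfaces for condition.
toy = [
    (["if", "you", "work", "for", "a", "company"], "condition"),
    (["if", "it", "rains", "tomorrow"], "condition"),
    (["the", "holiest", "shrine", "is", "nearby"], "circumstance"),
    (["you", "work", "hard", "every", "day"], "elaboration"),
]
print(distinctive_types(toy, min_freq=1))
print(distinctive_types(toy, min_freq=2))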
If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right. Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be. Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation: . [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$ . [Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$ These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph. <<</A Contextless Frequentist Approach>>> <<<A Contextualized Neural Model>>> <<<Task and Model Architecture>>> Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals. Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. 
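The exact configuration we use (FLAIR's sentence classifier over stacked GloVe, FLAIR and character embeddings) is described below; purely as a self-contained illustration of this distantly supervised setup, a generic sketch of a biLSTM relation classifier over a satellite-nucleus token pair might look like the following in PyTorch, where the embedding layer, dimensions and marker handling are stand-ins rather than the actual FLAIR implementation.

import torch
import torch.nn as nn

class PairRelationClassifier(nn.Module):
    # biLSTM over a satellite+nucleus token sequence -> distribution over relations.
    def __init__(self, vocab_size, emb_dim=300, hidden=256, n_relations=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # stand-in for GloVe/FLAIR stacks
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_relations)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len), including ids for the <s>, <sep> and <n> markers
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)                     # final hidden states, both directions
        pooled = torch.cat([h_n[0], h_n[1]], dim=-1)     # [forward; backward]
        return self.out(pooled)                          # logits; softmax gives P(rel_i)

# Toy usage: two EDU pairs of length 12 over a placeholder vocabulary of 100 ids.
model = PairRelationClassifier(vocab_size=100)
logits = model(torch.randint(0, 100, (2, 12)))
probs = torch.softmax(logits, dim=-1)                    # (2, 20) distribution over relations
print(probs.shape)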
A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below. As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30. Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training. Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word ${x_1, \ldots , x_T}$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation: $P(rel_i \mid x_1, \ldots , x_T) = \mathrm {softmax}_i\big (W \, [h^{f}(x_{0 \ldots t}, c_t^{f}) ; h^{b}(x_{0 \ldots t}, c_t^{b})] + b\big )$, where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \ldots t$, $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction and $\theta = \lbrace W,b\rbrace $ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation. In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29. . $<$s$>$ Sometimes this information is available , $<$sep$>$ but usually not .
$<$n$>$ Label: concession In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels. <<</Task and Model Architecture>>> <<<Relation Classification Performance>>> Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled explicitly by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although). However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses). Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with fewer relations and some easy cases that are absent from GUM. Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect.
Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further. <<</Relation Classification Performance>>> <<<Signaling Metric>>> The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem: . [RGB]230, 230, 230To [RGB]53, 53, 53provide [RGB]165, 165, 165information [RGB]179, 179, 179on [RGB]175, 175, 175the [RGB]160, 160, 160analytical [RGB]157, 157, 157sample [RGB]187, 187, 187as [RGB]170, 170, 170a [RGB]168, 168, 168whole [RGB]207, 207, 207, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]168, 168, 168two [RGB]170, 170, 170additional [RGB]164, 164, 164demographic [RGB]175, 175, 175variables [RGB]182, 182, 182are [RGB]165, 165, 165included [RGB]230, 230, 230. Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow. Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples in the model evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal. To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36. . Original: $<$s$>$ To provide information ... $<$sep$>$ ... $<$n$>$ Masked1: $<$s$>$ $<$X$>$ provide information ... $<$sep$>$ ... $<$n$>$ Masked2: $<$s$>$ To $<$X$>$ information ... $<$sep$>$ ... $<$n$>$ Masked3: $<$s$>$ To provide $<$X$>$ ... $<$sep$>$ ... $<$n$>$ Label: purpose We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as: ${\Delta }_s(t_i) = P(rel \mid X_{mask=\phi }) - P(rel \mid X_{mask=i})$, where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set). To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker. Some examples of this visualization are given in SECREF36-SECREF36 (human-annotated endocentric signal tokens are marked by double underlines). . [RGB]61, 61, 61To [RGB]112, 112, 112provide [RGB]205, 205, 205information [RGB]230, 230, 230on [RGB]230, 230, 230the [RGB]230, 230, 230analytical [RGB]230, 230, 230sample [RGB]230, 230, 230as [RGB]230, 230, 230a [RGB]230, 230, 230whole [RGB]230, 230, 230, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]230, 230, 230two [RGB]183, 183, 183additional [RGB]230, 230, 230demographic [RGB]230, 230, 230variables [RGB]94, 94, 94are [RGB]194, 194, 194included [RGB]163, 163, 163. . [RGB]230, 230, 230Telling [RGB]230, 230, 230good [RGB]230, 230, 230jokes [RGB]230, 230, 230is [RGB]230, 230, 230an [RGB]230, 230, 230art [RGB]230, 230, 230that [RGB]230, 230, 230comes [RGB]230, 230, 230naturally [RGB]230, 230, 230to [RGB]230, 230, 230some [RGB]211, 211, 211people [RGB]135, 135, 135, $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ [RGB]21, 21, 21but [RGB]209, 209, 209for [RGB]207, 207, 207others [RGB]230, 230, 230it [RGB]217, 217, 217takes [RGB]230, 230, 230practice [RGB]230, 230, 230and [RGB]189, 189, 189hard [RGB]230, 230, 230work [RGB]230, 230, 230. .
[RGB]230, 230, 230It [RGB]230, 230, 230is [RGB]230, 230, 230possible [RGB]230, 230, 230that [RGB]230, 230, 230these [RGB]230, 230, 230two [RGB]230, 230, 230children [RGB]230, 230, 230understood [RGB]230, 230, 230the [RGB]230, 230, 230task [RGB]230, 230, 230and [RGB]230, 230, 230really [RGB]230, 230, 230did [RGB]230, 230, 230believe [RGB]230, 230, 230that [RGB]230, 230, 230the [RGB]230, 230, 230puppet [RGB]230, 230, 230did [RGB]230, 230, 230not [RGB]230, 230, 230produce [RGB]230, 230, 230any [RGB]230, 230, 230poor [RGB]230, 230, 230descriptions [RGB]230, 230, 230, [RGB]230, 230, 230and [RGB]230, 230, 230in [RGB]230, 230, 230this [RGB]230, 230, 230regard [RGB]230, 230, 230, [RGB]230, 230, 230are [RGB]230, 230, 230not [RGB]230, 230, 230yet [RGB]230, 230, 230adult-like [RGB]230, 230, 230in [RGB]230, 230, 230their [RGB]230, 230, 230SI [RGB]230, 230, 230interpretations [RGB]230, 230, 230. $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230This [RGB]230, 230, 230is [RGB]41, 41, 41unlikely The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall. In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6). Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. 
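To make the masking procedure explicit, the sketch below shows one way ${\Delta }_s$ could be computed by re-querying a trained classifier on masked copies of an EDU pair; the predict_proba wrapper, the mask symbol and the toy model are placeholder assumptions for illustration, not the actual implementation.

def delta_softmax(tokens, gold_rel, predict_proba, mask_token="<X>",
                  separators=("<s>", "<sep>", "<n>")):
    # For each maskable token, return the drop in P(gold_rel) when that token is
    # replaced by mask_token. predict_proba(tokens) -> {relation: probability}
    # is assumed to wrap the trained relation classifier.
    base = predict_proba(tokens)[gold_rel]
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in separators:              # separator symbols are never masked
            continue
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        scores[i] = base - predict_proba(masked)[gold_rel]
    return scores                          # higher = stronger signal; negative = distractor

# Toy stand-in for a trained model: pretends "but" is the only cue for contrast.
def fake_predict_proba(tokens):
    p = 0.9 if "but" in tokens else 0.2
    return {"contrast": p, "joint": 1.0 - p}

pair = ["<s>", "some", "people", "are", "naturals", "<sep>", "but", "others", "practice", "<n>"]
print(delta_softmax(pair, "contrast", fake_predict_proba))
# Masking "but" yields a drop of 0.7; every other token yields 0.0 under this toy model.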
These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next. <<</Signaling Metric>>> <<</A Contextualized Neural Model>>> <<</Automatic Signal Extraction>>> <<<Evaluation and Error Analysis>>> <<<Evaluation Metric>>> To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength). The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals. The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible. For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines. The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. 
The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16. Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline. A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline. Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses. <<</Evaluation Metric>>> <<<Qualitative Analysis>>> Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). 
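Before turning to further examples, the following minimal sketch spells out the recall@k computation used in the quantitative evaluation above, assuming that per-token ${\Delta }_s$ scores and gold signal token indices are available for each EDU pair; the input format is an assumption made for this illustration.

def recall_at_k(instances, k):
    # instances: list of (scores, gold_indices) per EDU pair, where scores maps
    # token index -> delta-softmax and gold_indices is the set of human-flagged
    # signal tokens. A pair counts as a hit if any gold token is among the k
    # highest-scoring tokens; pairs without gold signals are skipped.
    hits, evaluated = 0, 0
    for scores, gold in instances:
        if not gold:
            continue
        evaluated += 1
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        if any(i in gold for i in top_k):
            hits += 1
    return hits / evaluated if evaluated else 0.0

# Toy example: in the first pair the top-ranked token is a gold signal,
# in the second the gold token is only ranked second.
data = [
    ({0: 0.66, 1: 0.05, 2: 0.18}, {0}),
    ({0: 0.10, 1: 0.30, 2: 0.40}, {1}),
]
print(recall_at_k(data, 1))   # 0.5
print(recall_at_k(data, 2))   # 1.0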
It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should, and does, still correlate with human judgments, even when the correct relation is only the model's second or third ranked choice.

. For the present analysis, these responses were recoded into nine mutually exclusive categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ capturing the following options:

. Professor Eastman said he is alarmed by what they found. $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ "Pregnant women in Australia are getting about half as much as what they require on a daily basis.

. Even so, estimates of the prevalence of perceived discrimination remains rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ At least one prior study by Kessler and colleagues [15], however, using measures of perceived discrimination in a large American sample, reported that approximately 33% of respondents reported some form of discrimination

Unsurprisingly, the model sometimes makes sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high-scoring words.
However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the beneficiary of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators.

. The agreement was that Gorbachev agreed to a quite remarkable concession: $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ he agreed to let a united Germany join the NATO military alliance.

. The opening of the joke — or setup — should have a basis in the real world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ so your audience can relate to it,

In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead:

. Which previous Virginia Governor(s) do you most admire and why? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ Thomas Jefferson.

From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence-final why, by contrast, were noticed by annotators but are not as unambiguous (the former could be a determiner, and the latter in sentence-final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues.
Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (the distractors scoring ${\Delta }_s<$-0.2 are pointed out in the discussion below).

. How do they treat those not like themselves? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ then they're either over-zealous, ignorant of other people or what to avoid those that contradict their fantasy land that caters to them and them only.

. God, I don't know! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ but nobody will go to fight for noses any more.

In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast. In SECREF43, a preparatory "God, I don't know!" is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness of the model, namely its inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another.
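Both signals and distractors fall out of the same masking computation, since ${\Delta }_s$ is the signed change in the correct relation's probability when a word is removed or masked. The following is a minimal sketch of that idea, not the authors' implementation: predict_proba stands in for a hypothetical interface that returns the classifier's probability distribution over relation labels for a tokenized EDU pair, and the <unk> mask token is likewise an assumption.

def delta_softmax(predict_proba, tokens, gold_relation, mask="<unk>"):
    # delta-s per token: p(gold | full input) - p(gold | token i masked);
    # positive values behave as signals, negative values as distractors
    base = predict_proba(tokens)[gold_relation]
    scores = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        scores.append(base - predict_proba(masked)[gold_relation])
    return scores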
<<</Qualitative Analysis>>> <<<Performance on Signal Types>>> To better understand the kinds of signals which the model captures more or less successfully, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type. Three of the top four categories which the model performs best for are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, in which both sides of a pair of similar words are actually noticed, both of which belong to the same stem (decline/declining):

. The report says the decline in iodine intake appears to be due to changes in the dairy industry, where chlorine-containing sanitisers have replaced iodine-containing sanitisers. $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ Iodine released from these chemicals into milk has been the major source of dietary iodine in Australia for at least four decades, but is now declining.
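The per-type scores in Table TABREF45 can be approximated by a procedure of the following shape. This is a sketch with hypothetical data structures, not the evaluation script itself: each instance pairs the per-token ${\Delta }_s$ scores of an EDU pair with a mapping from gold signal token indices to their annotated signal types, and a gold token counts as recognized if it falls within the top (number of gold signal tokens + 2) tokens by ${\Delta }_s$.

def per_type_recall(instances, tolerance=2):
    # instances: list of (scores, gold) pairs; gold: {token index: set of signal types}
    hits, totals = {}, {}
    for scores, gold in instances:
        k = len(gold) + tolerance
        top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        for idx, types in gold.items():
            for signal_type in types:
                totals[signal_type] = totals.get(signal_type, 0) + 1
                if idx in top:
                    hits[signal_type] = hits.get(signal_type, 0) + 1
    return {t: hits.get(t, 0) / totals[t] for t in totals}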
We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in "the problem is"), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the words the and is, which were all flagged by a human annotator as signals in the data). Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. the past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting that they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them. Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference), are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44.

. On a new website, "The Internet Explorer 6 Countdown", Microsoft has launched an aggressive campaign to persuade users to stop using IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ Its goal is to decrease IE6 users to less than one percent.

Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the relatively high score of the following is), is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast). Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date.
. NASA celebrates 30th anniversary of first shuttle launch; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ Wednesday, April 13, 2011

<<</Performance on Signal Types>>> <<</Evaluation and Error Analysis>>> <<<Discussion>>> This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for context-dependent and gradient discourse signal detection in individual texts. The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work. The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine-grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types. Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results. To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47.
The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their box plots indicates that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions. For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. "they wanted to see if..."). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal "a chance to go", which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high-frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to their overall high string frequency and low specificity. Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals, which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests? Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals.
While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25). Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, is still unavailable to the classifier – if such cues were being used, then ${\Delta }_s$ would reflect the effects of their removal, but this is largely not the case. This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically informed features such as verb information, polarity tags, context, and lexical items (e.g. first and last words of the arguments; first three words in the sentence). The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial number of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals. In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distantly supervised techniques for learning discourse relations (e.g. BIBREF55) is promising for the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information.
We also hope to see applications that rely on discourse relations, such as machine comprehension BIBREF20 and sentiment analysis BIBREF55, benefit from the proposed model architecture as well as the dataset. <<</Discussion>>> <<</Title>>>
{ "references": [ "Introduction, Discussion" ], "type": "disordered_section" }
2001.02380
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> A Neural Approach to Discourse Relation Signal Detection <<<Abstract>>> Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result' has focused on the relative frequencies of signal words within and outside text from each discourse relation. Such approaches do not allow us to quantify the signaling strength of individual instances of a signal on a scale (e.g. more or less discourse-relevant instances of 'and'), to assess the distribution of ambiguity for signals, or to identify words that hinder discourse relation identification in context ('anti-signals' or 'distractors'). In this paper we present a data-driven approach to signal detection using a distantly supervised neural network and develop a metric, {\Delta}s (or 'delta-softmax'), to quantify signaling strength. Ranging between -1 and 1 and relying on recent advances in contextualized words embeddings, the metric represents each word's positive or negative contribution to the identifiability of a relation in specific instances in context. Based on an English corpus annotated for discourse relations using Rhetorical Structure Theory and signal type annotations anchored to specific tokens, our analysis examines the reliability of the metric, the places where it overlaps with and differs from human judgments, and the implications for identifying features that neural models may need in order to perform better on automatic discourse relation classification. <<</Abstract>>> <<<Introduction>>> The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3). . [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.] . [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.] . 
[not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$ The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12. At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15). In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structural Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. 
For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time. Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals. In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'. In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research. <<</Introduction>>> <<<Previous Work>>> <<<Data-driven Approaches>>> A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank. This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. 
as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations. Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance. Finally we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable. <<</Data-driven Approaches>>> <<<Discourse Relation Signal Annotations>>> Discourse relation signals are broadly classified into two categorizes: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most of the signals are anchorable since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that there are several signaled but unanchored relations such as preparation and background since they are high-level discourse relations that capture and correspond to genre features such as interview layout in interviews where the conversation is constructed as a question-answer scheme, and are thus rarely anchored to tokens. 
The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions. Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used. Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next. <<</Discourse Relation Signal Annotations>>> <<</Previous Work>>> <<<Data>>> <<<Anchored Signals in the GUM Corpus>>> In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. 
Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data. The signal annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type. The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels: signal class, denoting the signal's degree of complexity signal type, indicating the linguistic system to which it belongs specific signal, which gives the most fine-grained subtypes of signals within each type It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels. The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified. 
According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below. In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words. In order to get a better sense of how the annotations work, we consider example SECREF7. . [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. – joint [GUM_academic_discrimination] In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations. In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. 
associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper. <<</Anchored Signals in the GUM Corpus>>> <<<A Taxonomy of Anchored Signals>>> From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features. At the highest level, we have the distinction between anchorable and non-anchorable signals, i.e. signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. However, several further distinctions may be drawn: Whether the signal appears before or after the relation in text order; since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token Whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit; this distinction only matters for satellite or nucleus subtrees that consist of more than one unit Whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV). 
Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain. The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify. <<</A Taxonomy of Anchored Signals>>> <<</Data>>> <<<Automatic Signal Extraction>>> <<<A Contextless Frequentist Approach>>> To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus. More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation. 
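The kind of tabulation behind Table TABREF27 can be sketched as follows. This is an illustration rather than the authors' code: head EDUs are assumed to be available as (token list, relation label) pairs, and the optional min_freq parameter anticipates the frequency threshold discussed next.

from collections import Counter

def distinctive_types(head_edus, relation, min_freq=1, top_n=10):
    # rank lexical types by their frequency inside head EDUs of `relation`
    # relative to their overall corpus frequency
    overall, in_relation = Counter(), Counter()
    for tokens, rel in head_edus:
        overall.update(tokens)
        if rel == relation:
            in_relation.update(tokens)
    ratios = {w: in_relation[w] / overall[w]
              for w in in_relation if overall[w] >= min_freq}
    return sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)[:top_n]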
If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right. Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be. Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation: . [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$ . [Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$ These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph. <<</A Contextless Frequentist Approach>>> <<<A Contextualized Neural Model>>> <<<Task and Model Architecture>>> Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals. Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. 
A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below. As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30. Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training. Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word $x_1, \ldots , x_T$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation, where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \ldots t$, $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction and $\theta = \lbrace W,b\rbrace $ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation. In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29. . <s> Sometimes this information is available , <sep> but usually not . <n> Label: concession In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. 
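To make the input layout concrete, a minimal sketch of how such EDU pairs could be assembled is given below; the helper name and the exact placement of the role markers (an opening tag naming the first segment's role, a closing tag naming the second segment's role) are assumptions based on the example above rather than a specification from the paper.

```python
def make_edu_pair_input(first_tokens, second_tokens, first_is_satellite):
    """Lay out one EDU pair in text order for the relation classifier.

    The opening tag is assumed to name the role of the first segment and
    the closing tag the role of the second, with <sep> marking the
    transition between satellite and nucleus.
    """
    open_tag, close_tag = ("<s>", "<n>") if first_is_satellite else ("<n>", "<s>")
    return " ".join([open_tag, *first_tokens, "<sep>", *second_tokens, close_tag])

# the concession example above, paired with its relation label for training:
text = make_edu_pair_input(
    "Sometimes this information is available ,".split(),
    "but usually not .".split(),
    first_is_satellite=True,
)
# -> "<s> Sometimes this information is available , <sep> but usually not . <n>"
label = "concession"
```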
We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels. <<</Task and Model Architecture>>> <<<Relation Classification Performance>>> Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled explicitly by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although). However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses). Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with fewer relations and some easy cases that are absent from GUM. Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect. 
Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further. <<</Relation Classification Performance>>> <<<Signaling Metric>>> The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem: . To provide information on the analytical sample as a whole , $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ two additional demographic variables are included . Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow. Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples in the model evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal. To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36. 
. Original: <s> To provide information ... <sep> ... <n>
Masked1: <s> <X> provide information ... <sep> ... <n>
Masked2: <s> To <X> information ... <sep> ... <n>
Masked3: <s> To provide <X> ... <sep> ... <n>
Label: purpose
We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as ${\Delta }_s(t_i) = p(rel \mid X_{mask=\phi }) - p(rel \mid X_{mask=i})$, where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set). To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker. Some examples of this visualization are given in SECREF36-SECREF36 (human annotated endocentric signal tokens are marked by double underlines). . To provide information on the analytical sample as a whole , $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ two additional demographic variables are included . . Telling good jokes is an art that comes naturally to some people , $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ but for others it takes practice and hard work . . 
It is possible that these two children understood the task and really did believe that the puppet did not produce any poor descriptions , and in this regard , are not yet adult-like in their SI interpretations . $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ This is unlikely The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall. In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6). Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. 
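A minimal sketch of the masking procedure behind ${\Delta }_s$ is given below, assuming a trained classifier is available behind a predict_proba(tokens) stand-in that returns a dictionary of relation probabilities; the function name, the masking symbol <X> and the list-based token representation are illustrative assumptions.

```python
def delta_softmax(tokens, true_relation, predict_proba, mask_token="<X>"):
    """Per-token signaling strength: the drop in the softmax probability of
    the correct relation when that token is masked, as described above.

    `tokens` is a list including the <s>/<sep>/<n> markers; `predict_proba`
    is a stand-in for querying the trained classifier.
    """
    separators = {"<s>", "<n>", "<sep>"}
    base = predict_proba(tokens)[true_relation]        # X_mask = the empty set
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in separators:
            continue                                   # separators are never masked
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        scores[i] = base - predict_proba(masked)[true_relation]
    return scores                                      # negative values behave as distractors

# tokens can then be ranked by score to find the strongest signal candidates,
# e.g. sorted(scores, key=scores.get, reverse=True)[:3]
```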
These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next. <<</Signaling Metric>>> <<</A Contextualized Neural Model>>> <<</Automatic Signal Extraction>>> <<<Evaluation and Error Analysis>>> <<<Evaluation Metric>>> To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength). The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals. The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible. For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines. The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. 
The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16. Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline. A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline. Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses. <<</Evaluation Metric>>> <<<Qualitative Analysis>>> Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). 
It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose. . For the present analysis , these responses were recoded into nine mutually exclusive categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ capturing the following options : . Professor Eastman said he is alarmed by what they found . $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ " Pregnant women in Australia are getting about half as much as what they require on a daily basis . . Even so , estimates of the prevalence of perceived discrimination remains rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ At least one prior study by Kessler and colleagues [ 15 ] , however , using measures of perceived discrimination in a large American sample , reported that approximately 33 % of respondents reported some form of discrimination Unsurprisingly, the model sometimes makes sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. 
However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators. . The agreement was that Gorbachev agreed to a quite remarkable concession : $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ he agreed to let a united Germany join the NATO military alliance . . The opening of the joke — or setup — should have a basis in the real world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ so your audience can relate to it , In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead: . Which previous Virginia Governor(s) do you most admire and why ? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ Thomas Jefferson . From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence final why, by contrast, were noticed by annotators but are not as unambiguous (the former could be a determiner, and the latter in sentence final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues. 
Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\Delta }_s<$-0.2 are underlined). . How do they treat those not like themselves ? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ then they 're either over-zealous , ignorant of other people or what to avoid those that contradict their fantasy land that caters to them and them only . . God , I do n't know ! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ but nobody will go to fight for noses any more . In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast. In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness in the model's inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another. 
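Under the same assumptions as the ${\Delta }_s$ sketch above, candidate distractors can simply be read off the score dictionary; the -0.2 cutoff mirrors the underlining criterion used in the examples and is not a principled threshold.

```python
def find_distractors(scores, threshold=-0.2):
    """Positions whose masking *helps* the classifier, i.e. candidate
    distractors in the sense used above.  `scores` maps token positions to
    delta-softmax values as returned by the masking procedure."""
    return [i for i, d in sorted(scores.items()) if d < threshold]
```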
<<</Qualitative Analysis>>> <<<Performance on Signal Types>>> To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type. Three of the top four categories which the model performs best for are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words are actually noticed, which both belong to the same stem (decline/declining): . The report says the decline in iodine intake appears to be due to changes in the dairy industry , where chlorine-containing sanitisers have replaced iodine-containing sanitisers . $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ Iodine released from these chemicals into milk has been the major source of dietary iodine in Australia for at least four decades , but is now declining . 
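One way to implement the per-type scoring just described is sketched below, assuming each instance pairs a ${\Delta }_s$ score dictionary with the set of gold token positions annotated for a given signal subtype; the data layout is an assumption for illustration.

```python
def type_recall(instances, tolerance=2):
    """Share of gold signal tokens of one subtype recovered when the system
    may guess as many tokens as there are gold signal tokens in the EDU
    pair, plus a tolerance of two additional guesses, as described above.

    `instances` is an iterable of (scores, gold_indices) pairs, where
    `scores` maps token positions to delta-softmax values and
    `gold_indices` is a set of annotated signal positions.
    """
    found = total = 0
    for scores, gold_indices in instances:
        budget = len(gold_indices) + tolerance
        guesses = set(sorted(scores, key=scores.get, reverse=True)[:budget])
        found += len(gold_indices & guesses)
        total += len(gold_indices)
    return found / total if total else 0.0
```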
We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data). Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them. Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference) are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44. . On a new website , " The Internet Explorer 6 Countdown " , Microsoft has launched an aggressive campaign to persuade users to stop using IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ Its goal is to decrease IE6 users to less than one percent . Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the shading of the following is) is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast). Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date. 
. NASA celebrates 30th anniversary of first shuttle launch ; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ Wednesday , April 13 , 2011 <<</Performance on Signal Types>>> <<</Evaluation and Error Analysis>>> <<<Discussion>>> This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for the extraction of context dependent and gradient discourse signal detection in individual texts. The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work. The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types. Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results. To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47. 
The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their box plots indicates that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions. For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. “they wanted to see if...”). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal “a chance to go”, which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to their overall high string frequency and low specificity. Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests? Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. 
While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25). Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, are still unavailable to the classifier – if they were being used, then ${\Delta }_s$ would reflect the effects of their removal, but this is largely not the case. This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals. In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. 
We also hope to see applications of discourse relations such as machine comprehension BIBREF20 and sentiment analysis BIBREF55 etc. benefit from the proposed model architecture as well as the dataset. <<</Discussion>>> <<</Title>>>
{ "references": [ "Introduction, Discussion" ], "type": "disordered_section" }
2002.00317
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Citation Text Generation <<<Abstract>>> We introduce the task of citation text generation: given a pair of scientific documents, explain their relationship in natural language text in the manner of a citation from one text to the other. This task encourages systems to learn rich relationships between scientific texts and to express them concretely in natural language. Models for citation text generation will require robust document understanding including the capacity to quickly adapt to new vocabulary and to reason about document content. We believe this challenging direction of research will benefit high-impact applications such as automatic literature review or scientific writing assistance systems. In this paper we establish the task of citation text generation with a standard evaluation corpus and explore several baseline models. <<</Abstract>>> <<<Introduction>>> The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time? Several lines of research seek to do so. Citation recommendations systems BIBREF1, BIBREF2, BIBREF3 suggest references to relevant published work for a given document such as a current draft. Summarization systems BIBREF4, BIBREF5 condense the information in one or more documents, allowing researchers to more quickly understand the basic ideas in a piece of research. We introduce a complementary—but so far unaddressed—problem, citation text generation, where the relationship between a document and one or several others is expressed in natural language text. This differs from traditional summarization in that the primary focus is explaining the relationship between the two documents rather than their content. Automatically describing inter-document relationships could dramatically decrease the time researchers devote to literature review. For instance, a new paper could be explained in terms of its relationships to relevant works that a particular reader is most familiar with, rather than just those which the authors elected to cite (personalization). Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art. Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices. Beyond the immediate utility of citation text generation systems, the task offers significant challenges for language understanding and generation research. A major challenge is how to represent the information in one or more scientific texts. These documents are longer than those in most other domains typically studied in NLP, and make use of a long-tailed, open-domain technical vocabulary. Often an important phrase in the citing sentence output occurs only in a specific cited document and not elsewhere in the corpus. 
This requires a model that can learn phrase meanings from very few exposures, an important but unsolved problem for text generation systems. Possibly more challenging is understanding and expressing the various and nuanced relationships between related scientific works. In this work, we introduce the task of citation text generation. Leveraging the full texts of English-language scientific articles, we construct a dataset of citation sentences in the computer science domain for training and evaluating citation text generation models. We investigate strong retrieval and neural baseline models against which future work can compare. For use cases where large models can be trained, we extend the successful GPT2 architecture BIBREF6 to the scientific domain with additional pre-training and subsequent fine-tuning on the citation generation task. We experiment with different kinds of document context in the fine-tuning and inference stages. We also explore retrieval-based techniques which may more easily generalize to lower-resource settings. These models retrieve citation sentences from training documents which are most similar to test inputs. Our evaluations show that these techniques often produce plausible citation sentences, but indicate clear directions for improvement. Code and artifacts are provided for future research. <<</Introduction>>> <<<Task>>> Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document. If we want to leverage powerful modern neural text generation systems, we are faced with the problem of how to represent the documents in a way that these models can consume. In particular, language models like GPT2 are trained to predict next token probabilities given long stretches of contiguous text from a single document. It is not clear how to mix information from more than one document when providing context to these models. An additional difficulty of the citation text generation task is the vocabulary. In this domain, low-frequency, highly meaningful terms regularly appear in output texts. These terms may be completely novel to a single or small collection of papers (consider the phrase “citation text generation”, for instance), yet they are necessary for explaining the paper. This framing suggests a supervised learning setup. Let $t$ denote a citing sentence drawn from $S$, and $S^{\prime }$ denote $S$ without $t$. Then let $p(t \mid S^{\prime }, C; \theta )$ be the probability of $t$ given $S^{\prime }$, cited document $C$, and model parameters $\theta $. The goal of learning a citation text generation model would be to maximize this probability across a large number of $t,S,C$ triples, so long as the parameters also generalize to unseen instances. At inference time, the goal is to generate a sentence $t^\ast $ which accurately describes the relationship between $S$ and $C$. The most appropriate evaluation metric for most text generation tasks is human judgment by potential users of the system. Evaluating citation text requires human judges with scientific expertise. 
For exploratory purposes, we use the standard automatic metrics for text generation tasks described in Section SECREF4, and we provide an expert error analysis in Section SECREF14. For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4. <<</Task>>> <<<Models>>> We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data. <<<Neural Text Generation>>> Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation. To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \ldots x_n$ and citing sentence $Y = y_1 \ldots y_m$ with a special separator token $\mho $. The model learns to approximate next token probabilities for each index after $\mho $: $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$ for $0<i<m$ and model parameters $\theta $. Cross-entropy loss is calculated for each $y_i$ and backpropagation is used to find parameters $\theta $ which maximize $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$. To adapt Equation DISPLAY_FORM6 to the citation text generation task, we construct the conditioning context $X$ from the source and cited documents. We take $j$ tokens from source document $s_1,\ldots ,s_j$ along with $k$ tokens from the cited document $c_1,\ldots ,c_k$. (Which tokens are drawn from the two documents is an independent variable that we explore experimentally.) We then condition the generation of citing sentence $Y$ on $X = s_1,\ldots ,s_j,\mho ,c_1,\ldots ,c_k$. This model is trained to predict each token of $Y$ as described above. <<<Context>>> The primary question we investigate with this model is what kind of input is best for generating accurate and informative citation sentences. Prior works in citation recommendation have made use of abstracts, which perhaps act as sufficient summaries of document content for this task. Additionally, we explore variants of extended context, such as the introduction or first section after the abstract. Since scientific texts are too long to fit into the context window of our generation model, we also investigate a “sampling” approach which samples sentences from throughout the document until the context window is full. In this work, we combine either the abstract or introduction of the source document with each of the abstract, introduction, or sampled sentences from the cited document. 
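The following is an editorial illustration, not part of the quoted paper: a minimal sketch of the input construction and loss masking described above, assuming the HuggingFace transformers API for GPT2. The separator string, truncation lengths, the helper name make_example, and the example strings are placeholder assumptions rather than the authors' code.

```python
# Sketch of fine-tuning input construction: X = source tokens, separator, cited tokens,
# followed by the citing sentence Y; cross-entropy is applied only to the Y positions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

SEP = " <|sep|> "  # placeholder string standing in for the special separator token (an assumption)

def make_example(source_ctx, cited_ctx, citing_sentence, max_ctx=900):
    # j tokens from the source document, separator, k tokens from the cited document
    x_ids = (tokenizer.encode(source_ctx)[: max_ctx // 2]
             + tokenizer.encode(SEP)
             + tokenizer.encode(cited_ctx)[: max_ctx // 2])
    y_ids = tokenizer.encode(SEP + citing_sentence + tokenizer.eos_token)
    input_ids = torch.tensor([x_ids + y_ids])
    # Label -100 marks positions ignored by the loss, so only the citing-sentence
    # tokens after the separator contribute to the cross-entropy.
    labels = torch.tensor([[-100] * len(x_ids) + y_ids])
    return input_ids, labels

input_ids, labels = make_example("source context ...", "cited context ...", "BIBREF0 showed that ...")
loss = model(input_ids=input_ids, labels=labels).loss  # approximates p(y_{i+1} | X, sep, y_1..y_i)
loss.backward()
```

At inference time the same conditioning context X would be supplied without Y, and the citing sentence decoded token by token.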
<<</Context>>> <<</Neural Text Generation>>> <<<Retrieval with Approximate Nearest Neighbors>>> While neural text generation techniques have advanced significantly in recent years, they are still inferior to human-authored texts. For some tasks, it is better to retrieve a relevant human-authored text rather than generating novel text automatically BIBREF10. Is this also the case for citation text generation? To answer this question, we adapt an approximate nearest neighbor search algorithm to find similar pairs of documents. The basic search procedure is as follows: Given a test instance input $(S,C)$ for source $S$ and cited document $C$, we find the set $\bf {N}_C$, the nearest neighbors to $C$ in the training data. For each document $N_C$ from $\bf {N}_C$, let $\bf {N}_S$ be the set of documents that cite $N_C$. This means that each $N_S \in {\bf N}_S$ contains at least one citing sentence $t^{\prime }$ which cites $N_C$. We return the $t^{\prime }$ associated with the $(N_S,N_C)$ pair from the training data which is closest to $(S,C)$. We measure the closeness of two pairs of documents by measuring cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as $\alpha \, d_{\cos }(S, N_S) + \beta \, d_{\cos }(C, N_C)$, where $d_{\cos }$ denotes the cosine distance between the corresponding abstract embeddings and $\alpha $ and $\beta $ control the relative contribution of the two document similarities. We explore setting both $\alpha $ and $\beta $ to 1, or tuning them to optimize either BLEU or BERTScore on the validation set. <<</Retrieval with Approximate Nearest Neighbors>>> <<<Language Model Pretraining>>> GPT2-based models have demonstrated an ability to capture long distance dependencies over hundreds of tokens, which we hypothesize will allow them to synthesize information in both the source and cited documents. But citation text generation models must also handle the challenging technical vocabulary of the scientific domain. Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks BIBREF11. Inspired by this, we experiment with additional pretraining of GPT2 in the science domain. This model, SciGPT2, is trained for an additional 3 epochs over the full text of the documents in our corpus using a language modeling objective. We note that both SciGPT2 and the SciBERT language models used here have been exposed to citing sentences from the test and validation sets as in-line citations during their pre-training phases, which may improve their performance versus models without this exposure. Such exposure is typical when using pretrained language models, as text from test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained. <<</Language Model Pretraining>>> <<</Models>>> <<<Evaluation>>> We compare the different baseline systems using BLEU BIBREF12, ROUGE (specifically ROUGE 1, 2, and L; BIBREF13), and the recently introduced BertScore BIBREF14, a similarity metric based on BERT embeddings which has been shown to correlate well with human judgements on other tasks. To adapt the BertScore metric to the scientific text domain, we use SciBERT embeddings. Table TABREF7 (above the double line) shows the performance of the SciGPT2 model on the test set when provided with the different input context combinations outlined in Section SECREF5. 
We find that context does make a difference for this category of model, and that models which have access to the intro of the documents outperform those which use abstracts or sampling. Automatic evaluation of the retrieval-based methods on the test data is shown below the double line in Table TABREF7. This table shows that the retrieval methods perform well on this task. However, we will show the limitations of these automatic metrics in Section SECREF14. We also observe that tuning the $\alpha $ and $\beta $ parameters on the validation set results in overfitting for this method. Outputs are largely unchanged by this tuning; fewer than 400 test datapoints differ from the untuned outputs. A larger validation split may alleviate this problem. Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models. We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used. <<</Evaluation>>> <<<Analysis>>> In this section we take a closer look at the details of the SciGPT2 and IR system outputs on a collection of validation datapoints. We provide a quantitative error analysis as well as qualitative analysis and examples. <<<Errors>>> In order to better understand the performance of the models, we undertake a quantitative analysis of their output. One author randomly selected 200 datapoints from the validation set and their associated model outputs. Source and cited papers in the topic of NLP were used so as to facilitate expert judgement. For tractability, we limited the context presented to the annotator to the document abstracts and analyze the outputs of the abs $\times $ abs and IR systems. In this analysis, we ask whether the models are producing believable citing sentences given their input. In particular, we are interested in the relative believability of the SciGPT2 and IR systems, as well as how believability of a citing sentence changes when a reader can see the abstract of one document or both. We use 100 datapoints with outputs from the SciGPT2 system and 100 with outputs from the IR system. For 50 datapoints from each system, the cited document's abstract is initially masked such that only the source context is visible (Source, One Visible). Based only on the source context, the annotator judged whether the model output (1) could have convincingly been a citation in the source document based solely on the abstract (believable), (2) could have been a citation in the source document, but unclear from the abstract alone and depends on the rest of the paper's content (content-dependent), or (3) is unlikely to appear in this document (not believable). After making this judgment, the annotator was then shown the abstract of the cited document and asked to make the 3-way believability judgment based on both source and cited abstracts (Source, Both Visible). This process is repeated with the remaining 50 datapoints, but with the cited context masked initially (Cited, One Visible and Cited, Both Visible). The results of our analysis are presented in Table TABREF13. We find that believability in the Cited, One Visible condition correlates well with the Cited, Both Visible condition. 
In the Source conditions, we see a greater difference in believability between One Visible and Both Visible. These findings make sense: in-line citations often summarize a prior study rather than highlight the paper's own contributions. Together, these results indicate that the believability of citing sentences is more related to the cited document than to the source. Another interesting feature of this analysis is the difference between SciGPT2 and IR in terms of context-dependent citing sentences. We observe fewer such judgements in the IR outputs. This is probably due to the fact that neural text generation systems such as SciGPT2 will sometimes produce generic, uninformative outputs while the IR system outputs are usually specific enough that a stronger believability judgement can be made. We also observe an overall higher instance of not believable judgements of the IR model outputs. This implies that automatic metrics such as BLEU, where the IR system scored higher than SciGPT2, do not correlate with factual accuracy in citation text generation. Example citations and annotations are shown in Table TABREF15. We find that in the cases where the model-generated outputs are unconvincing they are still on topic. All 10 cases in the Source, One Visible and 4 of the cases in Cited, One Visible that were no longer believable in the Both Visible conditions exhibit this quality. A common example (4 cases) of this phenomenon occurs when the model output references a dataset. While the dataset would be potentially relevant to both papers, the cited papers focus on modeling contributions and do not introduce a novel corpus. <<</Errors>>> <<<Examples>>> Example system outputs for randomly selected validation instances are shown in Table TABREF18. We see that both the SciGPT2 and IR model outputs regularly hit on the correct broad topic of the cited text (such as “literary analysis” or “image captioning evaluation metrics”). It is notable that the SciGPT2 model outputs syntactically correct and coherent citation sentences, even given the difficulty of the vocabulary in this domain. This is a testament to the power of the domain-specific language model training. We also observe that the outputs of the SciGPT2 model are often shorter than the desired citing sentence. Brevity is a known issue for neural text generation and may be alleviated by penalizing brevity in the inference procedure. More problematic are the factual errors in the generated text. In the last example, for instance, we see that SciGPT2 fails to cite the specific image captioning dataset described in the cited paper (Pascal1K) and instead focuses on the more general evaluation metric for the image captioning task (CIDEr). This is typical of neural text generation systems, which often assign high probability to generic or frequent phrases and revert to these in the face of uncertainty. <<</Examples>>> <<<Future Work>>> The fluency and topical relevance of the baseline models show the plausibility of the citation text generation task as well as the utility of including pretrained scientific language models in future models. But based on the kinds of errors we have seen, future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improved modeling of the cited document. Factual accuracy is difficult to enforce in statistical text generation systems, especially where inference includes sampling procedures. Grounding to knowledge bases could help. 
For this task, knowledge extracted from candidate generations could be compared with knowledge from the full source and cited documents to prune false or irrelevant statements. Further, modeling input documents as knowledge graphs of their contents may help these algorithms better understand the cited document, resulting in better outputs. However, such a model will have to address the open problem of combining pretrained language models with graph encoding techniques. <<</Future Work>>> <<</Analysis>>> <<<Related Work>>> The current work builds on recent research in scientific document understanding, including citation recommendation and categorization, as well as scientific document summarization. Citation recommendation, or the task of selecting works related to a source document which would be suitable for citing, is a longstanding goal of AI research BIBREF15, BIBREF2, BIBREF16. Recently, researchers have sought to categorize citations using various ontologies of citation intents. BIBREF1 sought to discern “highly influential” citations from others. BIBREF17 uses six categories including “motivation”, “uses”, and “future work” among others. BIBREF3 condense this ontology to just three: “background”, “method”, and “result comparison”. We view the citation text generation task as an extension of these classification approaches with distinct advantages. While classification requires an extant citation link to exist, our generation task can describe possible relationships between works which do not cite each other, such as contemporaneous works. Additionally, because gold citation texts are readily available in scientific documents, the citation text generation task requires no task-specific annotated training data. In practice, citation classification is used to assist in suggesting relevant works to researchers; citation text generation complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI. Generating a citation is also connected to summarizing scientific documents. There is a long history of research on summarizing scientific documents BIBREF18, BIBREF19. More recently, researchers have included citing sentences as part of the input for summarization, hoping to capture the contribution of a work along with its content BIBREF20, BIBREF21, BIBREF5. Ours is the first to focus on the specific relationship between two documents when generating such sentences. Because of the emphasis on relational document understanding in our task, citation generation models can be used to assist with drafting papers as well, reducing researcher workload and providing non-native writers with a helpful first draft. Our work builds on recent advances in transfer learning in NLP. In particular, large pretrained models such as BERT BIBREF22 and GPT2 BIBREF6 have made strong advances on a number of tasks BIBREF23. It has also been shown that pretraining these models on domain-specific data further improves results on domain-specific tasks BIBREF11, BIBREF24. In this work, we apply that methodology by adding an additional pretraining phase on in-domain data before finetuning a GPT2 model on the citation text generation task. <<</Related Work>>> <<<Conclusion>>> We have introduced the challenging but useful task of citation text generation. This task requires reasoning about the relationships between documents and expressing these relationships in natural language text. 
We have established a dataset for this task and studied the performance of contemporary neural text generation and information retrieval models. Our analysis shows that while these models produce fluent and topical outputs, more research is needed to ensure factual accuracy and specificity in the generated text. <<</Conclusion>>> <<</Title>>>
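The following is an editorial illustration, not part of the quoted paper: a minimal sketch of the nearest-neighbor scoring described in the Models section above, assuming SciBERT is loaded through the HuggingFace transformers API. The checkpoint name, mean pooling over the last hidden states, and the default $\alpha =\beta =1$ weights are assumptions for illustration only.

```python
# Embed each abstract with SciBERT (average the contextual embeddings, then normalize),
# and rank candidate training pairs (N_S, N_C) by a weighted sum of cosine distances.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
enc = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

def embed(abstract: str) -> torch.Tensor:
    inputs = tok(abstract, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = enc(**inputs).last_hidden_state[0]  # (tokens, dim)
    vec = hidden.mean(dim=0)                         # average contextualized embeddings
    return vec / vec.norm()                          # normalize to unit length

def pair_distance(s_vec, c_vec, ns_vec, nc_vec, alpha=1.0, beta=1.0):
    # Cosine distance = 1 - cosine similarity; vectors are already unit length.
    d_source = 1.0 - torch.dot(s_vec, ns_vec)
    d_cited = 1.0 - torch.dot(c_vec, nc_vec)
    return alpha * d_source + beta * d_cited

# Usage: score every candidate pair from the training data and return the citing
# sentence t' attached to the lowest-distance (N_S, N_C) pair, e.g.:
# best = min(candidates, key=lambda cand: pair_distance(s_vec, c_vec, cand.ns_vec, cand.nc_vec))
```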
{ "references": [ "Related Work, Abstract" ], "type": "disordered_section" }
2002.00317
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Citation Text Generation <<<Abstract>>> We introduce the task of citation text generation: given a pair of scientific documents, explain their relationship in natural language text in the manner of a citation from one text to the other. This task encourages systems to learn rich relationships between scientific texts and to express them concretely in natural language. Models for citation text generation will require robust document understanding including the capacity to quickly adapt to new vocabulary and to reason about document content. We believe this challenging direction of research will benefit high-impact applications such as automatic literature review or scientific writing assistance systems. In this paper we establish the task of citation text generation with a standard evaluation corpus and explore several baseline models. <<</Abstract>>> <<<Introduction>>> The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time? Several lines of research seek to do so. Citation recommendations systems BIBREF1, BIBREF2, BIBREF3 suggest references to relevant published work for a given document such as a current draft. Summarization systems BIBREF4, BIBREF5 condense the information in one or more documents, allowing researchers to more quickly understand the basic ideas in a piece of research. We introduce a complementary—but so far unaddressed—problem, citation text generation, where the relationship between a document and one or several others is expressed in natural language text. This differs from traditional summarization in that the primary focus is explaining the relationship between the two documents rather than their content. Automatically describing inter-document relationships could dramatically decrease the time researchers devote to literature review. For instance, a new paper could be explained in terms of its relationships to relevant works that a particular reader is most familiar with, rather than just those which the authors elected to cite (personalization). Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art. Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices. Beyond the immediate utility of citation text generation systems, the task offers significant challenges for language understanding and generation research. A major challenge is how to represent the information in one or more scientific texts. These documents are longer than those in most other domains typically studied in NLP, and make use of a long-tailed, open-domain technical vocabulary. Often an important phrase in the citing sentence output occurs only in a specific cited document and not elsewhere in the corpus. 
This requires a model that can learn phrase meanings from very few exposures, an important but unsolved problem for text generation systems. Possibly more challenging is understanding and expressing the various and nuanced relationships between related scientific works. In this work, we introduce the task of citation text generation. Leveraging the full texts of English-language scientific articles, we construct a dataset of citation sentences in the computer science domain for training and evaluating citation text generation models. We investigate strong retrieval and neural baseline models against which future work can compare. For use cases where large models can be trained, we extend the successful GPT2 architecture BIBREF6 to the scientific domain with additional pre-training and subsequent fine-tuning on the citation generation task. We experiment with different kinds of document context in the fine-tuning and inference stages. We also explore retrieval-based techniques which may more easily generalize to lower-resource settings. These models retrieve citation sentences from training documents which are most similar to test inputs. Our evaluations show that these techniques often produce plausible citation sentences, but indicate clear directions for improvement. Code and artifacts are provided for future research. <<</Introduction>>> <<<Task>>> Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document. If we want to leverage powerful modern neural text generation systems, we are faced with the problem of how to represent the documents in a way that these models can consume. In particular, language models like GPT2 are trained to predict next token probabilities given long stretches of contiguous text from a single document. It is not clear how to mix information from more than one document when providing context to these models. An additional difficulty of the citation text generation task is the vocabulary. In this domain, low-frequency, highly meaningful terms regularly appear in output texts. These terms may be completely novel to a single or small collection of papers (consider the phrase “citation text generation”, for instance), yet they are necessary for explaining the paper. This framing suggests a supervised learning setup. Let $t$ denote a citing sentence drawn from $S$, and $S^{\prime }$ denote $S$ without $t$. Then let be the probability of $t$ given $S^{\prime }$, cited document $C$, and model parameters $\theta $. The goal of learning a citation text generation model would be to maximize this probability across a large number of $t,S,C$ triples, so long as the parameters also generalize to unseen instances. At inference time, the goal is to generate a sentence $t^\ast $ which accurately describes the relationship between $S$ and $C$. The most appropriate evaluation metric for most text generation tasks is human judgment by potential users of the system. Evaluating citation text requires human judges with scientific expertise. 
For exploratory purposes, we use the standard automatic metrics for text generation tasks described in Section SECREF4, and we an expert error analysis in Section SECREF14. For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4. <<</Task>>> <<<Models>>> We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data. <<<Neural Text Generation>>> Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation. To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \ldots x_n$ and citing sentence $Y = y_1 \ldots y_m$ with a special separator token $\mho $. The model learns to approximate next token probabilities for each index after $\mho $: for $0<i<m$ and model parameters $\theta $. Cross-entropy loss is calculated for each $y_i$ and backpropagation is used find parameters $\theta $ which maximize $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$. To adapt Equation DISPLAY_FORM6 to the citation text generation task, we construct the conditioning context $X$ from the source and cited documents. We take $j$ tokens from source document $s_1,\ldots ,s_j$ along with $k$ tokens from the cited document $c_1,\ldots ,c_k$. (Which tokens are drawn from the two documents is an independent variable that we explore experimentally.) We then condition the generation of citing sentence $Y$ on $X = s_1,\ldots ,s_j,\mho ,c_1,\ldots ,c_k$. This model is trained to predict each token of $Y$ as described above. <<<Context>>> The primary question we investigate with this model is what kind of input is best for generating accurate and informative citation sentences. Prior works in citation recommendation have made use of abstracts, which perhaps act as sufficient summaries of document content for this task. Additionally, we explore variants of extended context, such as the introduction or first section after the abstract. Since scientific texts are too long to fit into the context window of our generation model, we also investigate a “sampling” approach which samples sentences from throughout the document until the context window is full. In this work, we combine either the abstract or introduction of the source document with each of the abstract, introduction, or sampled sentences from the cited document. 
<<</Context>>> <<</Neural Text Generation>>> <<<Retrieval with Approximate Nearest Neighbors>>> While neural text generation techniques have advanced significantly in recent years, they are still inferior to human authored texts. For some tasks, it is better to retrieve a relevant human-authored text rather than generating novel text automatically BIBREF10. Is this also the case for citation text generation? To answer this question, we adapt an approximate nearest neighbor search algorithm to find similar pairs of documents. The basic search procedure is as follows: Given a test instance input $(S,C)$ for source $S$ and cited document $C$, we find the set $\bf {N}_C$, the nearest neighbors to $C$ in the training data. For each document $N_C$ from $\bf {N}_C$, let $\bf {N}_S$ be the set of documents that cite $N_C$. This means that each $N_S \in {\bf N}_S$ contains at least one citing sentence $t^{\prime }$ which cites $N_C$. We return the $t^{\prime }$ associated with the $(N_S,N_C)$ pair from the training which is closest to $(S,C)$. We measure the closeness of two pairs of documents by measuring cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as: where $\alpha $ and $\beta $ control the relative contribution of the two document similarities. We explore setting both $\alpha $ and $\beta $ to 1, or tuning them to optimize either BLEU or BERTScore on the validation set. <<</Retrieval with Approximate Nearest Neighbors>>> <<<Language Model Pretraining>>> GPT2-based models have demonstrated an ability to capture long distance dependencies over hundreds of tokens, which we hypothesize will allow them to synthesize information in both the source and cited documents. But citation text generation models must also handle the challenging technical vocabulary of the scientific domain. Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks BIBREF11. Inspired by this, we experiment with additional pretraining of GPT2 in the science domain. This model, SciGPT2, is trained for an additional 3 epochs over the full text of the documents in our corpus using a language modeling objective. We note that both SciGPT2 and the SciBERT language models used here have been exposed to citing sentences from the test and validation sets as in-line citations during their pre-training phases, which may improve their performance versus models without this exposure. Such exposure is typical when using pretrained language models, as text from test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained. <<</Language Model Pretraining>>> <<</Models>>> <<<Evaluation>>> We compare the different baseline systems using BLEU BIBREF12, ROUGE (specifically ROUGE 1, 2, and L; BIBREF13), and the recently introduced BertScore BIBREF14, a similarity metric based on BERT embeddings which has been shown to correlate well with human judgements on other tasks. To adapt the BertScore metric to the scientific text domain, we use SciBERT embeddings. Table TABREF7 (above the double line) shows the performance of the SciGPT2 model on the test set when provided with the different input context combinations outlined in Section SECREF5. 
We find that context does make a difference for this category of model, and that models which have access to the intro of the documents outperform those which use abstracts or sampling. Automatic evaluation of the retrieval-based methods on the test data is shown below the double line in Table TABREF7. This table shows that the retrieval methods perform well on this task. However, we will show the limitations of these automatic metrics in Section SECREF14. We also observe that tuning the $\alpha $ and $\beta $ parameters on the validation set results in overfitting for this method. Outputs are largely unchanged by this tuning; fewer than 400 test datapoints differ from the untuned outputs. A larger validation split may alleviate this problem. Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models. We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used. <<</Evaluation>>> <<<Analysis>>> In this section we take a closer look at the details of the SciGPT2 and IR system outputs on a collection of validation datapoints. We provide a quantitative error analysis as well as qualitative analysis and examples. <<<Errors>>> In order to better understand the performance of the models, we undertake a quantitative analysis of their outputs. One author randomly selected 200 datapoints from the validation set and their associated model outputs. Source and cited papers in the topic of NLP were used so as to facilitate expert judgement. For tractability, we limited the context presented to the annotator to the document abstracts and analyzed the outputs of the abs $\times $ abs and IR systems. In this analysis, we ask whether the models are producing believable citing sentences given their input. In particular, we are interested in the relative believability of the SciGPT2 and IR systems, as well as how believability of a citing sentence changes when a reader can see the abstract of one document or both. We use 100 datapoints with outputs from the SciGPT2 system and 100 with outputs from the IR system. For 50 datapoints from each system, the cited document's abstract is initially masked such that only the source context is visible (Source, One Visible). Based only on the source context, the annotator judged whether the model output (1) could have convincingly been a citation in the source document based solely on the abstract (believable), (2) could have been a citation in the source document, but unclear from the abstract alone and depends on the rest of the paper's content (content-dependent), or (3) is unlikely to appear in this document (not believable). After making this judgment, the annotator was then shown the abstract of the cited document and asked to make the 3-way believability judgment based on both source and cited abstracts (Source, Both Visible). This process is repeated with the remaining 50 datapoints, but with the cited context masked initially (Cited, One Visible and Cited, Both Visible). The results of our analysis are presented in Table TABREF13. We find that believability in the Cited, One Visible condition correlates well with the Cited, Both Visible condition.
In the Source conditions, we see a greater difference in believability between One Visible and Both Visible. These findings make sense: in-line citations often summarize a prior study rather than highlight the paper's own contributions. Together, these results indicate that the believability of citing sentences is more related to the cited document than to the source. Another interesting feature of this analysis is the difference between SciGPT2 and IR in terms of context-dependent citing sentences. We observe fewer such judgements in the IR outputs. This is probably due to the fact that neural text generation systems such as SciGPT2 will sometimes produce generic, uninformative outputs, while the IR system outputs are usually specific enough that a stronger believability judgement can be made. We also observe an overall higher instance of not believable judgements of the IR model outputs. This implies that automatic metrics such as BLEU, where the IR system scored higher than SciGPT2, do not correlate with factual accuracy in citation text generation. Example citations and annotations are shown in Table TABREF15. We find that in the cases where the model-generated outputs are unconvincing, they are still on topic. All 10 cases in the Source, One Visible and 4 of the cases in Cited, One Visible that were no longer believable in the Both Visible conditions exhibit this quality. A common example (4 cases) of this phenomenon occurs when the model output references a dataset. While the dataset would be potentially relevant to both papers, the cited papers focus on modeling contributions and do not introduce a novel corpus. <<</Errors>>> <<<Examples>>> Example system outputs for randomly selected validation instances are shown in Table TABREF18. We see that both the SciGPT2 and IR model outputs regularly hit on the correct broad topic of the cited text (such as “literary analysis” or “image captioning evaluation metrics”). It is notable that the SciGPT2 model outputs syntactically correct and coherent citation sentences, even given the difficulty of the vocabulary in this domain. This is a testament to the power of the domain-specific language model training. We also observe that the outputs of the SciGPT2 model are often shorter than the desired citing sentence. Brevity is a known issue for neural text generation and may be alleviated by penalizing brevity in the inference procedure. More problematic are the factual errors in the generated text. In the last example, for instance, we see that SciGPT2 fails to cite the specific image captioning dataset described in the cited paper (Pascal1K) and instead focuses on the more general evaluation metric for the image captioning task (CIDEr). This is typical of neural text generation systems, which often assign high probability to generic or frequent phrases and revert to these in the face of uncertainty. <<</Examples>>> <<<Future Work>>> The fluency and topical relevance of the baseline models show the plausibility of the citation text generation task as well as the utility of including pretrained scientific language models in future models. But based on the kinds of errors we have seen, future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improving the modeling of the cited document. Factual accuracy is difficult to enforce in statistical text generation systems, especially where inference includes sampling procedures. Grounding to knowledge bases could help.
For this task, knowledge extracted from candidate generations could be compared with knowledge from the full source and cited documents to prune false or irrelevant statements. Further, modeling input documents as knowledge graphs of their contents may help these algorithms better understand the cited document, resulting in better outputs. However, such a model will have to address the open problem of combining pretrained language models with graph encoding techniques. <<</Future Work>>> <<</Analysis>>> <<<Related Work>>> The current work builds on recent research in scientific document understanding, including citation recommendation and categorization, as well as scientific document summarization. Citation recommendation, or the task of selecting works related to a source document which would be suitable for citing, is a longstanding goal of AI research BIBREF15, BIBREF2, BIBREF16. Recently, researchers have sought to categorize citations using various ontologies of citation intents. BIBREF1 sought to discern “highly influential” citations from others. BIBREF17 uses six categories including “motivation”, “uses”, and “future work” among others. BIBREF3 condense this ontology to just three: “background”, “method”, and “result comparison”. We view the citation text generation task as an extension of these classification approaches with distinct advantages. While classification requires an extant citation link to exist, our generation task can describe possible relationships between works which do not cite each other, such as contemporaneous works. Additionally, because gold citation texts are readily available in scientific documents, the citation text generation task requires no task-specific annotated training data. In practice, citation classification is used to assist in suggesting relevant works to researchers; citation text generation complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI. Generating a citation is also connected to summarizing scientific documents. There is a long history of research on summarizing scientific documents BIBREF18, BIBREF19. More recently, researchers have included citing sentences as part of the input for summarization, hoping to capture the contribution of a work along with its content BIBREF20, BIBREF21, BIBREF5. Ours is the first to focus on the specific relationship between two documents when generating such sentences. Because of the emphasis on relational document understanding in our task, citation generation models can be used to assist with drafting papers as well, reducing researcher workload and providing non-native writers with a helpful first draft. Our work builds on recent advances in transfer learning in NLP. In particular, large pretrained models such as BERT BIBREF22 and GPT2 BIBREF6 have made strong advances on a number of tasks BIBREF23. It has also been shown that pretraining these models on domain-specific data further improves results on domain-specific tasks BIBREF11, BIBREF24. In this work, we apply that methodology by adding an additional pretraining phase on in-domain data before finetuning a GPT2 model on the citation text generation task. <<</Related Work>>> <<<Conclusion>>> We have introduced the challenging but useful task of citation text generation. This task requires reasoning about the relationships between documents and expressing these relationships in natural language text.
We have established a dataset for this task and studied the performance of contemporary neural text generation and information retrieval models. Our analysis shows that while these models produce fluent and topical outputs, more research is needed to ensure factual accuracy and specificity in the generated text. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Analysis" ], "type": "disordered_section" }
2004.04228
In the given paper, there are two sections whose positions are swapped, leading to an ill-organized paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Asking and Answering Questions to Evaluate the Factual Consistency of Summaries <<<Abstract>>> Practical applications of abstractive summarization models are limited by frequent factual inconsistencies with respect to their input. Existing automatic evaluation metrics for summarization are largely insensitive to such errors. We propose an automatic evaluation protocol called QAGS (pronounced "kags") that is designed to identify factual inconsistencies in a generated summary. QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source. To evaluate QAGS, we collect human judgments of factual consistency on model-generated summaries for the CNN/DailyMail (Hermann et al., 2015) and XSUM (Narayan et al., 2018) summarization datasets. QAGS has substantially higher correlations with these judgments than other automatic evaluation metrics. Also, QAGS offers a natural form of interpretability: The answers and questions generated while computing QAGS indicate which tokens of a summary are inconsistent and why. We believe QAGS is a promising tool in automatically generating usable and factually consistent text. <<</Abstract>>> <<<Introduction>>> Automatic summarization aims to produce summaries that are succinct, coherent, relevant, and — crucially — factually correct. Recent progress in conditional text generation has led to models that can generate fluent, topical summaries BIBREF2. However, model-generated summaries frequently contain factual inconsistencies, limiting their applicability BIBREF3. The problem of factual inconsistency is due in part to the lack of automatic evaluation metrics that can detect such errors. Standard metrics for evaluating generated text are predominantly based on counting $n$-grams, which weigh all $n$-grams equally and are insensitive to semantic errors. This inadequacy leaves human evaluation as the primary method for evaluating the factual consistencies, which has been noted to be challenging even for humans BIBREF4, BIBREF5, in addition to being slow and costly. We argue that evaluation metrics that are able to capture subtle semantic errors are required to build better models. In this work, we introduce a general framework for evaluating conditional text generation that is designed to detect factual inconsistencies in generated text with respect to some input. Our framework consists of three steps: (1) Given a generated text, a question generation (QG) model generates a set of questions about the text. (2) We then use question answering (QA) models to answer these questions given both the input and the generated text. (3) A quality score is computed based on the similarity of corresponding answers. This approach leverages recent progress in QA and QG to ask and answer human readable, on-topic questions BIBREF6, BIBREF7. It only assumes access to a question answering dataset to train the QG and QA models, and is applicable to any modality where a QA model is available, e.g. text, images, or knowledge graphs.
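The three steps above can be written down as a small skeleton. Every name here is a placeholder: generate_questions, answer, and similarity are hypothetical callables standing in for the QG model, the QA model, and the answer-similarity function; the sketch only mirrors the protocol of asking questions about the generated text, answering them against both the input and the generation, and scoring the agreement.

def consistency_score(source, generated, generate_questions, answer, similarity, k=20):
    # Step 1: generate questions about the generated text.
    questions = generate_questions(generated, num_questions=k)
    scores = []
    for q in questions:
        # Step 2: answer each question using the input and using the generated text.
        answer_from_source = answer(q, source)
        answer_from_generation = answer(q, generated)
        # Step 3: compare the two answers.
        scores.append(similarity(answer_from_source, answer_from_generation))
    return sum(scores) / len(scores) if scores else 0.0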
We use this framework to develop QAGS (Question Answering and Generation for Summarization), a metric for evaluating the factual consistency of abstractive document summaries. Compared to commonly used automatic metrics such as ROUGE BIBREF8, QAGS shows dramatically higher correlations with human judgements of factuality, for example achieving a Pearson correlation coefficient of 54.52 on the CNN/DailyMail summarization task, compared to 17.72 for ROUGE-2. QAGS also achieves new state-of-the-art results on evaluating the factuality of summaries, outperforming recently proposed NLI models for this task BIBREF5. Finally, we analyse the robustness of QAGS through an ablation study. QAGS shows robustness to the quality of the underlying QG and QA models, the domain of the models, and the number of questions asked. Even under the worst ablation settings, QAGS still has stronger correlation with human judgments than other automatic metrics. Overall, we contribute the following: (1) We introduce QAGS, an automatic model-based evaluation metric for measuring the factual consistency of model-generated text. (2) We collect a new set of human judgments of factual consistency of model-generated summaries for two summarization datasets. We demonstrate that QAGS correlates with these judgments significantly better than other automatic metrics. (3) We show via ablations that QAGS is robust to a number of factors including underlying model quality and domain mismatch. (4) We analyze the questions and answers produced in computing QAGS to illustrate which parts of summaries are inconsistent. (5) We will release models and code to compute QAGS. <<</Introduction>>> <<<Background: Automatically Evaluating Machine Generated Text>>> Standard approaches to evaluating generated text are primarily based on counting $n$-gram overlap. These methods assume access to one or more reference texts, and score a generated summary based on the precision and recall of all reference $n$-grams in the generated summary. We briefly describe the most common metrics in this family, and refer readers to BIBREF9 for further discussion. ROUGE BIBREF8 was developed specifically for evaluating automatic summarization, and its variants are the de facto standard for such. The most common variant is ROUGE-$n$ (typically $n \in \lbrace 1, 2\rbrace $), which computes the F1 score for all reference $n$-grams in the generated summary. ROUGE-$L$, another commonly used variant, is the length of the longest common subsequence (possibly non-consecutive) between a summary and references. BLEU BIBREF10 is closely related to ROUGE but was developed for machine translation. BLEU computes the precision of the reference $n$-grams in the generated summary. METEOR BIBREF11 extends BLEU by using an alignment between the generated text and a reference, as well as using stemming and synonym replacement for more flexible $n$-gram matching. We identify two key deficiencies when using these $n$-gram based evaluation metrics to detect factual inconsistencies in generated text. First, these metrics require one or more reference texts to compare against. Obtaining references can be expensive and challenging, and as such many text generation datasets contain only a single reference. This problem is exacerbated with high-entropy generation tasks, such as summarization or dialogue, where there is a very large number of acceptable outputs. In these settings, comparing against a single reference is woefully inadequate. 
Second, given a reference to compare against, $n$-gram based approaches weigh all portions of the text equally, even when only a small fraction of the $n$-grams carry most of the semantic content. Factual inconsistencies caused by minor changes may be drowned out by otherwise high $n$-gram overlap, making these metrics insensitive to these errors. For example, the sentences “I am writing my paper in Vancouver.” and “I am not writing my paper in Vancouver.” share nearly all unigrams and bigrams despite having the opposite meaning. <<</Background: Automatically Evaluating Machine Generated Text>>> <<<A Framework for Automatically Evaluating Factual Consistency>>> We introduce a framework for automatically detecting factual inconsistencies in generated text while also addressing the deficiencies of current approaches. Let $X$ and $Y$ be sequences of tokens coming from a vocabulary $V$ where $X$ is a source text and $Y$ is a summary of $X$. We define $p(Q|Y)$ as a distribution over all possible questions $Q$ given summary $Y$, and $p(A|Q, X)$ and $p(A|Q, Y)$ as distributions over all possible answers $A$ to a particular question $Q$ given either the source $X$ or the summary $Y$. We constrain the questions $Q$ and answers $A$ to also be sequences of tokens from $V$. Then the factual consistency of the summary $Y$ is $\mathbb {E}_{Q \sim p(Q|Y)} \left[ D \left( p(A|Q, X), p(A|Q, Y) \right) \right]$, where $D$ is some function measuring the similarity of the two answer distributions. This expression is maximized when $Y$ contains a subset of the information in $X$ such that it produces the same answer for any question from $p(Q|Y)$. This happens trivially when $Y=X$, e.g. we take $X$ as its own summary, but we usually have other desiderata of $Y$ such that this solution is undesirable. This framework addresses the two issues with $n$-gram based approaches. Instead of requiring a reference to compare against, our framework asks questions based on the generation itself, and compares answers with the provided source text. Also, the use of questions focuses the metric on the semantically relevant parts of the generated text, rather than weighting all parts of the text equally. In practice, exactly computing the expectation in Equation DISPLAY_FORM4 is intractable due to the large space of possible questions. One potential workaround is to randomly sample questions from $p(Q|Y)$, but this suffers from high variance and requires many samples to obtain a good estimate. Instead, we focus on producing highly probable questions, e.g. as produced by beam search, which may be biased in the limit, but will require fewer questions to estimate because of the higher quality of the questions. <<</A Framework for Automatically Evaluating Factual Consistency>>> <<<QAGS>>> Using this framework requires specifying the question distribution $p(Q|Y)$, the answer distribution $p(A|Q, Y)$ (or $X$), and the answer similarity function $D$. We apply this framework to summarization to develop QAGS and describe our instantiations of these components. <<<Question Generation>>> To instantiate $p(Q|Y)$, we draw on recent work on automatic question generation (QG), which models this distribution using neural seq2seq models BIBREF12, BIBREF13. We over-sample questions, and then filter out low-quality questions as follows. First, we train and generate from answer-conditional QG models: The model receives both the answer and the source article, and is trained to maximize the likelihood of the paired question. At test time, we extract named entities and noun phrases as answer candidates using spaCy.
Second, we filter out low-quality questions using a number of heuristics, such as duplicates and questions less than three tokens long. We also found it useful to run the QA model (see next section) on all of the candidate questions, and filter out questions for which the QA model predicted no answer. <<</Question Generation>>> <<<Question Answering>>> We instantiate the answer distributions $p(A|Q,*)$ as extractive QA models, for simplicity. We use extractive QA because we assume the facts are represented as text spans in the article and summary. Future work should explore using abstractive QA models, which could match paraphrases of the same answer. <<</Question Answering>>> <<<Answer Similarity>>> We use token-level F1 to compare answers, which is standard for extractive QA and equivalent to defining $D$ as $D(a, a^{\prime }) = \frac{2\,|a \cap a^{\prime }|}{|a| + |a^{\prime }|}$, where the predicted answers $a$ and $a^{\prime }$ are treated as bags of tokens and $|a \cap a^{\prime }|$ counts their shared tokens. <<</Answer Similarity>>> <<<The QAGS Score>>> Given these components, we obtain the QAGS score of a generation by (1) generating $K$ questions conditioned on the summary, (2) answering the questions using both the source article and the summary to get two sets of answers, (3) comparing corresponding answers using the answer similarity metric, and (4) averaging the answer similarity metric over all questions. We depict this process in Figure FIGREF3. <<</The QAGS Score>>> <<</QAGS>>> <<<Experiments>>> <<<Human Evaluation>>> We test whether QAGS accurately measures the factual consistency of a summary with respect to a source article by computing correlations with human judgments of factual consistency. <<<Datasets>>> We evaluate on two abstractive summarization datasets, CNN/Daily Mail BIBREF0, BIBREF14 and XSUM BIBREF1. Abstractive summarization is particularly interesting because factual consistency with the original text is crucial to usability, and a lack of such consistency has plagued abstractive neural summarization models BIBREF15, BIBREF16, BIBREF5. CNN/DM is a standard dataset for summarization that consists of CNN and DailyMail articles. Each reference summary consists of the concatenation of three editor-written, bullet point highlights. For summaries, we use 235 test outputs from BIBREF17. XSUM was created by taking the first sentence of a news article as the summary, and using the rest of the article as the source. Consequently, XSUM summaries are significantly more abstractive than those of CNN/DM, and extractive summarization models perform poorly on this dataset. We found that while the XSUM summaries are more abstractive, frequently there are facts (e.g. first names) in the summary that are not available in the “article”. This quirk made it especially difficult for humans and QAGS to tell when factual errors were being made by the summarization model. To remedy this, for human evaluation and QAGS, we prepend the summary back to the “article”. We use a subset of 239 test outputs from BART fine-tuned on XSUM BIBREF2. <<</Datasets>>> <<<Annotation Protocol>>> We collect human judgments on Amazon Mechanical Turk via ParlAI BIBREF18. We present summaries one sentence at a time, along with the entire article. For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article. Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent. Workers are paid $1 per full summary annotated. See Appendix SECREF10 for further details. We collect 3 annotations per summary.
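Returning to the answer-similarity function defined above, a minimal token-level F1 in the style used for extractive QA can be sketched as follows. Whitespace tokenization is a simplifying assumption; SQuAD-style implementations typically also lower-case and strip punctuation and articles before comparing.

from collections import Counter

def token_f1(answer_from_article, answer_from_summary):
    # Bag-of-tokens overlap between the two predicted answers.
    a = Counter(answer_from_article.split())
    b = Counter(answer_from_summary.split())
    overlap = sum((a & b).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(b.values())
    recall = overlap / sum(a.values())
    return 2 * precision * recall / (precision + recall)

print(token_f1("the Eiffel Tower", "Eiffel Tower"))  # ~0.8

Averaging this score over the K generated questions yields the QAGS score described in the scoring procedure above.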
To obtain a single “correctness” score per summary, we first take the majority vote for each sentence, then average the binary scores across summary sentences. Inter-annotator agreement as measured by Krippendorff's $\alpha $ is 0.51 and 0.34 for CNN/DM and XSUM, respectively indicating “moderate” and “fair” agreement BIBREF19. While not perfect, these agreement numbers are in-line with similar figures from previous work on summarization evaluation BIBREF4. <<</Annotation Protocol>>> <<</Human Evaluation>>> <<<Experimental Details>>> <<<Baselines>>> We compare against a number of automatic evaluation metrics: ROUGE BIBREF8, METEOR BIBREF11, BLEU BIBREF10, and BERTScore BIBREF24. The latter uses BERT representations to compute an alignment between generation and reference tokens, and which is then used to compute a soft version of unigram F1. We use the large-uncased BERT variant. <<</Baselines>>> <<</Experimental Details>>> <<<Results>>> We present results in Table . QAGS strongly outperforms other automatic evaluation metrics in terms of correlation with human judgments of factual consistency. BLEU and ROUGE perform comparably, and lower order $n$-gram metrics work better. BERTScore matches the best $n$-gram metrics on CNN/DM, but the worst overall on XSUM. On CNN/DM, QAGS obtains nearly twice the correlation of the next best automatic metric (BLEU-1). We speculate that this large increase is due to the sensitivity of the QA model to the sentence fusing behavior exhibited in many summarization models trained on CNN/DM BIBREF25. When two sentences are fused to produce an incorrect summary statement, the QA model produces different answers than when using the source article versus when using the summary. On XSUM, all metrics correlate worse with human judgments than on CNN/DM, which reflects the fact that XSUM is more abstractive. QAGS still outperforms the next best automatic metric. <<</Results>>> <<<Ablations>>> A potential issue with model-based evaluation is that the quality of the evaluation metric may depend heavily on specific hyperparameter settings. We explore whether this is true with QAGS by performing ablations on several factors. <<<Model Quality>>> We first consider the degree to which the quality of the underlying models impacts their evaluation capabilities. For QA quality, we answer this question by training QA models of varying quality by fine-tuning different versions of BERT on SQuAD. We present results in Table . The QA models perform similarly despite substantially different performances on the SQuAD development set. Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments. On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large. On XSUM, bert-base slightly outperforms the other two BERT variants. These results indicate that QAGS is fairly robust to the quality of the underlying QA model, though we note that BERT is a strong QA baseline, and using weaker QA models might lead to larger performance dropoffs. To ablate QG quality, we use models with increasing perplexity on the NewsQA development set. Results in Table show that QAGS is robust to the QG model quality, with some decrease in correlation with human judgments as perplexity increases on CNN/DM, and no clear trend on XSUM. Even the weakest QG model still significantly outperforms all other automatic metrics in Table . <<</Model Quality>>> <<<Domain Effects>>> Our approach relies on having a labeled dataset to train QG and QA models. 
However, for relatively niche domains, such a labeled QA/QG dataset may not exist. Instead, we may need to resort to using models trained on out-of-domain data, leading to domain shift effects that negatively impact the quality of the QAGS scores. We simulate this setting by fine-tuning the QG model on SQuAD, which is of similar size to NewsQA but drawn from Wikipedia articles rather than CNN articles, which exactly matches the genre of the summarization datasets. Evaluating with this QG model, we get correlations of 51.53 and 15.28 with human judgments on CNN/DM and XSUM respectively, versus 54.53 and 17.49 when using the NewsQA-tuned QG model. The drop in performance indicates a negative domain shift effect. However using the SQuAD-tuned QG model still substantially outperforms all other automatic metrics, again pointing to the robustness of QAGS. <<</Domain Effects>>> <<<Number of Questions>>> Next, we investigate the correlation with human judgments when varying the number of questions used. Results in Table show that increasing the number of questions used improves correlations with human judgments. We observe a large increase when moving from 10 to 20 questions, and a smaller increase from 20 to 50 questions, indicating decreasing marginal benefit moving beyond 50 questions. With just 5 questions, QAGS still substantially outperforms other automatic metrics, indicating its robustness. <<</Number of Questions>>> <<<Answer Similarity Metric>>> Finally, we consider using exact match as an alternative answer similarity metric. Exact match is another common evaluation metric for extractive QA, and is more restrictive than F1. When using EM, we obtain Pearson correlations with human judgments of 45.97 and 18.10 on CNN/DM and XSUM, as opposed to 54.53 and 17.49 when using F1. <<</Answer Similarity Metric>>> <<</Ablations>>> <<</Experiments>>> <<<Re-ranking with QAGS>>> Several works explore the use of natural language inference (NLI) models to detect factual consistency in generated text BIBREF26, BIBREF16. We compare against these methods by evaluating on the sentence ranking experiment from BIBREF16. The experiment uses 373 triplets of source sentences from CNN/DM and two summary sentences generated from the model from BIBREF27. One summary sentence is factually consistent with the source sentence, and the other is inconsistent. A metric (or model) is evaluated based on how often it ranks the consistent sentence higher than the inconsistent sentence. We present the results in Table . Results using two NLI models fine-tuned on MultiNLI BIBREF28, BERT NLI and ESIM BIBREF29, are from BIBREF16. FactCC BIBREF5 is an NLI-based fact-checking model that is trained on a dataset tailor made for detecting factual inconsistencies in generated text. QAGS outperforms these methods, while requiring no special supervision for this task. <<</Re-ranking with QAGS>>> <<<Qualitative Analysis>>> <<<Interpreting QAGS>>> The questions and answers produced in computing QAGS are directly interpretable, and highlight errors in summaries. We present examples of articles, summaries, and the QAGS questions and answers in Table . On the first example (Table , top), QAGS detects several factual inconsistencies in the generated summary: The summary mistakes the first name of the attacker, the location of the attack, and the weapons used. Because the QG model focuses on these details, QAGS is able to correctly penalize the summary for its hallucinations. 
Because the answer candidates used are mostly named entities and noun phrases, QAGS is particularly effective at detecting errors of this kind. Using more diverse answer candidates may broaden the set of inconsistencies that QAGS is able to detect. The second example (Table , bottom), illustrates failure modes of QAGS. For example, the QA model incorrectly marks question 2 as unanswerable. On question 4, both answers produced are correct, but because they have no common tokens, they are marked inconsistent by QAGS. <<</Interpreting QAGS>>> <<<Error Analysis>>> The interpretability of QAGS allows for error analysis on the metric. We manually annotate 400 triplets of generated questions, article answers, and summary answers that are produced in computing QAGS on the XSUM summaries, and label them by the quality of the generated questions, predicted answers, and answer similarity scores. Among the generated questions, 8.75% are nonsensical, while 3.00% are well-formed but unanswerable using the generated summary they were conditioned upon. These figures indicate that the vast majority of questions are understandable and on-topic. We frequently observe multiple questions with slightly different wordings, which is likely due to the low number of answer candidates in XSUM summaries (which are one sentence long) and due to beam search. 8.25% of questions are well-formed but unanswerable using the source, which is usually due to a hallucinated fact in the summary that the QG model turns into a question. Among predicted answers, 1.75% of questions are potentially answerable using the summary, but are incorrectly answered. This percentage increases to 32.50% for the article, which indicates that the transfer ability of the QA model is lacking. In a small number of cases, we found that while a question had a single answer in the summary, it could have multiple answers in the article. Finally, for 8.00% of the examples, the question is answered correctly using both the article and summary, but the answers have high lexical variation such that F1 score fails to detect their similarity. While this happens in a relatively small number of cases, exploring similarity metrics other than $n$-gram based approaches could be useful. <<</Error Analysis>>> <<<Limitations>>> We emphasize that QAGS and our overall framework are specifically designed to detect factual inconsistencies in generated summaries relative to the source article. QAGS does not measure other desirable properties of generated text, including fluency, readability, or factual recall. We therefore recommend using QAGS in conjunction with complementary evaluation metrics. The choices of QG and QA models in QAGS are particular to abstractive summarization and may require adaptation to be used for other conditional text generation tasks. For example, we expect that extractive summarization models may obtain nearly perfect QAGS scores because facts and statements are directly copied from the source article. <<</Limitations>>> <<</Qualitative Analysis>>> <<<Related Work>>> Automatic summarization and its evaluation are long-standing lines of work in NLP, dating at least as far back as the Document Understanding Conferences BIBREF30. The primary evaluation metric then and now is ROUGE BIBREF8, though much work has demonstrated the limited ability of ROUGE and its relatives to evaluate summaries BIBREF31, BIBREF32, BIBREF33. 
Other metrics have focused on specific aspects of summarization quality, including content selection BIBREF34, relevance prediction BIBREF4, and many more. There has been a recent resurgence of work leveraging NLU models for evaluating the factuality of generated text. BIBREF35 use information extraction models to measure factual overlap, but facts are restricted to pre-defined schemas. BIBREF16 investigate the use of NLI models to evaluate the factual correctness of CNN/DM summaries, and conclude that current NLI models are too brittle to be reliably used in this manner. BIBREF5 train a NLI-based fact-checking model by building a dataset of factual inconsistencies based on noise heuristic. Our QA approach allows a finer-grained analysis, because NLI operates on complete sentences, whereas QAGS can ask many questions about the same sentence. Most relatedly, BIBREF36 and BIBREF37 use QA models to evaluate summarization. We diverge from these works in two important ways. First, both works use Cloze-style questions, which are generated by masking entities in either the source document or the reference summary. We instead generate the questions with a model, allowing a much greater range of questions. Second, we produce questions conditioned on the generated summary, rather than the reference summary or source article. Producing questions from the generated summary is more appropriate for verifying the accuracy of the text, whereas using the reference or source measures content selection. <<</Related Work>>> <<<Conclusion>>> We introduce a framework for automatically detecting factual inconsistencies in conditionally generated texts and use this framework to develop QAGS, a metric for measuring inconsistencies in abstractive summarization. QAGS correlates with human judgments of factuality significantly better than standard automatic evaluation metrics for summarization, and outperforms related NLI-based approaches to factual consistency checking. QAGS is naturally interpretable: The questions and answers produced in computing QAGS indicate which tokens in a generated summary are inconsistent and why. Error analysis shows that future work should explore improved QA models. Our approach can also be applied to diverse modalities, such as translation and image captioning. Overall, we believe QAGS is useful in quantifying and incentivizing factually consistent text generation. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Background: Automatically Evaluating Machine Generated Text" ], "type": "disordered_section" }
{ "references": [ "Conclusion, Re-ranking with QAGS" ], "type": "disordered_section" }
1909.00161
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach <<<Abstract>>> Zero-shot text classification (0Shot-TC) is a challenging NLU problem to which little attention has been paid by the research community. 0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, event, etc.) described by the label. And there are only a few articles studying 0Shot-TC, all focusing only on topical categorization which, we argue, is just the tip of the iceberg in 0Shot-TC. In addition, the chaotic experiments in literature make no uniform comparison, which blurs the progress. ::: This work benchmarks the 0Shot-TC problem by providing unified datasets, standardized evaluations, and state-of-the-art baselines. Our contributions include: i) The datasets we provide facilitate studying 0Shot-TC relative to conceptually different and diverse aspects: the ``topic'' aspect includes ``sports'' and ``politics'' as labels; the ``emotion'' aspect includes ``joy'' and ``anger''; the ``situation'' aspect includes ``medical assistance'' and ``water shortage''. ii) We extend the existing evaluation setup (label-partially-unseen) -- given a dataset, train on some labels, test on all labels -- to include a more challenging yet realistic evaluation label-fully-unseen 0Shot-TC (Chang et al., 2008), aiming at classifying text snippets without seeing task specific training data at all. iii) We unify the 0Shot-TC of diverse aspects within a textual entailment formulation and study it this way. ::: Code & Data: this https URL <<</Abstract>>> <<<Introduction>>> Supervised text classification has achieved great success in the past decades due to the availability of rich training data and deep learning techniques. However, zero-shot text classification ($\textsc {0shot-tc}$) has attracted little attention despite its great potential in real world applications, e.g., the intent recognition of bank consumers. $\textsc {0shot-tc}$ is challenging because we often have to deal with classes that are compound, ultra-fine-grained, changing over time, and from different aspects such as topic, emotion, etc. Existing $\textsc {0shot-tc}$ studies have mainly the following three problems. <<<First problem.>>> The $\textsc {0shot-tc}$ problem was modeled in a too restrictive vision. Firstly, most work only explored a single task, which was mainly topic categorization, e.g., BIBREF1, BIBREF2, BIBREF3. We argue that this is only the tiny tip of the iceberg for $\textsc {0shot-tc}$. Secondly, there is often a precondition that a part of classes are seen and their labeled instances are available to train a model, as we define here as Definition-Restrictive: Definition-Restrictive ($\textsc {0shot-tc}$). Given labeled instances belonging to a set of seen classes $S$, $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where $Y=S\cup U$; $U$ is a set of unseen classes and belongs to the same aspect as $S$. In this work, we formulate the $\textsc {0shot-tc}$ in a broader vision. 
As Figure FIGREF2 demonstrates, a piece of text can be assigned labels which interpret the text in different aspects, such as the “topic” aspect, the “emotion” aspect, or the “situation” aspect described in the text. Different aspects, therefore, differ in interpreting the text. For instance, by “topic”, it means “this text is about {health, finance $\cdots $}”; by “emotion”, it means “this text expresses a sense of {joy, anger, $\cdots $}”; by “situation”, it means “the people there need {shelter, medical assistance, $\cdots $}”. Figure FIGREF2 also shows another essential property of $\textsc {0shot-tc}$ – the applicable label space for a piece of text has no boundary, e.g., “this text is news”, “the situation described in this text is serious”, etc. Therefore, we argue that we have to emphasize a more challenging scenario to satisfy the real-world problems: seeing no labels, no label-specific training data. Here is our new $\textsc {0shot-tc}$ definition: Definition-Wild ($\textsc {0shot-tc}$). $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where classifier $f(\cdot )$ never sees $Y$-specific labeled data in its model development. <<</First problem.>>> <<<Second problem.>>> Usually, conventional text classification denotes labels as indices {0,1,2, $\cdots $, $n$} without understanding neither the aspect's specific interpretation nor the meaning of the labels. This does not apply to $\textsc {0shot-tc}$ as we can not pre-define the size of the label space anymore, and we can not presume the availability of labeled data. Humans can easily decide the truth value of any upcoming labels because humans can interpret those aspects correctly and understand the meaning of those labels. The ultimate goal of $\textsc {0shot-tc}$ should be to develop machines to catch up with humans in this capability. To this end, making sure the system can understand the described aspect and the label meanings plays a key role. <<</Second problem.>>> <<<Third problem.>>> Prior work is mostly evaluated on different datasets and adopted different evaluation setups, which makes it hard to compare them fairly. For example, DBLPRiosK18 work on medical data while reporting R@K as metric; DBLPXiaZYCY18 work on SNIPS-NLU intent detection data while only unseen intents are in the label-searching space in evaluation. In this work, we benchmark the datasets and evaluation setups of $\textsc {0shot-tc}$. Furthermore, we propose a textual entailment approach to handle the $\textsc {0shot-tc}$ problem of diverse aspects in a unified paradigm. To be specific, we contribute in the following three aspects: <<</Third problem.>>> <<<Dataset.>>> We provide datasets for studying three aspects of $\textsc {0shot-tc}$: topic categorization, emotion detection, and situation frame detection – an event level recognition problem. For each dataset, we have standard split for train, dev, and test, and standard separation of seen and unseen classes. <<</Dataset.>>> <<<Evaluation.>>> Our standardized evaluations correspond to the Definition-Restrictive and Definition-Wild. i) Label-partially-unseen evaluation. This corresponds to the commonly studied $\textsc {0shot-tc}$ defined in Definition-Restrictive: for the set of labels of a specific aspect, given training data for a part of labels, predicting in the full label set. This is the most basic setup in $\textsc {0shot-tc}$. It checks whether the system can generalize to some labels in the same aspect. 
To satisfy Definition-Wild, we define a new evaluation: ii) Label-fully-unseen evaluation. In this setup, we assume the system is unaware of the upcoming aspects and can not access any labeled data for task-specific training. <<</Evaluation.>>> <<<Entailment approach.>>> Our Definition-Wild challenges the system design – how to develop a $\textsc {0shot-tc}$ system, without accessing any task-specific labeled data, to deal with labels from diverse aspects? In this work, we propose to treat $\textsc {0shot-tc}$ as a textual entailment problem. This is to imitate how humans decide the truth value of labels from any aspects. Usually, humans understand the problem described by the aspect and the meaning of the label candidates. Then humans mentally construct a hypothesis by filling a label candidate, e.g., “sports”, into the aspect-defined problem “the text is about $\underline{?}$”, and ask ourselves if this hypothesis is true, given the text. We treat $\textsc {0shot-tc}$ as a textual entailment problem so that our model can gain knowledge from entailment datasets, and we show that it applies to both Definition-Restrictive and Definition-Wild. Overall, this work aims at benchmarking the research of $\textsc {0shot-tc}$ by providing standardized datasets, evaluations, and a state-of-the-art entailment system. All datasets and codes are released. <<</Entailment approach.>>> <<</Introduction>>> <<<Related Work>>> $\textsc {Zero-stc}$ was first explored by the paradigm “Dataless Classification” BIBREF0. Dataless classification first maps the text and labels into a common space by Explicit Semantic Analysis (ESA) BIBREF4, then picks the label with the highest matching score. Dataless classification emphasizes that the representation of labels takes the equally crucial role as the representation learning of text. Then this idea was further developed in BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. With the prevalence of word embeddings, more and more work adopts pretrained word embeddings to represent the meaning of words, so as to provide the models with the knowledge of labels BIBREF10, BIBREF2, BIBREF11, BIBREF12. DBLPYogatamaDLB17 build generative LSTM to generate text given the embedded labels. DBLPRiosK18 use label embedding to attend the text representation in the developing of a multi-label classifier. But they report R@K, so it is unclear whether the system can really predict unseen labels. DBLPXiaZYCY18 study the zero-shot intent detection problem. The learned representations of intents are still the sum of word embeddings. But during testing, the intent space includes only new intents; seen intents are not covered. All of these studies can only meet the definition in Definition-Restrictive, so they do not really generalize to open aspects of $\textsc {0shot-tc}$. JiangqngGuo enrich the embedding representations by incorporating class descriptions, class hierarchy, and the word-to-label paths in ConceptNet. DBLPMitchellSL18 assume that some natural language explanations about new labels are available. Then those explanations are parsed into formal constraints which are further combined with unlabeled data to yield new label oriented classifiers through posterior regularization. However, those explanatory statements about new labels are collected from crowd-sourcing. This limits its application in real world $\textsc {0shot-tc}$ scenarios. There are a few works that study a specific zero-shot problem by indirect supervision from other problems. 
DBLPLevySCZ17 and obamuyide2018zero study zero-shot relation extraction by converting it into a machine comprehension and textual entailment problem respectively. Then, a supervised system pretrained on an existing machine comprehension dataset or textual entailment dataset is used to do inference. Our work studies the $\textsc {0shot-tc}$ by formulating a broader vision: datasets of multiple apsects and evaluations. Other zero-shot problems studied in NLP involve entity typing BIBREF13, sequence labeling BIBREF14, etc. <<</Related Work>>> <<<Benchmark the dataset>>> In this work, we standardize the datasets for $\textsc {0shot-tc}$ for three aspects: topic detection, emotion detection, and situation detection. For each dataset, we insist on two principles: i) Label-partially-unseen: A part of labels are unseen. This corresponds to Definition-Restrictive, enabling us to check the performance of unseen labels as well as seen labels. ii) Label-fully-unseen: All labels are unseen. This corresponds to Definition-Wild, enabling us to check the system performance in test-agnostic setups. <<<Topic detection>>> <<<Yahoo.>>> We use the large-scale Yahoo dataset released by DBLPZhangZL15. Yahoo has 10 classes: {“Society & Culture”, “Science & Mathematics”, “Health”, “Education & Reference”, “Computers & Internet”, “Sports”, “Business & Finance”, “Entertainment & Music”, “Family & Relationships”, “Politics & Government”}, with original split: 1.4M/60k in train/test (all labels are balanced distributed). We reorganize the dataset by first fixing the dev and test sets as follows: for dev, all 10 labels are included, with 6k labeled instances for each; For test, all 10 labels are included, with 10k instances for each. Then training sets are created on remaining instances as follows. For label-partially-unseen, we create two versions of Yahoo train for $\textsc {0shot-tc}$: Train-v0: 5 classes: {“Society & Culture”, “Health”, “Computers & Internet”, “Business & Finance”, “Family & Relationships”} are included; each is equipped with 130k labeled instances. Train-v1: 5 classes: { “Science & Mathematics”, “Education & Reference”, “Sports”, “Entertainment & Music”, “Politics & Government”} are included; each is equipped with 130k labeled instances. We always create two versions of train with non-overlapping labels so as to get rid of the model's over-fitting on one of them. Label-fully-unseen share the same test and dev with the label-partially-unseen except that it has no training set. It is worth mentioning that our setup of label-partially-unseen and label-fully-unseen enables us to compare the performance mutually; it can show the system's capabilities while seeing different sizes of classes. <<</Yahoo.>>> <<</Topic detection>>> <<<Emotion detection>>> <<<UnifyEmotion.>>> This emotion dataset was released by DBLPBostanK18. It was constructed by unifying the emotion labels of multiple public emotion datasets. This dataset consists of text from multiple domains: tweet, emotional events, fairy tale and artificial sentences, and it contains 9 emotion types {“sadness”, “joy”, “anger”, “disgust”, “fear”, “surprise”, “shame”, “guilt”, “love”} and “none” (if no emotion applies). We remove the multi-label instances (appro. 4k) so that the remaining instances always have a single positive label. The official evaluation metric is label-weighted F1. Since the labels in this dataset has unbalanced distribution. 
We first directly list the fixed $\emph {test}$ and $\emph {dev}$ in Table TABREF9 and Table TABREF10, respectively. They are shared by following label-partial-unseen and label-fully-unseen setups of train. Label-partial-unseen has the following two versions of train: Train-v0: 5 classes: {“sadness”, “anger”, “fear”, “shame”, “love”} are included. Train-v1: 4 classes: { “joy”, “disgust”, “surprise”, “guilt”} are included. For label-fully-unseen, no training set is provided. <<</UnifyEmotion.>>> <<</Emotion detection>>> <<<Situation detection>>> The situation frame typing is one example of an event-type classification task. A situation frame studied here is a need situation such as the need for water or medical aid, or an issue situation such as crime violence BIBREF16, BIBREF17. It was originally designed for low-resource situation detection, where annotated data is unavailable. This is why it is particularly suitable for $\textsc {0shot-tc}$. We use the Situation Typing dataset released by mayhewuniversity. It has 5,956 labeled instances. Totally 11 situation types: “food supply”, “infrastructure”, “medical assistance”, “search/rescue”, “shelter”, “utilities, energy, or sanitation”, “water supply”, “evacuation”, “regime change”, “terrisms”, “crime violence” and an extra type “none” – if none of the 11 types applies. This dataset is a multi-label classification, and label-wise weighted F1 is the official evaluation. The train, test and dev are listed in Table TABREF22. <<<Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>> Our three datasets covers single-label classification (i.e., “topic” and “emotion”) and multi-label classification (i.e., “situation”). In addition, a “none” type is adopted in “emotion” and “situation” tasks if no predefined types apply – this makes the problem more realistic. <<</Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>> <<</Situation detection>>> <<</Benchmark the dataset>>> <<<Benchmark the evaluation>>> How to evaluate a $\textsc {0shot-tc}$ system? This needs to review the original motivation of doing $\textsc {0shot-tc}$ research. As we discussed in Introduction section, ideally, we aim to build a system that works like humans – figuring out if a piece of text can be assigned with an open-defined label, without any constrains on the domains and the aspects described by the labels. Therefore, we challenge the system in two setups: label-partially-unseen and label-fully-unseen. <<<Label-partially-unseen.>>> This is the most common setup in existing $\textsc {0shot-tc}$ literature: for a given dataset of a specific problem such as topic categorization, emotion detection, etc, train a system on a part of the labels, then test on the whole label space. Usually all labels describe the same aspect of the text. <<</Label-partially-unseen.>>> <<<Label-fully-unseen.>>> In this setup, we push “zero-shot” to the extreme – no annotated data for any labels. So, we imagine that learning a system through whatever approaches, then testing it on $\textsc {0shot-tc}$ datasets of open aspects. This label-fully-unseen setup is more like the dataless learning principle BIBREF0, in which no task-specific annotated data is provided for training a model (since usually this kind of model fails to generalize in other domains and other tasks), therefore, we are encouraged to learn models with open-data or test-agnostic data. In this way, the learned models behave more like humans. 
<<</Label-fully-unseen.>>> <<</Benchmark the evaluation>>> <<<An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>> As one contribution of this work, we propose to deal with $\textsc {0shot-tc}$ as a textual entailment problem. It is inspired by: i) text classification is essentially a textual entailment problem. Let us think about how humans do classification: we mentally think “whether this text is about sport?”, or “whether this text expresses a specific feeling?”, or “whether the people there need water supply?” and so on. The reason that conventional text classification did not employ entailment approach is it always has pre-defined, fixed-size of classes equipped with annotated data. However, in $\textsc {0shot-tc}$, we can neither estimate how many and what classes will be handled nor have annotated data to train class-specific parameters. Textual entailment, instead, does not preordain the boundary of the hypothesis space. ii) To pursue the ideal generalization of classifiers, we definitely need to make sure that the classifiers understand the problem encoded in the aspects and understand the meaning of labels. Conventional supervised classifiers fail in this aspect since label names are converted into indices – this means the classifiers do not really understand the labels, let alone the problem. Therefore, exploring $\textsc {0shot-tc}$ as a textual entailment paradigm is a reasonable way to achieve generalization. <<<Convert labels into hypotheses.>>> The first step of dealing with $\textsc {0shot-tc}$ as an entailment problem is to convert labels into hypotheses. To this end, we first convert each aspect into an interpretation (we discussed before that generally one aspect defines one interpretation). E.g., “topic” aspect to interpretation “the text is about the topic”. Table TABREF24 lists some examples for the three aspects: “topic”, “emotion” and “situation”. In this work, we just explored two simple methods to generate the hypotheses. As Table TABREF24 shows, one is to use the label name to complete the interpretation, the other is to use the label's definition in WordNet to complete the interpretation. In testing, once one of them results in an “entailment” decision, then we decide the corresponding label is positive. We can definitely create more natural hypotheses through crowd-sourcing, such as “food” into “the people there are starving”. Here we just set the baseline examples by automatic approaches, more explorations are left as future work, and we welcome the community to contribute. <<</Convert labels into hypotheses.>>> <<<Convert classification data into entailment data.>>> For a data split (train, dev and test), each input text, acting as the premise, has a positive hypothesis corresponding to the positive label, and all negative labels in the data split provide negative hypotheses. Note that unseen labels do not provide negative hypotheses for instances in train. <<</Convert classification data into entailment data.>>> <<<Entailment model learning.>>> In this work, we make use of the widely-recognized state of the art entailment technique – BERT BIBREF18, and train it on three mainstream entailment datasets: MNLI BIBREF19, GLUE RTE BIBREF20, BIBREF21 and FEVER BIBREF22, respectively. We convert all datasets into binary case: “entailment” vs. “non-entailment”, by changing the label “neutral” (if exist in some datasets) into “non-entailment”. 
For our label-fully-unseen setup, we directly apply this pretrained entailment model on the test sets of all $\textsc {0shot-tc}$ aspects. For label-partially-unseen setup in which we intentionally provide annotated data, we first pretrain BERT on the MNLI/FEVER/RTE, then fine-tune on the provided training data. <<</Entailment model learning.>>> <<<Harsh policy in testing.>>> Since seen labels have annotated data for training, we adopt different policies to pick up seen and unseen labels. To be specific, we pick a seen label with a harsher rule: i) In single-label classification, if both seen and unseen labels are predicted as positive, we pick the seen label only if its probability of being positive is higher than that of the unseen label by a hyperparameter $\alpha $. If only seen or unseen labels are predicted as positive, we pick the one with the highest probability; ii) In multi-label classification, if both seen and unseen labels are predicted as positive, we change the seen labels into “negative” if their probability of being positive is higher than that of the unseen label by less than $\alpha $. Finally, all labels labeled positive will be selected. If no positive labels, we choose “none” type. $\alpha $ = 0.05 in our systems, tuned on dev. <<</Harsh policy in testing.>>> <<</An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>> <<<Experiments>>> <<<Label-partially-unseen evaluation>>> In this setup, there is annotated data for partial labels as train. So, we report performance for unseen classes as well as seen classes. We compare our entailment approaches, trained separately on MNLI, FEVER and RTE, with the following baselines. <<<Baselines.>>> Majority: the text picks the label of the largest size. ESA: A dataless classifier proposed in BIBREF0. It maps the words (in text and label names) into the title space of Wikipedia articles, then compares the text with label names. This method does not rely on train. We implemented ESA based on 08/01/2019 Wikipedia dump. There are about 6.1M words and 5.9M articles. Word2Vec BIBREF23: Both the representations of the text and the labels are the addition of word embeddings element-wisely. Then cosine similarity determines the labels. This method does not rely on train either. Binary-BERT: We fine-tune BERT on train, which will yield a binary classifier for entailment or not; then we test it on test – picking the label with the maximal probability in single-label scenarios while choosing all the labels with “entailment” decision in multi-label cases. <<</Baselines.>>> <<<Discussion.>>> The results of label-partially-unseen are listed in Table TABREF30. “ESA” performs slightly worse than “Word2Vec” in topic detection, mainly because the label names, i.e., topics such as “sports”, are closer than some keywords such as “basketball” in Word2Vec space. However, “ESA” is clearly better than “Word2Vec” in situation detection; this should be mainly due to the fact that the label names (e.g., “shelter”, “evaculation”, etc.) can hardly find close words in the text by Word2Vec embeddings. Quite the contrary, “ESA” is easier to make a class such as “shelter” closer to some keywords like “earthquake”. Unfortunately, both Word2Vec and ESA work poorly for emotion detection problem. We suspect that emotion detection requires more entailment capability. 
For example, the text snippet “when my brother was very late in arriving home from work”, its gold emotion “fear” requires some common-knowledge inference, rather than just word semantic matching through Word2Vec and ESA. The supervised method “Binary-BERT” is indeed strong in learning the seen-label-specific models – this is why it predicts very well for seen classes while performing much worse for unseen classes. Our entailment models, especially the one pretrained on MNLI, generally get competitive performance with the “Binary-BERT” for seen (slightly worse on “topic” and “emotion” while clearly better on “situation”) and improve the performance regarding unseen by large margins. At this stage, fine-tuning on an MNLI-based pretrained entailment model seems more powerful. <<</Discussion.>>> <<</Label-partially-unseen evaluation>>> <<<Label-fully-unseen evaluation>>> Regarding this label-fully-unseen evaluation, apart from our entailment models and three unsupervised baselines “Majority”, “Word2Vec” and “ESA”, we also report the following baseline: Wikipedia-based: We train a binary classifier based on BERT on a dataset collected from Wikipedia. Wikipedia is a corpus of general purpose, without targeting any specific $\textsc {0shot-tc}$ task. Collecting categorized articles from Wikipedia is popular way of creating training data for text categorization, such as BIBREF13. More specifically, we collected 100K articles along with their categories in the bottom of each article. For each article, apart from its attached positive categories, we randomly sample three negative categories. Then each article and its positive/negative categories act as training pairs for the binary classifier. We notice “Wikipedia-based” training indeed contributes a lot for the topic detection task; however, its performances on emotion and situation detection problems are far from satisfactory. We believe this is mainly because the Yahoo-based topic categorization task is much closer to the Wikipedia-based topic categorization task; emotion and situation categorizations, however, are relatively further. Our entailment models, pretrained on MNLI/FEVER/RTE respectively, perform more robust on the three $\textsc {0shot-tc}$ aspects (except for the RTE on emotion). Recall that they are not trained on any text classification data, and never know the domain and the aspects in the test. This clearly shows the great promise of developing textual entailment models for $\textsc {0shot-tc}$. Our ensemble approach further boosts the performances on all three tasks. An interesting phenomenon, comparing the label-partially-unseen results in Table TABREF30 and the label-fully-unseen results in Table TABREF32, is that the pretrained entailment models work in this order for label-fully-unseen case: RTE $>$ FEVER $>$MNLI; on the contrary, if we fine-tune them on the label-partially-unseen case, the MNLI-based model performs best. This could be due to a possibility that, on one hand, the constructed situation entailment dataset is closer to the RTE dataset than to the MNLI dataset, so an RTE-based model can generalize well to situation data, but, on the other hand, it could also be more likely to over-fit the training set of “situation” during fine-tuning. A deeper exploration of this is left as future work. <<</Label-fully-unseen evaluation>>> <<<How do the generated hypotheses influence>>> In Table TABREF24, we listed examples for converting class names into hypotheses. 
In this work, we only tried to make use of the class names and their definitions in WordNet. Table TABREF33 lists the fine-grained performance of three ways of generating hypotheses: “word”, “definition”, and “combination” (i.e., word&definition). This table indicates that: i) Definition alone usually does not work well in any of the three tasks, no matter which pretrained entailment model is used; ii) Whether “word” alone or “word&definition” works better depends on the specific task and the pretrained entailment model. For example, the pretrained MNLI model prefers “word&definition” in both “emotion” and “situation” detection tasks. However, the other two entailment models (RTE and FEVER) mostly prefer “word”. iii) Since it is unrealistic to adopt only one entailment model, such as from {RTE, FEVER, MNLI}, for any open $\textsc {0shot-tc}$ problem, an ensemble system should be preferred. However, the concrete implementation of the ensemble system also influences the strengths of different hypothesis generation approaches. In this work, our ensemble method reaches the top performance when combining the “word” and “definition”. More ensemble systems and hypothesis generation paradigms need to be studied in the future. To better understand the impact of generated hypotheses, we dive into the performance of each labels, taking “situation detection” as an example. Figure FIGREF47 illustrates the separate F1 scores for each situation class, predicted by the ensemble model for label-fully-unseen setup. This enables us to check in detail how easily the constructed hypotheses can be understood by the entailment model. Unfortunately, some classes are still challenging, such as “evacuation”, “infrastructure”, and “regime change”. This should be attributed to their over-abstract meaning. Some classes were well recognized, such as “water”, “shelter”, and “food”. One reason is that these labels mostly are common words – systems can more easily match them to the text; the other reason is that they are situation classes with higher frequencies (refer to Table TABREF22) – this is reasonable based on our common knowledge about disasters. <<</How do the generated hypotheses influence>>> <<</Experiments>>> <<<Summary>>> In this work, we analyzed the problems of existing research on zero-shot text classification ($\textsc {0shot-tc}$): restrictive problem definition, the weakness in understanding the problem and the labels' meaning, and the chaos of datasets and evaluation setups. Therefore, we are benchmarking $\textsc {0shot-tc}$ by standardizing the datasets and evaluations. More importantly, to tackle the broader-defined $\textsc {0shot-tc}$, we proposed a textual entailment framework which can work with or without the annotated data of seen labels. <<</Summary>>> <<<Acknowledgments>>> The authors would like to thank Jennifer Sheffield and the anonymous reviewers for insightful comments and suggestions. This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. <<</Acknowledgments>>> <<</Title>>>
{ "references": [ "Related Work, Introduction" ], "type": "disordered_section" }
1909.00161
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach <<<Abstract>>> Zero-shot text classification (0Shot-TC) is a challenging NLU problem to which little attention has been paid by the research community. 0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, event, etc.) described by the label. And there are only a few articles studying 0Shot-TC, all focusing only on topical categorization which, we argue, is just the tip of the iceberg in 0Shot-TC. In addition, the chaotic experiments in literature make no uniform comparison, which blurs the progress. ::: This work benchmarks the 0Shot-TC problem by providing unified datasets, standardized evaluations, and state-of-the-art baselines. Our contributions include: i) The datasets we provide facilitate studying 0Shot-TC relative to conceptually different and diverse aspects: the ``topic'' aspect includes ``sports'' and ``politics'' as labels; the ``emotion'' aspect includes ``joy'' and ``anger''; the ``situation'' aspect includes ``medical assistance'' and ``water shortage''. ii) We extend the existing evaluation setup (label-partially-unseen) -- given a dataset, train on some labels, test on all labels -- to include a more challenging yet realistic evaluation label-fully-unseen 0Shot-TC (Chang et al., 2008), aiming at classifying text snippets without seeing task specific training data at all. iii) We unify the 0Shot-TC of diverse aspects within a textual entailment formulation and study it this way. ::: Code & Data: this https URL <<</Abstract>>> <<<Introduction>>> Supervised text classification has achieved great success in the past decades due to the availability of rich training data and deep learning techniques. However, zero-shot text classification ($\textsc {0shot-tc}$) has attracted little attention despite its great potential in real world applications, e.g., the intent recognition of bank consumers. $\textsc {0shot-tc}$ is challenging because we often have to deal with classes that are compound, ultra-fine-grained, changing over time, and from different aspects such as topic, emotion, etc. Existing $\textsc {0shot-tc}$ studies have mainly the following three problems. <<<First problem.>>> The $\textsc {0shot-tc}$ problem was modeled in a too restrictive vision. Firstly, most work only explored a single task, which was mainly topic categorization, e.g., BIBREF1, BIBREF2, BIBREF3. We argue that this is only the tiny tip of the iceberg for $\textsc {0shot-tc}$. Secondly, there is often a precondition that a part of classes are seen and their labeled instances are available to train a model, as we define here as Definition-Restrictive: Definition-Restrictive ($\textsc {0shot-tc}$). Given labeled instances belonging to a set of seen classes $S$, $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where $Y=S\cup U$; $U$ is a set of unseen classes and belongs to the same aspect as $S$. In this work, we formulate the $\textsc {0shot-tc}$ in a broader vision. 
As Figure FIGREF2 demonstrates, a piece of text can be assigned labels which interpret the text in different aspects, such as the “topic” aspect, the “emotion” aspect, or the “situation” aspect described in the text. Different aspects, therefore, differ in interpreting the text. For instance, by “topic”, it means “this text is about {health, finance $\cdots $}”; by “emotion”, it means “this text expresses a sense of {joy, anger, $\cdots $}”; by “situation”, it means “the people there need {shelter, medical assistance, $\cdots $}”. Figure FIGREF2 also shows another essential property of $\textsc {0shot-tc}$ – the applicable label space for a piece of text has no boundary, e.g., “this text is news”, “the situation described in this text is serious”, etc. Therefore, we argue that we have to emphasize a more challenging scenario to satisfy the real-world problems: seeing no labels, no label-specific training data. Here is our new $\textsc {0shot-tc}$ definition: Definition-Wild ($\textsc {0shot-tc}$). $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where classifier $f(\cdot )$ never sees $Y$-specific labeled data in its model development. <<</First problem.>>> <<<Second problem.>>> Usually, conventional text classification denotes labels as indices {0,1,2, $\cdots $, $n$} without understanding neither the aspect's specific interpretation nor the meaning of the labels. This does not apply to $\textsc {0shot-tc}$ as we can not pre-define the size of the label space anymore, and we can not presume the availability of labeled data. Humans can easily decide the truth value of any upcoming labels because humans can interpret those aspects correctly and understand the meaning of those labels. The ultimate goal of $\textsc {0shot-tc}$ should be to develop machines to catch up with humans in this capability. To this end, making sure the system can understand the described aspect and the label meanings plays a key role. <<</Second problem.>>> <<<Third problem.>>> Prior work is mostly evaluated on different datasets and adopted different evaluation setups, which makes it hard to compare them fairly. For example, DBLPRiosK18 work on medical data while reporting R@K as metric; DBLPXiaZYCY18 work on SNIPS-NLU intent detection data while only unseen intents are in the label-searching space in evaluation. In this work, we benchmark the datasets and evaluation setups of $\textsc {0shot-tc}$. Furthermore, we propose a textual entailment approach to handle the $\textsc {0shot-tc}$ problem of diverse aspects in a unified paradigm. To be specific, we contribute in the following three aspects: <<</Third problem.>>> <<<Dataset.>>> We provide datasets for studying three aspects of $\textsc {0shot-tc}$: topic categorization, emotion detection, and situation frame detection – an event level recognition problem. For each dataset, we have standard split for train, dev, and test, and standard separation of seen and unseen classes. <<</Dataset.>>> <<<Evaluation.>>> Our standardized evaluations correspond to the Definition-Restrictive and Definition-Wild. i) Label-partially-unseen evaluation. This corresponds to the commonly studied $\textsc {0shot-tc}$ defined in Definition-Restrictive: for the set of labels of a specific aspect, given training data for a part of labels, predicting in the full label set. This is the most basic setup in $\textsc {0shot-tc}$. It checks whether the system can generalize to some labels in the same aspect. 
To satisfy Definition-Wild, we define a new evaluation: ii) Label-fully-unseen evaluation. In this setup, we assume the system is unaware of the upcoming aspects and can not access any labeled data for task-specific training. <<</Evaluation.>>> <<<Entailment approach.>>> Our Definition-Wild challenges the system design – how to develop a $\textsc {0shot-tc}$ system, without accessing any task-specific labeled data, to deal with labels from diverse aspects? In this work, we propose to treat $\textsc {0shot-tc}$ as a textual entailment problem. This is to imitate how humans decide the truth value of labels from any aspects. Usually, humans understand the problem described by the aspect and the meaning of the label candidates. Then humans mentally construct a hypothesis by filling a label candidate, e.g., “sports”, into the aspect-defined problem “the text is about $\underline{?}$”, and ask ourselves if this hypothesis is true, given the text. We treat $\textsc {0shot-tc}$ as a textual entailment problem so that our model can gain knowledge from entailment datasets, and we show that it applies to both Definition-Restrictive and Definition-Wild. Overall, this work aims at benchmarking the research of $\textsc {0shot-tc}$ by providing standardized datasets, evaluations, and a state-of-the-art entailment system. All datasets and codes are released. <<</Entailment approach.>>> <<</Introduction>>> <<<Related Work>>> $\textsc {Zero-stc}$ was first explored by the paradigm “Dataless Classification” BIBREF0. Dataless classification first maps the text and labels into a common space by Explicit Semantic Analysis (ESA) BIBREF4, then picks the label with the highest matching score. Dataless classification emphasizes that the representation of labels takes the equally crucial role as the representation learning of text. Then this idea was further developed in BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. With the prevalence of word embeddings, more and more work adopts pretrained word embeddings to represent the meaning of words, so as to provide the models with the knowledge of labels BIBREF10, BIBREF2, BIBREF11, BIBREF12. DBLPYogatamaDLB17 build generative LSTM to generate text given the embedded labels. DBLPRiosK18 use label embedding to attend the text representation in the developing of a multi-label classifier. But they report R@K, so it is unclear whether the system can really predict unseen labels. DBLPXiaZYCY18 study the zero-shot intent detection problem. The learned representations of intents are still the sum of word embeddings. But during testing, the intent space includes only new intents; seen intents are not covered. All of these studies can only meet the definition in Definition-Restrictive, so they do not really generalize to open aspects of $\textsc {0shot-tc}$. JiangqngGuo enrich the embedding representations by incorporating class descriptions, class hierarchy, and the word-to-label paths in ConceptNet. DBLPMitchellSL18 assume that some natural language explanations about new labels are available. Then those explanations are parsed into formal constraints which are further combined with unlabeled data to yield new label oriented classifiers through posterior regularization. However, those explanatory statements about new labels are collected from crowd-sourcing. This limits its application in real world $\textsc {0shot-tc}$ scenarios. There are a few works that study a specific zero-shot problem by indirect supervision from other problems. 
DBLPLevySCZ17 and obamuyide2018zero study zero-shot relation extraction by converting it into a machine comprehension and textual entailment problem respectively. Then, a supervised system pretrained on an existing machine comprehension dataset or textual entailment dataset is used to do inference. Our work studies the $\textsc {0shot-tc}$ by formulating a broader vision: datasets of multiple apsects and evaluations. Other zero-shot problems studied in NLP involve entity typing BIBREF13, sequence labeling BIBREF14, etc. <<</Related Work>>> <<<Benchmark the dataset>>> In this work, we standardize the datasets for $\textsc {0shot-tc}$ for three aspects: topic detection, emotion detection, and situation detection. For each dataset, we insist on two principles: i) Label-partially-unseen: A part of labels are unseen. This corresponds to Definition-Restrictive, enabling us to check the performance of unseen labels as well as seen labels. ii) Label-fully-unseen: All labels are unseen. This corresponds to Definition-Wild, enabling us to check the system performance in test-agnostic setups. <<<Topic detection>>> <<<Yahoo.>>> We use the large-scale Yahoo dataset released by DBLPZhangZL15. Yahoo has 10 classes: {“Society & Culture”, “Science & Mathematics”, “Health”, “Education & Reference”, “Computers & Internet”, “Sports”, “Business & Finance”, “Entertainment & Music”, “Family & Relationships”, “Politics & Government”}, with original split: 1.4M/60k in train/test (all labels are balanced distributed). We reorganize the dataset by first fixing the dev and test sets as follows: for dev, all 10 labels are included, with 6k labeled instances for each; For test, all 10 labels are included, with 10k instances for each. Then training sets are created on remaining instances as follows. For label-partially-unseen, we create two versions of Yahoo train for $\textsc {0shot-tc}$: Train-v0: 5 classes: {“Society & Culture”, “Health”, “Computers & Internet”, “Business & Finance”, “Family & Relationships”} are included; each is equipped with 130k labeled instances. Train-v1: 5 classes: { “Science & Mathematics”, “Education & Reference”, “Sports”, “Entertainment & Music”, “Politics & Government”} are included; each is equipped with 130k labeled instances. We always create two versions of train with non-overlapping labels so as to get rid of the model's over-fitting on one of them. Label-fully-unseen share the same test and dev with the label-partially-unseen except that it has no training set. It is worth mentioning that our setup of label-partially-unseen and label-fully-unseen enables us to compare the performance mutually; it can show the system's capabilities while seeing different sizes of classes. <<</Yahoo.>>> <<</Topic detection>>> <<<Emotion detection>>> <<<UnifyEmotion.>>> This emotion dataset was released by DBLPBostanK18. It was constructed by unifying the emotion labels of multiple public emotion datasets. This dataset consists of text from multiple domains: tweet, emotional events, fairy tale and artificial sentences, and it contains 9 emotion types {“sadness”, “joy”, “anger”, “disgust”, “fear”, “surprise”, “shame”, “guilt”, “love”} and “none” (if no emotion applies). We remove the multi-label instances (appro. 4k) so that the remaining instances always have a single positive label. The official evaluation metric is label-weighted F1. Since the labels in this dataset has unbalanced distribution. 
We first directly list the fixed $\emph {test}$ and $\emph {dev}$ in Table TABREF9 and Table TABREF10, respectively. They are shared by following label-partial-unseen and label-fully-unseen setups of train. Label-partial-unseen has the following two versions of train: Train-v0: 5 classes: {“sadness”, “anger”, “fear”, “shame”, “love”} are included. Train-v1: 4 classes: { “joy”, “disgust”, “surprise”, “guilt”} are included. For label-fully-unseen, no training set is provided. <<</UnifyEmotion.>>> <<</Emotion detection>>> <<<Situation detection>>> The situation frame typing is one example of an event-type classification task. A situation frame studied here is a need situation such as the need for water or medical aid, or an issue situation such as crime violence BIBREF16, BIBREF17. It was originally designed for low-resource situation detection, where annotated data is unavailable. This is why it is particularly suitable for $\textsc {0shot-tc}$. We use the Situation Typing dataset released by mayhewuniversity. It has 5,956 labeled instances. Totally 11 situation types: “food supply”, “infrastructure”, “medical assistance”, “search/rescue”, “shelter”, “utilities, energy, or sanitation”, “water supply”, “evacuation”, “regime change”, “terrisms”, “crime violence” and an extra type “none” – if none of the 11 types applies. This dataset is a multi-label classification, and label-wise weighted F1 is the official evaluation. The train, test and dev are listed in Table TABREF22. <<<Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>> Our three datasets covers single-label classification (i.e., “topic” and “emotion”) and multi-label classification (i.e., “situation”). In addition, a “none” type is adopted in “emotion” and “situation” tasks if no predefined types apply – this makes the problem more realistic. <<</Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>> <<</Situation detection>>> <<</Benchmark the dataset>>> <<<Benchmark the evaluation>>> How to evaluate a $\textsc {0shot-tc}$ system? This needs to review the original motivation of doing $\textsc {0shot-tc}$ research. As we discussed in Introduction section, ideally, we aim to build a system that works like humans – figuring out if a piece of text can be assigned with an open-defined label, without any constrains on the domains and the aspects described by the labels. Therefore, we challenge the system in two setups: label-partially-unseen and label-fully-unseen. <<<Label-partially-unseen.>>> This is the most common setup in existing $\textsc {0shot-tc}$ literature: for a given dataset of a specific problem such as topic categorization, emotion detection, etc, train a system on a part of the labels, then test on the whole label space. Usually all labels describe the same aspect of the text. <<</Label-partially-unseen.>>> <<<Label-fully-unseen.>>> In this setup, we push “zero-shot” to the extreme – no annotated data for any labels. So, we imagine that learning a system through whatever approaches, then testing it on $\textsc {0shot-tc}$ datasets of open aspects. This label-fully-unseen setup is more like the dataless learning principle BIBREF0, in which no task-specific annotated data is provided for training a model (since usually this kind of model fails to generalize in other domains and other tasks), therefore, we are encouraged to learn models with open-data or test-agnostic data. In this way, the learned models behave more like humans. 
<<</Label-fully-unseen.>>> <<</Benchmark the evaluation>>> <<<An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>> As one contribution of this work, we propose to deal with $\textsc {0shot-tc}$ as a textual entailment problem. It is inspired by: i) text classification is essentially a textual entailment problem. Let us think about how humans do classification: we mentally think “whether this text is about sport?”, or “whether this text expresses a specific feeling?”, or “whether the people there need water supply?” and so on. The reason that conventional text classification did not employ entailment approach is it always has pre-defined, fixed-size of classes equipped with annotated data. However, in $\textsc {0shot-tc}$, we can neither estimate how many and what classes will be handled nor have annotated data to train class-specific parameters. Textual entailment, instead, does not preordain the boundary of the hypothesis space. ii) To pursue the ideal generalization of classifiers, we definitely need to make sure that the classifiers understand the problem encoded in the aspects and understand the meaning of labels. Conventional supervised classifiers fail in this aspect since label names are converted into indices – this means the classifiers do not really understand the labels, let alone the problem. Therefore, exploring $\textsc {0shot-tc}$ as a textual entailment paradigm is a reasonable way to achieve generalization. <<<Convert labels into hypotheses.>>> The first step of dealing with $\textsc {0shot-tc}$ as an entailment problem is to convert labels into hypotheses. To this end, we first convert each aspect into an interpretation (we discussed before that generally one aspect defines one interpretation). E.g., “topic” aspect to interpretation “the text is about the topic”. Table TABREF24 lists some examples for the three aspects: “topic”, “emotion” and “situation”. In this work, we just explored two simple methods to generate the hypotheses. As Table TABREF24 shows, one is to use the label name to complete the interpretation, the other is to use the label's definition in WordNet to complete the interpretation. In testing, once one of them results in an “entailment” decision, then we decide the corresponding label is positive. We can definitely create more natural hypotheses through crowd-sourcing, such as “food” into “the people there are starving”. Here we just set the baseline examples by automatic approaches, more explorations are left as future work, and we welcome the community to contribute. <<</Convert labels into hypotheses.>>> <<<Convert classification data into entailment data.>>> For a data split (train, dev and test), each input text, acting as the premise, has a positive hypothesis corresponding to the positive label, and all negative labels in the data split provide negative hypotheses. Note that unseen labels do not provide negative hypotheses for instances in train. <<</Convert classification data into entailment data.>>> <<<Entailment model learning.>>> In this work, we make use of the widely-recognized state of the art entailment technique – BERT BIBREF18, and train it on three mainstream entailment datasets: MNLI BIBREF19, GLUE RTE BIBREF20, BIBREF21 and FEVER BIBREF22, respectively. We convert all datasets into binary case: “entailment” vs. “non-entailment”, by changing the label “neutral” (if exist in some datasets) into “non-entailment”. 
For our label-fully-unseen setup, we directly apply this pretrained entailment model on the test sets of all $\textsc {0shot-tc}$ aspects. For the label-partially-unseen setup, in which we intentionally provide annotated data, we first pretrain BERT on MNLI/FEVER/RTE, then fine-tune on the provided training data. <<</Entailment model learning.>>> <<<Harsh policy in testing.>>> Since seen labels have annotated data for training, we adopt different policies for picking seen and unseen labels. To be specific, we pick a seen label with a harsher rule: i) In single-label classification, if both seen and unseen labels are predicted as positive, we pick the seen label only if its probability of being positive is higher than that of the unseen label by a hyperparameter $\alpha $. If only seen or only unseen labels are predicted as positive, we pick the one with the highest probability; ii) In multi-label classification, if both seen and unseen labels are predicted as positive, we change the seen labels into “negative” if their probability of being positive is higher than that of the unseen label by less than $\alpha $. Finally, all labels labeled positive will be selected. If there are no positive labels, we choose the “none” type. $\alpha $ = 0.05 in our systems, tuned on dev. <<</Harsh policy in testing.>>> <<</An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>> <<<Experiments>>> <<<Label-partially-unseen evaluation>>> In this setup, annotated data is available for part of the labels as train, so we report performance for unseen classes as well as seen classes. We compare our entailment approaches, trained separately on MNLI, FEVER and RTE, with the following baselines. <<<Baselines.>>> Majority: each text is assigned the label of the largest class. ESA: A dataless classifier proposed in BIBREF0. It maps the words (in text and label names) into the title space of Wikipedia articles, then compares the text with the label names. This method does not rely on train. We implemented ESA based on the 08/01/2019 Wikipedia dump, which contains about 6.1M words and 5.9M articles. Word2Vec BIBREF23: The representations of both the text and the labels are obtained by element-wise addition of word embeddings; cosine similarity then determines the labels. This method does not rely on train either. Binary-BERT: We fine-tune BERT on train, which yields a binary classifier deciding entailment or not; we then test it on test – picking the label with the maximal probability in single-label scenarios while choosing all labels with an “entailment” decision in multi-label cases. <<</Baselines.>>> <<<Discussion.>>> The results of label-partially-unseen are listed in Table TABREF30. “ESA” performs slightly worse than “Word2Vec” in topic detection, mainly because label names, i.e., topics such as “sports”, are close to keywords such as “basketball” in the Word2Vec space. However, “ESA” is clearly better than “Word2Vec” in situation detection; this should be mainly due to the fact that the label names (e.g., “shelter”, “evacuation”, etc.) can hardly be matched to close words in the text by Word2Vec embeddings. On the contrary, “ESA” more easily relates a class such as “shelter” to keywords like “earthquake”. Unfortunately, both Word2Vec and ESA work poorly for the emotion detection problem. We suspect that emotion detection requires stronger entailment capability.
For example, the text snippet “when my brother was very late in arriving home from work”, its gold emotion “fear” requires some common-knowledge inference, rather than just word semantic matching through Word2Vec and ESA. The supervised method “Binary-BERT” is indeed strong in learning the seen-label-specific models – this is why it predicts very well for seen classes while performing much worse for unseen classes. Our entailment models, especially the one pretrained on MNLI, generally get competitive performance with the “Binary-BERT” for seen (slightly worse on “topic” and “emotion” while clearly better on “situation”) and improve the performance regarding unseen by large margins. At this stage, fine-tuning on an MNLI-based pretrained entailment model seems more powerful. <<</Discussion.>>> <<</Label-partially-unseen evaluation>>> <<<Label-fully-unseen evaluation>>> Regarding this label-fully-unseen evaluation, apart from our entailment models and three unsupervised baselines “Majority”, “Word2Vec” and “ESA”, we also report the following baseline: Wikipedia-based: We train a binary classifier based on BERT on a dataset collected from Wikipedia. Wikipedia is a corpus of general purpose, without targeting any specific $\textsc {0shot-tc}$ task. Collecting categorized articles from Wikipedia is popular way of creating training data for text categorization, such as BIBREF13. More specifically, we collected 100K articles along with their categories in the bottom of each article. For each article, apart from its attached positive categories, we randomly sample three negative categories. Then each article and its positive/negative categories act as training pairs for the binary classifier. We notice “Wikipedia-based” training indeed contributes a lot for the topic detection task; however, its performances on emotion and situation detection problems are far from satisfactory. We believe this is mainly because the Yahoo-based topic categorization task is much closer to the Wikipedia-based topic categorization task; emotion and situation categorizations, however, are relatively further. Our entailment models, pretrained on MNLI/FEVER/RTE respectively, perform more robust on the three $\textsc {0shot-tc}$ aspects (except for the RTE on emotion). Recall that they are not trained on any text classification data, and never know the domain and the aspects in the test. This clearly shows the great promise of developing textual entailment models for $\textsc {0shot-tc}$. Our ensemble approach further boosts the performances on all three tasks. An interesting phenomenon, comparing the label-partially-unseen results in Table TABREF30 and the label-fully-unseen results in Table TABREF32, is that the pretrained entailment models work in this order for label-fully-unseen case: RTE $>$ FEVER $>$MNLI; on the contrary, if we fine-tune them on the label-partially-unseen case, the MNLI-based model performs best. This could be due to a possibility that, on one hand, the constructed situation entailment dataset is closer to the RTE dataset than to the MNLI dataset, so an RTE-based model can generalize well to situation data, but, on the other hand, it could also be more likely to over-fit the training set of “situation” during fine-tuning. A deeper exploration of this is left as future work. <<</Label-fully-unseen evaluation>>> <<<How do the generated hypotheses influence>>> In Table TABREF24, we listed examples for converting class names into hypotheses. 
In this work, we only tried to make use of the class names and their definitions in WordNet. Table TABREF33 lists the fine-grained performance of three ways of generating hypotheses: “word”, “definition”, and “combination” (i.e., word&definition). This table indicates that: i) Definition alone usually does not work well in any of the three tasks, no matter which pretrained entailment model is used; ii) Whether “word” alone or “word&definition” works better depends on the specific task and the pretrained entailment model. For example, the pretrained MNLI model prefers “word&definition” in both “emotion” and “situation” detection tasks. However, the other two entailment models (RTE and FEVER) mostly prefer “word”. iii) Since it is unrealistic to adopt only one entailment model, such as from {RTE, FEVER, MNLI}, for any open $\textsc {0shot-tc}$ problem, an ensemble system should be preferred. However, the concrete implementation of the ensemble system also influences the strengths of different hypothesis generation approaches. In this work, our ensemble method reaches the top performance when combining the “word” and “definition”. More ensemble systems and hypothesis generation paradigms need to be studied in the future. To better understand the impact of generated hypotheses, we dive into the performance of each labels, taking “situation detection” as an example. Figure FIGREF47 illustrates the separate F1 scores for each situation class, predicted by the ensemble model for label-fully-unseen setup. This enables us to check in detail how easily the constructed hypotheses can be understood by the entailment model. Unfortunately, some classes are still challenging, such as “evacuation”, “infrastructure”, and “regime change”. This should be attributed to their over-abstract meaning. Some classes were well recognized, such as “water”, “shelter”, and “food”. One reason is that these labels mostly are common words – systems can more easily match them to the text; the other reason is that they are situation classes with higher frequencies (refer to Table TABREF22) – this is reasonable based on our common knowledge about disasters. <<</How do the generated hypotheses influence>>> <<</Experiments>>> <<<Summary>>> In this work, we analyzed the problems of existing research on zero-shot text classification ($\textsc {0shot-tc}$): restrictive problem definition, the weakness in understanding the problem and the labels' meaning, and the chaos of datasets and evaluation setups. Therefore, we are benchmarking $\textsc {0shot-tc}$ by standardizing the datasets and evaluations. More importantly, to tackle the broader-defined $\textsc {0shot-tc}$, we proposed a textual entailment framework which can work with or without the annotated data of seen labels. <<</Summary>>> <<<Acknowledgments>>> The authors would like to thank Jennifer Sheffield and the anonymous reviewers for insightful comments and suggestions. This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. <<</Acknowledgments>>> <<</Title>>>
{ "references": [ "Abstract, Summary" ], "type": "disordered_section" }
1909.08167
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis <<<Abstract>>> Cross-domain sentiment analysis is currently a hot topic in the research and engineering areas. One of the most popular frameworks in this field is the domain-invariant representation learning (DIRL) paradigm, which aims to learn a distribution-invariant feature representation across domains. However, in this work, we find out that applying DIRL may harm domain adaptation when the label distribution $\rm{P}(\rm{Y})$ changes across domains. To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework. We show that it is easy to transfer existing SOTA DIRL models to WDIRL. Empirical studies on extensive cross-domain sentiment analysis tasks verified our statements and showed the effectiveness of our proposed solution. <<</Abstract>>> <<<Introduction>>> Sentiment analysis aims to predict sentiment polarity of user-generated data with emotional orientation like movie reviews. The exponentially increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to the label-few target domain (T). In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using source rich labeled data. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant. However, in this work, we discover that applying DIRL may harm domain adaptation in the situation that the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variable, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative to $\rm {Y}$. This will, in turn, harm the generation of the supervised classifier to the target domain. In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the object of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worthy of studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. 
For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and can change in different time periods; and for different products, their marginal distributions of the sentiment are naturally different. Moreover, there are many factors, such as the original data distribution, the data collection time, and the data cleaning method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of the source domain labeled data $\rm {P}_S(\mathbf {Y})$ in advance, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7. To address the problem of DIRL resulting from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain. Based on $\mathbf {w}$, it resolves domain shift in two steps. In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source domain, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as examples. In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We propose a novel method to address the problem and show how to incorporate it into existing DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts. <<</Introduction>>> <<<Preliminary and Related Work>>> <<<Domain Adaptation>>> For expression consistency, in this work we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also apply to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. And there is a labeled data set $\mathcal {D}_S$ drawn $i.i.d.$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$: The goal of domain adaptation is to build a classifier $f:\rm {X} \rightarrow \rm {Y}$ that has good performance in the target domain using $\mathcal {D}_S$ and $\mathcal {D}_T$.
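The displayed definitions of the two data sets, which should follow the colon above, appear to have been dropped during extraction. A plausible reconstruction under the standard unsupervised domain adaptation setup (with $n_S$ and $n_T$ as assumed symbols for the sample sizes, not taken from the paper) is: $\mathcal {D}_S = \lbrace (x_i, y_i)\rbrace _{i=1}^{n_S} \sim \rm {P}_S(\rm {X},\rm {Y})$ and $\mathcal {D}_T = \lbrace x_j\rbrace _{j=1}^{n_T} \sim \rm {P}_T(\rm {X})$.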
For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10 subsampling BIBREF11, and of course the domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. <<</Domain Adaptation>>> <<<Domain Invariant Representation Learning>>> Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25. Theorem 1 For a hypothesis $h$, Here, $\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\mathcal {L}_T(h)$ denotes the counterpart in the target domain, $d_1$ is a measure of divergence between two distributions. Based on Theorem UNKREF3 and assuming that performing feature transform on $\rm {X}$ will not increase the values of the first and third terms of the right side of Ineq. (DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ onto $\rm {X}$, hoping to obtain a feature representation $G(\rm {X})$ that has a lower value of ${d}_{1}(\rm {P}_S(G(\rm {X})), \rm {P}_T(G(\rm {X})))$. To this end, different methods have been proposed. These methods can be roughly divided into two directions. The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We call methods of this direction as the metric-based DIRL methods. A representative work of this direction is the center-momentum-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy metric (CMD) to evaluate the discrepancy between two distributions. Specifically, let denote $\rm {X}_S$ and $\rm {X}_T$ an $M$ dimensional random vector on the compact interval $[a; b]^M$ over distribution $\rm {P}_S$ and $\rm {P}_T$, respectively. The CMD loss between $\rm {P}_S$ and $\rm {P}_T$ is defined by: Here, $\mathbb {E}(\rm {X})$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X})$, and is the $k$-th momentum, where $\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\rm {X}$. The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. We call methods of this direction as the adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the loss: over its trainable parameters, while in contrast $G$ was trained to maximize $\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimize the Jensen-shannon divergence BIBREF27, BIBREF28 $\text{JSD}(\rm {P}_S, \rm {P}_T)$ between $\rm {P}_S(G(\rm {X}))$ and $\rm {P}_T(G(\rm {X}))$ over $G$. Here, for a concise expression, we write $\rm {P}$ as the shorthand for $\rm {P}(G(\rm {X}))$. 
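Several display equations in the passage above – the bound in Theorem 1, the CMD loss, the $k$-th central moment, and the domain-discriminator loss – appear to have been lost during extraction. The following is a hedged reconstruction based on the standard formulations of BIBREF25 (the domain adaptation bound), BIBREF3 (CMD) and BIBREF2 (DANN) that the surrounding text describes, not on the paper's own rendering: the bound plausibly reads $\mathcal {L}_T(h) \le \mathcal {L}_S(h) + d_1\big (\rm {P}_S(\rm {X}), \rm {P}_T(\rm {X})\big ) + \min \big \lbrace \mathbb {E}_{\rm {P}_S}[|f_S(x)-f_T(x)|], \mathbb {E}_{\rm {P}_T}[|f_S(x)-f_T(x)|]\big \rbrace $, where $f_S$ and $f_T$ denote the source and target labeling functions (symbols assumed here); the CMD loss is plausibly $\text{CMD}_K(\rm {X}_S, \rm {X}_T) = \frac{1}{|b-a|}\Vert \mathbb {E}(\rm {X}_S)-\mathbb {E}(\rm {X}_T)\Vert _2 + \sum _{k=2}^{K}\frac{1}{|b-a|^k}\Vert c_k(\rm {X}_S)-c_k(\rm {X}_T)\Vert _2$, with the $k$-th central moment $c_k(\rm {X}) = \big (\mathbb {E}\big (\prod _{i=1}^{M}(\rm {X}_i-\mathbb {E}(\rm {X}_i))^{r_i}\big )\big )_{r_1+\dots +r_M=k}$; and the discriminator loss is plausibly $\mathcal {L}_d = -\mathbb {E}_{x\sim \rm {P}_S(\rm {X})}\big [\log D(G(x))\big ] - \mathbb {E}_{x\sim \rm {P}_T(\rm {X})}\big [\log \big (1-D(G(x))\big )\big ]$, which $D$ minimizes and $G$ maximizes.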
The task loss is the combination of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, which are defined on $\mathcal {D}_S$ only and on the combination of $\mathcal {D}_S$ and $\mathcal {D}_T$, respectively: Here, $\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\text{JSD}(\rm {P}_S, \rm {P}_T)$ and $\text{CMD}_K$ are two concrete forms of $\mathcal {L}_{inv}$. <<</Domain Invariant Representation Learning>>> <<</Preliminary and Related Work>>> <<<Problem of Domain-Invariant Representation Learning>>> In this work, we found out that applying DIRL may harm domain adaptation in the situation that $\rm {P}(\rm {Y})$ shifts across domains. Specifically, when $\rm {P}_S(\rm {Y})$ differs from $\rm {P}_T(\rm {Y})$, forcing the feature representations $G(\rm {X})$ to be domain-invariant may increase the value of $\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\mathcal {L}_T(h)$, which means the decrease of target domain performance. In the following, we start our analysis under the condition that $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$. Then, we consider the more general condition that $\rm {P}_S(\rm {X}|\rm {Y})$ also differs from $\rm {P}_T(\rm {X}|\rm {Y})$. When $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, we have the following theorem. Theorem 2 Given $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, if $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and a feature map $G$ makes $\rm {P}_S \left( \mathcal {M}(\rm {X}))=\rm {P}_T(\mathcal {M}(\rm {X}) \right)$, then $\rm {P}_S(\rm {Y}=i|\mathcal {M}(\rm {X}))=\rm {P}_S(\rm {Y}=i)$. Proofs appear in Appendix A. <<<Remark.>>> According to Theorem UNKREF8, we know that when $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$ and $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$, forcing $G(\rm {X})$ to be domain-invariant inclines to make data of class $i$ mix with data of other classes in the space of $G(\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Think about such an extreme case that every instance $x$ is mapped to a consistent point $g_0$ in $G(\rm {X})$. In this case, $\rm {P}_S(G(\rm {X})=g_0)= \rm {P}_T(G(\rm {X})=g_0) = 1$. Therefore, $G(\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ to all input examples. This is definitely unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B. When $\rm {P}_S(\rm {Y})\ne \rm {P}_T(\rm {Y})$ and $\rm {P}_S(\rm {X}|\rm {Y}) \ne \rm {P}_T(\rm {X}|\rm {Y})$, we did not obtain such a strong conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the object of achieving superior classification performance and that of making features domain-invariant. Suppose that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the rest classes in $G(\rm {X})$, i.e.,: In DIRL, we hope that: Consider the region $x \in \mathcal {X}_i$, where $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$. According to the above assumption, we know that $\rm {P}(G(\rm {X}=x \in \mathcal {X}_i)|\rm {Y} \ne i) = 0$. Therefore, applying DIRL will force in region $x \in \mathcal {X}_i$. 
Integrating both sides of the equation over $x \in \mathcal {X}_i$, we have $\rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$. This deduction contradicts the setting that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$. Therefore, $G(\rm {X})$ cannot be fully class-separable when it is domain-invariant. Note that the objective of supervised learning is exactly to make $G(\rm {X})$ class-separable. Thus, this indicates a conflict between supervised learning and domain-invariant representation learning. Based on the above analysis, we can conclude that it is impossible to obtain a feature representation $G(X)$ that is class-separable and, at the same time, domain-invariant using the DIRL framework when $\rm {P}(\rm {Y})$ shifts across domains. However, the shift of $\rm {P}(\rm {Y})$ exists in many cross-domain sentiment analysis tasks. Therefore, this problem of DIRL is worth studying and addressing. <<</Remark.>>> <<</Problem of Domain-Invariant Representation Learning>>> <<<Weighted Domain Invariant Representation Learning>>> According to the above analysis, we propose a weighted version of DIRL to address the problem that the shift of $\rm {P}(\rm {Y})$ causes for DIRL. The key idea of this framework is to first align $\rm {P}(\rm {Y})$ across domains before performing domain-invariant learning, and then to take the shift of $\rm {P}(\rm {Y})$ into account in the label prediction procedure. Specifically, it introduces a class weight $\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\rm {P}(\rm {Y})$ during the alignment of $\rm {P}(\rm {X}|\rm {Y})$. In the second step, it uses $\mathbf {w}$ to reweigh the supervised classifier $\rm {P}_S(\rm {Y}|\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively. <<<Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>> The motivation behind this practice is to adjust the data distribution of the source or the target domain so as to alleviate the shift of $\rm {P}(\rm {Y})$ across domains before applying DIRL. Since we only have labels for source domain data, we choose to adjust the data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$. Specifically, we hope that: and we denote by $\mathbf {w}^*$ the value of $\mathbf {w}$ that makes this equation hold. We shall see that when $\mathbf {w}=\mathbf {w}^*$, DIRL amounts to aligning $\rm {P}_S(G(\rm {X})|\rm {Y})$ with $\rm {P}_T(G(\rm {X})|\rm {Y})$ without the shift of $\rm {P}(\rm {Y})$. According to our analysis, we know that due to the shift of $\rm {P}(\rm {Y})$, there is a conflict between the training objectives of the supervised learning $\mathcal {L}_{sup}$ and the domain-invariant learning $\mathcal {L}_{inv}$, and the degree of conflict will decrease as $\rm {P}_S(\rm {Y})$ gets close to $\rm {P}_T(\rm {Y})$. Therefore, during model training, $\mathbf {w}$ is expected to be optimized toward $\mathbf {w}^*$, since this makes $\rm {P}(\rm {Y})$ of the weighted source domain close to $\rm {P}_T(\rm {Y})$ and thus resolves the conflict.
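The display equation that should follow “Specifically, we hope that:” above is missing. A natural reconstruction, consistent with how $\mathbf {w}$ is interpreted in the rest of the paper (an assumption on our part, not the paper's own rendering), is the per-class prior-matching condition $\mathbf {w}_i \, \rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$ for all $i \in \lbrace 1,\dots ,L\rbrace $, i.e., the class priors of the weighted source domain match those of the target domain.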
We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\mathbb {S}:\rm {P} \rightarrow {R}$ denote a statistic function defined over a distribution $\rm {P}$. For example, the expectation function $\mathbb {E}(\rm {X})$ in $\mathbb {E}(\rm {X}_S) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}))$ is a concrete instantiation of $\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\mathbb {S}(\rm {P}_S(\rm {X}))$ defined in $\mathcal {L}_{inv}$ with its class-weighted counterpart (a reconstruction is sketched below). Take the CMD metric as an example. In WDIRL, the revised form of ${\text{CMD}}_K$ is defined by: Here, $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}|\rm {Y}=i))$ denotes the expectation of $\rm {X}$ over the distribution $\rm {P}_S(\rm {X}|\rm {Y}=i)$. Note that both $\rm {P}_S(\rm {Y}=i)$ and $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ can be estimated using source labeled data, and $\mathbb {E}(\rm {X}_T)$ can be estimated using target unlabeled data. As for the adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss can be defined by: During model training, $D$ is optimized to minimize $\hat{\mathcal {L}}_d$, while $G$ and $\mathbf {w}$ are optimized to maximize $\hat{\mathcal {L}}_d$. In the following, we denote by $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning. The general task loss in WDIRL is defined by: where $\hat{\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\widehat{\text{CMD}}_K$ and $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$.
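The revised $\widehat{\text{CMD}}_K$ display is also missing above. As an illustration only – a sketch, not the authors' implementation – the Python snippet below computes the first-order term of such a class-weighted CMD: per-class source statistics are reweighted by $\mathbf {w}_i \rm {P}_S(\rm {Y}=i)$ and compared against the unweighted target statistic. Higher-order central moments would be handled analogously, and in the actual framework $\mathbf {w}$ is a trainable parameter rather than the fixed array used here; the function and variable names are assumptions.

import numpy as np

def weighted_cmd_first_moment(xs, ys, xt, w, a=0.0, b=1.0):
    """First-order term of a class-weighted CMD between a labeled source
    sample (xs, ys) and an unlabeled target sample xt.

    xs: (n_s, d) source features, ys: (n_s,) integer labels in {0..L-1},
    xt: (n_t, d) target features, w: (L,) positive class weights.
    Illustrative sketch; assumes features lie in the interval [a, b].
    """
    L = w.shape[0]
    p_s = np.array([(ys == i).mean() for i in range(L)])                   # P_S(Y=i)
    class_means = np.stack([xs[ys == i].mean(axis=0) for i in range(L)])   # E(X_S | Y_S=i)
    weighted_source_mean = (w * p_s) @ class_means                         # sum_i w_i P_S(Y=i) E(X_S|Y_S=i)
    target_mean = xt.mean(axis=0)                                          # E(X_T)
    return np.linalg.norm(weighted_source_mean - target_mean) / abs(b - a)

# Usage with toy data: two classes, 5-dimensional features, hypothetical class weights.
rng = np.random.default_rng(0)
xs, ys = rng.random((100, 5)), rng.integers(0, 2, 100)
xt = rng.random((80, 5))
w = np.array([1.5, 0.5])
loss = weighted_cmd_first_moment(xs, ys, xt, w)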
<<</Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>> <<</Weighted Domain Invariant Representation Learning>>> <<<Experiment>>> <<<Experiment Design>>> Through the experiments, we empirically examined our analysis of DIRL and the effectiveness of our proposed solution in dealing with the problem DIRL suffers from. In addition, we studied the impact of each step, described in §SECREF10 and §SECREF14 respectively, on our proposed solution. To perform the study, we carried out a performance comparison among the following models: SO: the source-only model trained using source domain labeled data without any domain adaptation. CMD: the central-moment-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$. DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$. $\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method. $\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method. $\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method. $\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method. $\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimated from target labeled data) to $\mathbf {w}$ and fixes this value during model training. $\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training. Intrinsically, SO provides an empirical lower bound for the domain adaptation methods. $\text{CMD}^{*}$ and $\text{DANN}^{*}$ provide empirical upper bounds for $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$, respectively. In addition, by comparing the performance of $\text{CMD}^{*}$ and $\text{DANN}^{*}$ with that of $\text{SO}$, we can assess the effectiveness of the DIRL framework when $\rm {P}(\rm {Y})$ does not shift across domains. By comparing $\text{CMD}^\dagger $ with $\text{CMD}$, or $\text{DANN}^\dagger $ with $\text{DANN}$, we can assess the effectiveness of the first step of our proposed method. By comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}^{\dagger }$, or $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}^{\dagger }$, we can assess the impact of the second step of our proposed method. And finally, by comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}$, or $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}$, we can assess the overall effectiveness of our proposed solution. <<</Experiment Design>>> <<<Dataset and Task Design>>> We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews from four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded as a 5,000-dimensional feature vector of bag-of-words unigrams and bigrams.
<<<Binary-Class.>>> From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was evaluated by the max value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more detail about the task design for this study. <<</Binary-Class.>>> <<<Multi-Class.>>> We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. <<</Multi-Class.>>> <<</Dataset and Task Design>>> <<<Implementation Detail>>> For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For those DANN-based methods (i.e., DANN, $\text{DANN}^{\dagger }$, $\text{DANN}^{\dagger \dagger }$, and $\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50 dimensional hidden layer with relu activation functions and a linear classification layer. Hyper-parameter $K$ of $\text{CMD}_K$ and $\widehat{\text{CMD}}_K$ was set to 5 as suggested by BIBREF3. Model optimization was performed using RmsProp BIBREF30. Initial learning rate of $\mathbf {w}$ was set to 0.01, while that of other parameters was set to 0.005 for all tasks. Hyper-parameter $\alpha $ was set to 1 for all of the tested models. We searched for this value in range $\alpha =[1, \cdots , 10]$ on task B $\rightarrow $ K. Within the search, label distribution was set to be uniform, i.e., $\rm {P}(\rm {Y}=i)=1/L$, for both domain B and K. We chose the value that maximize the performance of CMD on testing data of domain K. You may notice that this practice conflicts with the setting of unsupervised domain adaptation that we do not have labeled data of the target domain for training or developing. However, we argue that this practice would not make it unfair for model comparison since all of the tested models shared the same value of $\alpha $ and $\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we reported its best performance achieved on testing data of the target domain during its training. To initialize $\mathbf {w}$, we used label prediction of the source-only model. 
Specifically, let $\rm {P}_{SO}(\rm {Y}|\rm {X}; \mathbf {\theta }_{SO})$ denote the trained source-only model. We initialized $\mathbf {w}_i$ by: Here, $\mathbb {I}$ denotes the indication function. To offer an intuitive understanding to this strategy, we report performance of WCMD$^{\dagger \dagger }$ over different initializations of $\mathbf {w}$ on 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. Here, we say that domain B and D are of a group, and domain E and K are of another group since B and D are similar, as are E and K, but the two groups are different from one another BIBREF9. Note that $\rm {P}_{S}(\rm {Y}=1)=0.5$ is a constant, which is estimated using source labeled data. From the figure, we can obtain three main observations. First, WCMD$^{\dagger \dagger }$ generally outperformed its CMD counterparts with different initialization of $\mathbf {w}$. Second, it was better to initialize $\mathbf {w}$ with a relatively balanced value, i.e., $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) \rightarrow \frac{1}{L}$ (in this experiment, $L=2$). Finally, $\mathbf {w}^0$ was often a good initialization of $\mathbf {w}$, indicating the effectiveness of the above strategy. <<</Implementation Detail>>> <<<Main Result>>> Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. Similar conclusion can also be obtained by comparing performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperforms $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (You can refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the increase of $\rm {P}(\rm {Y})$ shift. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ performed robustly to the varying of $\rm {P}(\rm {Y})$ shift degree. Moreover, it can achieve the near upbound performance characterized by $\text{CMD}^{*}$. This again verified the effectiveness of our solution. Table TABREF34 reports model performance on the 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and the 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) multi-class domain adaptation tasks (You can refer to Appendix D for results on the other tasks). 
From this table, we observe that on some tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ did not greatly outperform or even slightly underperformed $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the estimated or learned value of $\mathbf {w}$ using $\mathcal {D}_T$ is not fully suitable for application to the testing dataset. This explanation is verified by the observation that $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ also slightly outperforms $\text{CMD}^{*}$ and $\text{DANN}^{*}$ on these tasks, respectively. <<</Main Result>>> <<</Experiment>>> <<<Conclusion>>> In this paper, we studied the problem of the popular domain-invariant representation learning (DIRL) framework for domain adaptation, when $\rm {P}(\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Introduction, Problem of Domain-Invariant Representation Learning" ], "type": "disordered_section" }
1909.08167
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis <<<Abstract>>> Cross-domain sentiment analysis is currently a hot topic in the research and engineering areas. One of the most popular frameworks in this field is the domain-invariant representation learning (DIRL) paradigm, which aims to learn a distribution-invariant feature representation across domains. However, in this work, we find out that applying DIRL may harm domain adaptation when the label distribution $\rm{P}(\rm{Y})$ changes across domains. To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework. We show that it is easy to transfer existing SOTA DIRL models to WDIRL. Empirical studies on extensive cross-domain sentiment analysis tasks verified our statements and showed the effectiveness of our proposed solution. <<</Abstract>>> <<<Introduction>>> Sentiment analysis aims to predict sentiment polarity of user-generated data with emotional orientation like movie reviews. The exponentially increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to the label-few target domain (T). In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using source rich labeled data. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant. However, in this work, we discover that applying DIRL may harm domain adaptation in the situation that the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variable, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative to $\rm {Y}$. This will, in turn, harm the generation of the supervised classifier to the target domain. In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the object of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worthy of studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. 
For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data clearing method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in the real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot previously align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of source domain labeled data $\rm {P}_S(\mathbf {Y})$, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7. To address the problem of DIRL resulted from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain. Based on $\mathbf {w}$, it resolves domain shift in two steps. In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as an example, respectively. In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We proposed a novel method to address the problem and show how to incorporate it with existent DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts. <<</Introduction>>> <<<Preliminary and Related Work>>> <<<Domain Adaptation>>> For expression consistency, in this work, we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also applies to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. And there is a labeled data set $\mathcal {D}_S$ drawn $i.i.d$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$: The goal of domain adaptation is to build a classier $f:\rm {X} \rightarrow \rm {Y}$ that has good performance in the target domain using $\mathcal {D}_S$ and $\mathcal {D}_T$. 
For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10 subsampling BIBREF11, and of course the domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. <<</Domain Adaptation>>> <<<Domain Invariant Representation Learning>>> Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25. Theorem 1 For a hypothesis $h$, Here, $\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\mathcal {L}_T(h)$ denotes the counterpart in the target domain, $d_1$ is a measure of divergence between two distributions. Based on Theorem UNKREF3 and assuming that performing feature transform on $\rm {X}$ will not increase the values of the first and third terms of the right side of Ineq. (DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ onto $\rm {X}$, hoping to obtain a feature representation $G(\rm {X})$ that has a lower value of ${d}_{1}(\rm {P}_S(G(\rm {X})), \rm {P}_T(G(\rm {X})))$. To this end, different methods have been proposed. These methods can be roughly divided into two directions. The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We call methods of this direction as the metric-based DIRL methods. A representative work of this direction is the center-momentum-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy metric (CMD) to evaluate the discrepancy between two distributions. Specifically, let denote $\rm {X}_S$ and $\rm {X}_T$ an $M$ dimensional random vector on the compact interval $[a; b]^M$ over distribution $\rm {P}_S$ and $\rm {P}_T$, respectively. The CMD loss between $\rm {P}_S$ and $\rm {P}_T$ is defined by: Here, $\mathbb {E}(\rm {X})$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X})$, and is the $k$-th momentum, where $\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\rm {X}$. The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. We call methods of this direction as the adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the loss: over its trainable parameters, while in contrast $G$ was trained to maximize $\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimize the Jensen-shannon divergence BIBREF27, BIBREF28 $\text{JSD}(\rm {P}_S, \rm {P}_T)$ between $\rm {P}_S(G(\rm {X}))$ and $\rm {P}_T(G(\rm {X}))$ over $G$. Here, for a concise expression, we write $\rm {P}$ as the shorthand for $\rm {P}(G(\rm {X}))$. 
The task loss is the combination of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, which are defined on $\mathcal {D}_S$ only and on the combination of $\mathcal {D}_S$ and $\mathcal {D}_T$, respectively: Here, $\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\text{JSD}(\rm {P}_S, \rm {P}_T)$ and $\text{CMD}_K$ are two concrete forms of $\mathcal {L}_{inv}$. <<</Domain Invariant Representation Learning>>> <<</Preliminary and Related Work>>> <<<Problem of Domain-Invariant Representation Learning>>> In this work, we found out that applying DIRL may harm domain adaptation in the situation that $\rm {P}(\rm {Y})$ shifts across domains. Specifically, when $\rm {P}_S(\rm {Y})$ differs from $\rm {P}_T(\rm {Y})$, forcing the feature representations $G(\rm {X})$ to be domain-invariant may increase the value of $\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\mathcal {L}_T(h)$, which means the decrease of target domain performance. In the following, we start our analysis under the condition that $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$. Then, we consider the more general condition that $\rm {P}_S(\rm {X}|\rm {Y})$ also differs from $\rm {P}_T(\rm {X}|\rm {Y})$. When $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, we have the following theorem. Theorem 2 Given $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, if $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and a feature map $G$ makes $\rm {P}_S \left( \mathcal {M}(\rm {X}))=\rm {P}_T(\mathcal {M}(\rm {X}) \right)$, then $\rm {P}_S(\rm {Y}=i|\mathcal {M}(\rm {X}))=\rm {P}_S(\rm {Y}=i)$. Proofs appear in Appendix A. <<<Remark.>>> According to Theorem UNKREF8, we know that when $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$ and $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$, forcing $G(\rm {X})$ to be domain-invariant inclines to make data of class $i$ mix with data of other classes in the space of $G(\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Think about such an extreme case that every instance $x$ is mapped to a consistent point $g_0$ in $G(\rm {X})$. In this case, $\rm {P}_S(G(\rm {X})=g_0)= \rm {P}_T(G(\rm {X})=g_0) = 1$. Therefore, $G(\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ to all input examples. This is definitely unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B. When $\rm {P}_S(\rm {Y})\ne \rm {P}_T(\rm {Y})$ and $\rm {P}_S(\rm {X}|\rm {Y}) \ne \rm {P}_T(\rm {X}|\rm {Y})$, we did not obtain such a strong conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the object of achieving superior classification performance and that of making features domain-invariant. Suppose that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the rest classes in $G(\rm {X})$, i.e.,: In DIRL, we hope that: Consider the region $x \in \mathcal {X}_i$, where $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$. According to the above assumption, we know that $\rm {P}(G(\rm {X}=x \in \mathcal {X}_i)|\rm {Y} \ne i) = 0$. Therefore, applying DIRL will force in region $x \in \mathcal {X}_i$. 
Taking the integral of $x$ over $\mathcal {X}_i$ for both sides of the equation, we have $\rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$. This deduction contradicts with the setting that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$. Therefore, $G(\rm {X})$ is impossible fully class-separable when it is domain-invariant. Note that the object of the supervised learning is exactly to make $G(\rm {X})$ class-separable. Thus, this actually indicates a conflict between the supervised learning and the domain-invariant representation learning. Based on the above analysis, we can conclude that it is impossible to obtain a feature representation $G(X)$ that is class-separable and at the same time, domain-invariant using the DIRL framework, when $\rm {P}(\rm {Y})$ shifts across domains. However, the shift of $\rm {P}(\rm {Y})$ can exist in many cross-domain sentiment analysis tasks. Therefore, it is worthy of studying in order to deal with the problem of DIRL. <<</Remark.>>> <<</Problem of Domain-Invariant Representation Learning>>> <<<Weighted Domain Invariant Representation Learning>>> According to the above analysis, we proposed a weighted version of DIRL to address the problem caused by the shift of $\rm {P}(\rm {Y})$ to DIRL. The key idea of this framework is to first align $\rm {P}(\rm {Y})$ across domains before performing domain-invariant learning, and then take account the shift of $\rm {P}(\rm {Y})$ in the label prediction procedure. Specifically, it introduces a class weight $\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\rm {P}(\rm {Y})$ during the alignment of $\rm {P}(\rm {X}|\rm {Y})$. In the second step, it uses $\mathbf {w}$ to reweigh the supervised classifier $\rm {P}_S(\rm {Y}|\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively. <<<Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>> The motivation behind this practice is to adjust data distribution of the source domain or the target domain to alleviate the shift of $\rm {P}(\rm {Y})$ across domains before applying DIRL. Consider that we only have labels of source domain data, we choose to adjust data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$. Specifically, we hope that: and we denote $\mathbf {w}^*$ the value of $\mathbf {w}$ that makes this equation hold. We shall see that when $\mathbf {w}=\mathbf {w}^*$, DIRL is to align $\rm {P}_S(G(\rm {X})|\rm {Y})$ with $\rm {P}_T(G(\rm {X})|\rm {Y})$ without the shift of $\rm {P}(\rm {Y})$. According to our analysis, we know that due to the shift of $\rm {P}(\rm {Y})$, there is a conflict between the training objects of the supervised learning $\mathcal {L}_{sup}$ and the domain-invariant learning $\mathcal {L}_{inv}$. And the conflict degree will decrease as $\rm {P}_S(\rm {Y})$ getting close to $\rm {P}_T(\rm {Y})$. Therefore, during model training, $\mathbf {w}$ is expected to be optimized toward $\mathbf {w}^*$ since it will make $\rm {P}(\rm {Y})$ of the weighted source domain close to $\rm {P}_T(\rm {Y})$, so as to solve the conflict. 
We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\mathbb {S}:\rm {P} \rightarrow {R}$ denote a statistic function defined over a distribution $\rm {P}$. For example, the expectation function $\mathbb {E}(\rm {X})$ in $\mathbb {E}(\rm {X}_S) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}))$ is a concrete instaintiation of $\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\mathbb {S}(\rm {P}_S(\rm {X}))$ defined in $\mathcal {L}_{inv}$ with Take the CMD metric as an example. In WDIRL, the revised form of ${\text{CMD}}_K$ is defined by: Here, $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}|\rm {Y}=i))$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X}|\rm {Y}=i)$. Note that both $\rm {P}_S(\rm {Y}=i)$ and $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ can be estimated using source labeled data, and $\mathbb {E}(\rm {X}_T)$ can be estimated using target unlabeled data. As for those adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss can be precisely defined by: During model training, $D$ is optimized in the direction to minimize $\hat{\mathcal {L}}_d$, while $G$ and $\mathbf {w}$ are optimized to maximize $\hat{\mathcal {L}}_d$. In the following, we denote $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning. The general task loss in WDIRL is defined by: where $\hat{\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\widehat{\text{CMD}}_K$ and $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$. <<</Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>> <<<Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>> In the above step, we align $\rm {P}(\rm {X}|\rm {Y})$ across domains by performing domain-invariant learning on the class-weighted source domain and the original target domain. In this step, we deal with the shift of $\rm {P}(\rm {Y})$. Suppose that we have successfully resolved the shift of $\rm {P}(\rm {X}|\rm {Y})$ with $G$, i.e., $\rm {P}_S(G(\rm {X})|\rm {Y})=\rm {P}_T(G(\rm {X})|\rm {Y})$. Then, according to the work of BIBREF29, we have: where $\gamma (\rm {Y}=i)={\rm {P}_T(\rm {Y}=i)}/{\rm {P}_S(\rm {Y}=i)}$. Of course, in most of the real-world tasks, we do not know the value of $\gamma (\rm {Y}=i)$. However, note that $\gamma (\rm {Y}=i)$ is exactly the expected class weight $\mathbf {w}^*_i$. Therefore, a natural practice of this step is to estimate $\gamma (\rm {Y}=i)$ with the obtained $\mathbf {w}_i$ in the first step and estimate $\rm {P}_T(\rm {Y}|G(\rm {X}))$ with: In summary, to transfer methods of the DIRL paradigm to WDIRL, we should: first revise the definition of $\mathcal {L}_{inv}$, obtaining its corresponding WDIRL form $\hat{\mathcal {L}}_{inv}$; then perform supervised learning and domain-invariant representation learning on $\mathcal {D}_S$ and $\mathcal {D}_T$ according to Eq. (DISPLAY_FORM13), obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight vector $\mathbf {w}$; and finally, adjust $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ according to Eq. (DISPLAY_FORM16) and obtain the target domain classifier $\rm {P}_T(\rm {Y}|\rm {X}; \mathbf {\Phi })$. 
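The label-prediction adjustment of the second step can be written in a few lines. The NumPy sketch below is only an illustration of the adjustment referred to as Eq. (DISPLAY_FORM16) above, read here as $\rm {P}_T(\rm {Y}=i|x) \propto \mathbf {w}_i \, \rm {P}_S(\rm {Y}=i|x)$ followed by renormalization; since the display equation itself is not reproduced in this context, that reading, the function name, and the assumption that the classifier outputs per-class probabilities are ours, not the authors'.

```python
import numpy as np

def adjust_to_target(p_s_y_given_x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Reweigh source-domain class posteriors with the learned class weight w
    (an estimate of gamma(Y) = P_T(Y) / P_S(Y)) and renormalize.

    p_s_y_given_x: shape (batch, num_classes), outputs of P_S(Y|X; Phi)
    w:             shape (num_classes,), obtained in the first step
    """
    unnormalized = p_s_y_given_x * w                      # broadcast over the batch
    return unnormalized / unnormalized.sum(axis=1, keepdims=True)

# Example: a source classifier that is 60/40 on some target input, with w
# suggesting class 0 is three times more prevalent in the target domain.
probs = np.array([[0.6, 0.4]])
w = np.array([3.0, 1.0])
print(adjust_to_target(probs, w))   # class 0 becomes much more likely
```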
<<</Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>> <<</Weighted Domain Invariant Representation Learning>>> <<<Experiment>>> <<<Experiment Design>>> Through the experiments, we empirically studied our analysis of DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 on our proposed solution, respectively. To perform the study, we carried out a performance comparison between the following models: SO: the source-only model trained using source domain labeled data without any domain adaptation. CMD: the centre-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$. DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$. $\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method. $\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method. $\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method. $\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method. $\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimated from target labeled data) to $\mathbf {w}$ and fixes this value during model training. $\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training. Intrinsically, SO can provide an empirical lower bound for those domain adaptation methods. $\text{CMD}^{*}$ and $\text{DANN}^{*}$ can provide the empirical upper bound of $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$, respectively. In addition, by comparing the performance of $\text{CMD}^{*}$ and $\text{DANN}^{*}$ with that of $\text{SO}$, we can know the effectiveness of the DIRL framework when $\rm {P}(\rm {Y})$ does not shift across domains. By comparing $\text{CMD}^\dagger $ with $\text{CMD}$, or comparing $\text{DANN}^\dagger $ with $\text{DANN}$, we can know the effectiveness of the first step of our proposed method. By comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}^{\dagger }$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}^{\dagger }$, we can know the impact of the second step of our proposed method. And finally, by comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}$, we can know the general effectiveness of our proposed solution. <<</Experiment Design>>> <<<Dataset and Task Design>>> We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in 5,000-dimensional feature vectors of bag-of-words unigrams and bigrams.
<<<Binary-Class.>>> From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was evaluated by the max value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more detail about the task design for this study. <<</Binary-Class.>>> <<<Multi-Class.>>> We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. <<</Multi-Class.>>> <<</Dataset and Task Design>>> <<<Implementation Detail>>> For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For those DANN-based methods (i.e., DANN, $\text{DANN}^{\dagger }$, $\text{DANN}^{\dagger \dagger }$, and $\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50 dimensional hidden layer with relu activation functions and a linear classification layer. Hyper-parameter $K$ of $\text{CMD}_K$ and $\widehat{\text{CMD}}_K$ was set to 5 as suggested by BIBREF3. Model optimization was performed using RmsProp BIBREF30. Initial learning rate of $\mathbf {w}$ was set to 0.01, while that of other parameters was set to 0.005 for all tasks. Hyper-parameter $\alpha $ was set to 1 for all of the tested models. We searched for this value in range $\alpha =[1, \cdots , 10]$ on task B $\rightarrow $ K. Within the search, label distribution was set to be uniform, i.e., $\rm {P}(\rm {Y}=i)=1/L$, for both domain B and K. We chose the value that maximize the performance of CMD on testing data of domain K. You may notice that this practice conflicts with the setting of unsupervised domain adaptation that we do not have labeled data of the target domain for training or developing. However, we argue that this practice would not make it unfair for model comparison since all of the tested models shared the same value of $\alpha $ and $\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we reported its best performance achieved on testing data of the target domain during its training. To initialize $\mathbf {w}$, we used label prediction of the source-only model. 
Specifically, let $\rm {P}_{SO}(\rm {Y}|\rm {X}; \mathbf {\theta }_{SO})$ denote the trained source-only model. We initialized $\mathbf {w}_i$ by: Here, $\mathbb {I}$ denotes the indication function. To offer an intuitive understanding to this strategy, we report performance of WCMD$^{\dagger \dagger }$ over different initializations of $\mathbf {w}$ on 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. Here, we say that domain B and D are of a group, and domain E and K are of another group since B and D are similar, as are E and K, but the two groups are different from one another BIBREF9. Note that $\rm {P}_{S}(\rm {Y}=1)=0.5$ is a constant, which is estimated using source labeled data. From the figure, we can obtain three main observations. First, WCMD$^{\dagger \dagger }$ generally outperformed its CMD counterparts with different initialization of $\mathbf {w}$. Second, it was better to initialize $\mathbf {w}$ with a relatively balanced value, i.e., $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) \rightarrow \frac{1}{L}$ (in this experiment, $L=2$). Finally, $\mathbf {w}^0$ was often a good initialization of $\mathbf {w}$, indicating the effectiveness of the above strategy. <<</Implementation Detail>>> <<<Main Result>>> Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. Similar conclusion can also be obtained by comparing performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperforms $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (You can refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the increase of $\rm {P}(\rm {Y})$ shift. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ performed robustly to the varying of $\rm {P}(\rm {Y})$ shift degree. Moreover, it can achieve the near upbound performance characterized by $\text{CMD}^{*}$. This again verified the effectiveness of our solution. Table TABREF34 reports model performance on the 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and the 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) multi-class domain adaptation tasks (You can refer to Appendix D for results on the other tasks). 
From this table, we observe that on some tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ did not greatly outperform, or even slightly underperformed, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the value of $\mathbf {w}$ estimated or learned using $\mathcal {D}_T$ is not fully suitable for application to the testing dataset. This explanation is supported by the observation that $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ also slightly outperform $\text{CMD}^{*}$ and $\text{DANN}^{*}$ on these tasks, respectively. <<</Main Result>>> <<</Experiment>>> <<<Conclusion>>> In this paper, we studied the problem of the popular domain-invariant representation learning (DIRL) framework for domain adaptation when $\rm {P}(\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Abstract, Conclusion" ], "type": "disordered_section" }
1909.08167
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis <<<Abstract>>> Cross-domain sentiment analysis is currently a hot topic in the research and engineering areas. One of the most popular frameworks in this field is the domain-invariant representation learning (DIRL) paradigm, which aims to learn a distribution-invariant feature representation across domains. However, in this work, we find out that applying DIRL may harm domain adaptation when the label distribution $\rm{P}(\rm{Y})$ changes across domains. To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework. We show that it is easy to transfer existing SOTA DIRL models to WDIRL. Empirical studies on extensive cross-domain sentiment analysis tasks verified our statements and showed the effectiveness of our proposed solution. <<</Abstract>>> <<<Introduction>>> Sentiment analysis aims to predict sentiment polarity of user-generated data with emotional orientation like movie reviews. The exponentially increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to the label-few target domain (T). In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using source rich labeled data. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant. However, in this work, we discover that applying DIRL may harm domain adaptation in the situation that the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variable, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative to $\rm {Y}$. This will, in turn, harm the generation of the supervised classifier to the target domain. In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the object of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worthy of studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. 
For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data clearing method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in the real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot previously align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of source domain labeled data $\rm {P}_S(\mathbf {Y})$, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7. To address the problem of DIRL resulted from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain. Based on $\mathbf {w}$, it resolves domain shift in two steps. In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as an example, respectively. In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We proposed a novel method to address the problem and show how to incorporate it with existent DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts. <<</Introduction>>> <<<Preliminary and Related Work>>> <<<Domain Adaptation>>> For expression consistency, in this work, we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also applies to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. And there is a labeled data set $\mathcal {D}_S$ drawn $i.i.d$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$: The goal of domain adaptation is to build a classier $f:\rm {X} \rightarrow \rm {Y}$ that has good performance in the target domain using $\mathcal {D}_S$ and $\mathcal {D}_T$. 
For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10 subsampling BIBREF11, and of course the domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. <<</Domain Adaptation>>> <<<Domain Invariant Representation Learning>>> Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25. Theorem 1 For a hypothesis $h$, Here, $\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\mathcal {L}_T(h)$ denotes the counterpart in the target domain, $d_1$ is a measure of divergence between two distributions. Based on Theorem UNKREF3 and assuming that performing feature transform on $\rm {X}$ will not increase the values of the first and third terms of the right side of Ineq. (DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ onto $\rm {X}$, hoping to obtain a feature representation $G(\rm {X})$ that has a lower value of ${d}_{1}(\rm {P}_S(G(\rm {X})), \rm {P}_T(G(\rm {X})))$. To this end, different methods have been proposed. These methods can be roughly divided into two directions. The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We call methods of this direction as the metric-based DIRL methods. A representative work of this direction is the center-momentum-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy metric (CMD) to evaluate the discrepancy between two distributions. Specifically, let denote $\rm {X}_S$ and $\rm {X}_T$ an $M$ dimensional random vector on the compact interval $[a; b]^M$ over distribution $\rm {P}_S$ and $\rm {P}_T$, respectively. The CMD loss between $\rm {P}_S$ and $\rm {P}_T$ is defined by: Here, $\mathbb {E}(\rm {X})$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X})$, and is the $k$-th momentum, where $\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\rm {X}$. The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. We call methods of this direction as the adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the loss: over its trainable parameters, while in contrast $G$ was trained to maximize $\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimize the Jensen-shannon divergence BIBREF27, BIBREF28 $\text{JSD}(\rm {P}_S, \rm {P}_T)$ between $\rm {P}_S(G(\rm {X}))$ and $\rm {P}_T(G(\rm {X}))$ over $G$. Here, for a concise expression, we write $\rm {P}$ as the shorthand for $\rm {P}(G(\rm {X}))$. 
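Since the display equations are not reproduced in this context, the following NumPy sketch spells out the CMD loss as described above: the difference of empirical means plus differences of central moments up to order $K$, each term normalized by the width of the interval $[a; b]$. The normalization constants and the function signature follow the original CMD formulation cited as BIBREF3 and should be read as assumptions of this example, not as the authors' released code; the adversarial (DANN-style) alternative simply replaces this metric with the domain discriminator loss $\mathcal {L}_d$ above.

```python
import numpy as np

def cmd_k(xs: np.ndarray, xt: np.ndarray, K: int = 5,
          a: float = 0.0, b: float = 1.0) -> float:
    """Central moment discrepancy between two samples of hidden features.

    xs, xt: arrays of shape (n_source, M) and (n_target, M), assumed to lie
            in the compact interval [a, b]^M (e.g. after a sigmoid layer).
    """
    ms, mt = xs.mean(axis=0), xt.mean(axis=0)
    # First-order term: difference of expectations.
    d = np.linalg.norm(ms - mt) / abs(b - a)
    # Higher-order terms: differences of central moments of order 2..K.
    for k in range(2, K + 1):
        cs = ((xs - ms) ** k).mean(axis=0)
        ct = ((xt - mt) ** k).mean(axis=0)
        d += np.linalg.norm(cs - ct) / abs(b - a) ** k
    return d

# Example with random "features" from two slightly shifted distributions.
rng = np.random.default_rng(0)
xs = rng.uniform(0.2, 0.8, size=(64, 50))
xt = rng.uniform(0.3, 0.9, size=(64, 50))
print(cmd_k(xs, xt, K=5))
```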
The task loss is the combination of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, which are defined on $\mathcal {D}_S$ only and on the combination of $\mathcal {D}_S$ and $\mathcal {D}_T$, respectively: Here, $\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\text{JSD}(\rm {P}_S, \rm {P}_T)$ and $\text{CMD}_K$ are two concrete forms of $\mathcal {L}_{inv}$. <<</Domain Invariant Representation Learning>>> <<</Preliminary and Related Work>>> <<<Problem of Domain-Invariant Representation Learning>>> In this work, we found out that applying DIRL may harm domain adaptation in the situation that $\rm {P}(\rm {Y})$ shifts across domains. Specifically, when $\rm {P}_S(\rm {Y})$ differs from $\rm {P}_T(\rm {Y})$, forcing the feature representations $G(\rm {X})$ to be domain-invariant may increase the value of $\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\mathcal {L}_T(h)$, which means the decrease of target domain performance. In the following, we start our analysis under the condition that $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$. Then, we consider the more general condition that $\rm {P}_S(\rm {X}|\rm {Y})$ also differs from $\rm {P}_T(\rm {X}|\rm {Y})$. When $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, we have the following theorem. Theorem 2 Given $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, if $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and a feature map $G$ makes $\rm {P}_S \left( \mathcal {M}(\rm {X}))=\rm {P}_T(\mathcal {M}(\rm {X}) \right)$, then $\rm {P}_S(\rm {Y}=i|\mathcal {M}(\rm {X}))=\rm {P}_S(\rm {Y}=i)$. Proofs appear in Appendix A. <<<Remark.>>> According to Theorem UNKREF8, we know that when $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$ and $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$, forcing $G(\rm {X})$ to be domain-invariant inclines to make data of class $i$ mix with data of other classes in the space of $G(\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Think about such an extreme case that every instance $x$ is mapped to a consistent point $g_0$ in $G(\rm {X})$. In this case, $\rm {P}_S(G(\rm {X})=g_0)= \rm {P}_T(G(\rm {X})=g_0) = 1$. Therefore, $G(\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ to all input examples. This is definitely unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B. When $\rm {P}_S(\rm {Y})\ne \rm {P}_T(\rm {Y})$ and $\rm {P}_S(\rm {X}|\rm {Y}) \ne \rm {P}_T(\rm {X}|\rm {Y})$, we did not obtain such a strong conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the object of achieving superior classification performance and that of making features domain-invariant. Suppose that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the rest classes in $G(\rm {X})$, i.e.,: In DIRL, we hope that: Consider the region $x \in \mathcal {X}_i$, where $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$. According to the above assumption, we know that $\rm {P}(G(\rm {X}=x \in \mathcal {X}_i)|\rm {Y} \ne i) = 0$. Therefore, applying DIRL will force in region $x \in \mathcal {X}_i$. 
Taking the integral of $x$ over $\mathcal {X}_i$ for both sides of the equation, we have $\rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$. This deduction contradicts with the setting that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$. Therefore, $G(\rm {X})$ is impossible fully class-separable when it is domain-invariant. Note that the object of the supervised learning is exactly to make $G(\rm {X})$ class-separable. Thus, this actually indicates a conflict between the supervised learning and the domain-invariant representation learning. Based on the above analysis, we can conclude that it is impossible to obtain a feature representation $G(X)$ that is class-separable and at the same time, domain-invariant using the DIRL framework, when $\rm {P}(\rm {Y})$ shifts across domains. However, the shift of $\rm {P}(\rm {Y})$ can exist in many cross-domain sentiment analysis tasks. Therefore, it is worthy of studying in order to deal with the problem of DIRL. <<</Remark.>>> <<</Problem of Domain-Invariant Representation Learning>>> <<<Weighted Domain Invariant Representation Learning>>> According to the above analysis, we proposed a weighted version of DIRL to address the problem caused by the shift of $\rm {P}(\rm {Y})$ to DIRL. The key idea of this framework is to first align $\rm {P}(\rm {Y})$ across domains before performing domain-invariant learning, and then take account the shift of $\rm {P}(\rm {Y})$ in the label prediction procedure. Specifically, it introduces a class weight $\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\rm {P}(\rm {Y})$ during the alignment of $\rm {P}(\rm {X}|\rm {Y})$. In the second step, it uses $\mathbf {w}$ to reweigh the supervised classifier $\rm {P}_S(\rm {Y}|\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively. <<<Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>> The motivation behind this practice is to adjust data distribution of the source domain or the target domain to alleviate the shift of $\rm {P}(\rm {Y})$ across domains before applying DIRL. Consider that we only have labels of source domain data, we choose to adjust data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$. Specifically, we hope that: and we denote $\mathbf {w}^*$ the value of $\mathbf {w}$ that makes this equation hold. We shall see that when $\mathbf {w}=\mathbf {w}^*$, DIRL is to align $\rm {P}_S(G(\rm {X})|\rm {Y})$ with $\rm {P}_T(G(\rm {X})|\rm {Y})$ without the shift of $\rm {P}(\rm {Y})$. According to our analysis, we know that due to the shift of $\rm {P}(\rm {Y})$, there is a conflict between the training objects of the supervised learning $\mathcal {L}_{sup}$ and the domain-invariant learning $\mathcal {L}_{inv}$. And the conflict degree will decrease as $\rm {P}_S(\rm {Y})$ getting close to $\rm {P}_T(\rm {Y})$. Therefore, during model training, $\mathbf {w}$ is expected to be optimized toward $\mathbf {w}^*$ since it will make $\rm {P}(\rm {Y})$ of the weighted source domain close to $\rm {P}_T(\rm {Y})$, so as to solve the conflict. 
We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\mathbb {S}:\rm {P} \rightarrow {R}$ denote a statistic function defined over a distribution $\rm {P}$. For example, the expectation function $\mathbb {E}(\rm {X})$ in $\mathbb {E}(\rm {X}_S) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}))$ is a concrete instaintiation of $\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\mathbb {S}(\rm {P}_S(\rm {X}))$ defined in $\mathcal {L}_{inv}$ with Take the CMD metric as an example. In WDIRL, the revised form of ${\text{CMD}}_K$ is defined by: Here, $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}|\rm {Y}=i))$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X}|\rm {Y}=i)$. Note that both $\rm {P}_S(\rm {Y}=i)$ and $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ can be estimated using source labeled data, and $\mathbb {E}(\rm {X}_T)$ can be estimated using target unlabeled data. As for those adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss can be precisely defined by: During model training, $D$ is optimized in the direction to minimize $\hat{\mathcal {L}}_d$, while $G$ and $\mathbf {w}$ are optimized to maximize $\hat{\mathcal {L}}_d$. In the following, we denote $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning. The general task loss in WDIRL is defined by: where $\hat{\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\widehat{\text{CMD}}_K$ and $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$. <<</Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>> <<<Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>> In the above step, we align $\rm {P}(\rm {X}|\rm {Y})$ across domains by performing domain-invariant learning on the class-weighted source domain and the original target domain. In this step, we deal with the shift of $\rm {P}(\rm {Y})$. Suppose that we have successfully resolved the shift of $\rm {P}(\rm {X}|\rm {Y})$ with $G$, i.e., $\rm {P}_S(G(\rm {X})|\rm {Y})=\rm {P}_T(G(\rm {X})|\rm {Y})$. Then, according to the work of BIBREF29, we have: where $\gamma (\rm {Y}=i)={\rm {P}_T(\rm {Y}=i)}/{\rm {P}_S(\rm {Y}=i)}$. Of course, in most of the real-world tasks, we do not know the value of $\gamma (\rm {Y}=i)$. However, note that $\gamma (\rm {Y}=i)$ is exactly the expected class weight $\mathbf {w}^*_i$. Therefore, a natural practice of this step is to estimate $\gamma (\rm {Y}=i)$ with the obtained $\mathbf {w}_i$ in the first step and estimate $\rm {P}_T(\rm {Y}|G(\rm {X}))$ with: In summary, to transfer methods of the DIRL paradigm to WDIRL, we should: first revise the definition of $\mathcal {L}_{inv}$, obtaining its corresponding WDIRL form $\hat{\mathcal {L}}_{inv}$; then perform supervised learning and domain-invariant representation learning on $\mathcal {D}_S$ and $\mathcal {D}_T$ according to Eq. (DISPLAY_FORM13), obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight vector $\mathbf {w}$; and finally, adjust $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ according to Eq. (DISPLAY_FORM16) and obtain the target domain classifier $\rm {P}_T(\rm {Y}|\rm {X}; \mathbf {\Phi })$. 
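To illustrate the statistic-replacement rule just described, here is a minimal NumPy sketch of the weighted source-side statistic $\sum _i \mathbf {w}_i \rm {P}_S(\rm {Y}=i) \mathbb {S}(\rm {P}_S(\rm {X}|\rm {Y}=i))$ and of the first-order term of the revised $\widehat{\text{CMD}}_K$. Because the display equations are not shown in this context, the handling of the higher-order terms, the function names, and the per-class estimation below are assumptions of this example rather than the authors' implementation.

```python
import numpy as np

def weighted_source_stat(stat, xs, ys, w):
    """Replace S(P_S(X)) with sum_i w_i * P_S(Y=i) * S(P_S(X|Y=i)).

    stat: function mapping an array of features to a statistic (e.g. a mean)
    xs:   source features, shape (n, M)
    ys:   source labels, shape (n,), values in {0, ..., L-1}
    w:    class weights, shape (L,)
    """
    total = np.zeros_like(stat(xs))
    for i in range(len(w)):
        p_y_i = np.mean(ys == i)                    # estimate of P_S(Y=i)
        total += w[i] * p_y_i * stat(xs[ys == i])   # per-class statistic, reweighted
    return total

def weighted_cmd_first_term(xs, ys, xt, w, a=0.0, b=1.0):
    """First-order term of the revised CMD: weighted source mean vs. target mean.
    The higher-order central-moment terms follow the same replacement rule."""
    mean = lambda x: x.mean(axis=0)
    ms_w = weighted_source_stat(mean, xs, ys, w)
    return np.linalg.norm(ms_w - xt.mean(axis=0)) / abs(b - a)

# Tiny usage example with two classes.
xs = np.array([[0.2, 0.4], [0.6, 0.8], [0.5, 0.5]])
ys = np.array([0, 0, 1])
xt = np.array([[0.4, 0.6], [0.5, 0.7]])
print(weighted_cmd_first_term(xs, ys, xt, w=np.array([1.5, 0.5])))
```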
<<</Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>> <<</Weighted Domain Invariant Representation Learning>>> <<<Experiment>>> <<<Experiment Design>>> Through the experiments, we empirically studied our analysis of DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 on our proposed solution, respectively. To perform the study, we carried out a performance comparison between the following models: SO: the source-only model trained using source domain labeled data without any domain adaptation. CMD: the centre-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$. DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$. $\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method. $\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method. $\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method. $\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method. $\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimated from target labeled data) to $\mathbf {w}$ and fixes this value during model training. $\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training. Intrinsically, SO can provide an empirical lower bound for those domain adaptation methods. $\text{CMD}^{*}$ and $\text{DANN}^{*}$ can provide the empirical upper bound of $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$, respectively. In addition, by comparing the performance of $\text{CMD}^{*}$ and $\text{DANN}^{*}$ with that of $\text{SO}$, we can know the effectiveness of the DIRL framework when $\rm {P}(\rm {Y})$ does not shift across domains. By comparing $\text{CMD}^\dagger $ with $\text{CMD}$, or comparing $\text{DANN}^\dagger $ with $\text{DANN}$, we can know the effectiveness of the first step of our proposed method. By comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}^{\dagger }$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}^{\dagger }$, we can know the impact of the second step of our proposed method. And finally, by comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}$, we can know the general effectiveness of our proposed solution. <<</Experiment Design>>> <<<Dataset and Task Design>>> We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in 5,000-dimensional feature vectors of bag-of-words unigrams and bigrams.
<<<Binary-Class.>>> From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was evaluated by the max value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more detail about the task design for this study. <<</Binary-Class.>>> <<<Multi-Class.>>> We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. <<</Multi-Class.>>> <<</Dataset and Task Design>>> <<<Implementation Detail>>> For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For those DANN-based methods (i.e., DANN, $\text{DANN}^{\dagger }$, $\text{DANN}^{\dagger \dagger }$, and $\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50 dimensional hidden layer with relu activation functions and a linear classification layer. Hyper-parameter $K$ of $\text{CMD}_K$ and $\widehat{\text{CMD}}_K$ was set to 5 as suggested by BIBREF3. Model optimization was performed using RmsProp BIBREF30. Initial learning rate of $\mathbf {w}$ was set to 0.01, while that of other parameters was set to 0.005 for all tasks. Hyper-parameter $\alpha $ was set to 1 for all of the tested models. We searched for this value in range $\alpha =[1, \cdots , 10]$ on task B $\rightarrow $ K. Within the search, label distribution was set to be uniform, i.e., $\rm {P}(\rm {Y}=i)=1/L$, for both domain B and K. We chose the value that maximize the performance of CMD on testing data of domain K. You may notice that this practice conflicts with the setting of unsupervised domain adaptation that we do not have labeled data of the target domain for training or developing. However, we argue that this practice would not make it unfair for model comparison since all of the tested models shared the same value of $\alpha $ and $\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we reported its best performance achieved on testing data of the target domain during its training. To initialize $\mathbf {w}$, we used label prediction of the source-only model. 
Specifically, let $\rm {P}_{SO}(\rm {Y}|\rm {X}; \mathbf {\theta }_{SO})$ denote the trained source-only model. We initialized $\mathbf {w}_i$ by: Here, $\mathbb {I}$ denotes the indication function. To offer an intuitive understanding to this strategy, we report performance of WCMD$^{\dagger \dagger }$ over different initializations of $\mathbf {w}$ on 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. Here, we say that domain B and D are of a group, and domain E and K are of another group since B and D are similar, as are E and K, but the two groups are different from one another BIBREF9. Note that $\rm {P}_{S}(\rm {Y}=1)=0.5$ is a constant, which is estimated using source labeled data. From the figure, we can obtain three main observations. First, WCMD$^{\dagger \dagger }$ generally outperformed its CMD counterparts with different initialization of $\mathbf {w}$. Second, it was better to initialize $\mathbf {w}$ with a relatively balanced value, i.e., $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) \rightarrow \frac{1}{L}$ (in this experiment, $L=2$). Finally, $\mathbf {w}^0$ was often a good initialization of $\mathbf {w}$, indicating the effectiveness of the above strategy. <<</Implementation Detail>>> <<<Main Result>>> Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. Similar conclusion can also be obtained by comparing performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperforms $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (You can refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the increase of $\rm {P}(\rm {Y})$ shift. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ performed robustly to the varying of $\rm {P}(\rm {Y})$ shift degree. Moreover, it can achieve the near upbound performance characterized by $\text{CMD}^{*}$. This again verified the effectiveness of our solution. Table TABREF34 reports model performance on the 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and the 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) multi-class domain adaptation tasks (You can refer to Appendix D for results on the other tasks). 
From this table, we observe that on some tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ did not greatly outperform, or even slightly underperformed, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the value of $\mathbf {w}$ estimated or learned using $\mathcal {D}_T$ is not fully suitable for application to the testing dataset. This explanation is supported by the observation that $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ also slightly outperform $\text{CMD}^{*}$ and $\text{DANN}^{*}$ on these tasks, respectively. <<</Main Result>>> <<</Experiment>>> <<<Conclusion>>> In this paper, we studied the problem of the popular domain-invariant representation learning (DIRL) framework for domain adaptation when $\rm {P}(\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Experiment, Abstract" ], "type": "disordered_section" }
1909.04181
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> BERT-Based Arabic Social Media Author Profiling <<<Abstract>>> We report our models for detecting age, language variety, and gender from social media data in the context of the Arabic author profiling and deception detection shared task (APDA). We build simple models based on pre-trained bidirectional encoders from transformers (BERT). We first fine-tune the pre-trained BERT model on each of the three datasets with shared task released data. Then we augment shared task data with in-house data for gender and dialect, showing the utility of augmenting training data. Our best models on the shared task test data are acquired with a majority voting of various BERT models trained under different data conditions. We acquire 54.72% accuracy for age, 93.75% for dialect, 81.67% for gender, and 40.97% joint accuracy across the three tasks. <<</Abstract>>> <<<Introduction>>> The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). Availability of such data have made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use the total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from transformers(BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers. In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude. <<</Introduction>>> <<<Data>>> For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared tasks set up, the test set is distributed without labels and participants were expected to submit their predictions on test. The shared task predictions are expected by organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 720,00 tweets posted by 720 users. For our experiments, we split the training data released by organizers into 90% TRAIN set (202,500 tweets from 2,025 users) and 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. 
For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}. <<</Data>>> <<<Experiments>>> As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce to the task an additional in-house dataset labeled with dialect and gender tags, as we will explain below. As a baseline, we use a small gated recurrent units (GRU) model. We now introduce our tweet-level models. <<<Tweet-Level Models>>> <<<Baseline GRU.>>> Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a single unidirectional GRU layer with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best results on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results with 2 epochs. <<</Baseline GRU.>>> <<<BERT.>>> For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedia of 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, and has 110M parameters in total. The vocabulary of the model is 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU baseline, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender. <<</BERT.>>> <<<Data Augmentation.>>> To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users and 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold-labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets).
We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender. <<</Data Augmentation.>>> <<</Tweet-Level Models>>> <<<User-Level Models>>> Our aforementioned models predict the profiling labels at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since softmax values can be low in some cases, we try all threshold values between 0.00 and 0.99 and take a softmax-based majority class as the user-level predicted label, tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender. <<</User-Level Models>>> <<<APDA@FIRE2019 submission>>> First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from the tweet to the user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate the DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV. Third submission. Finally, for our third submission, we use a majority vote of (1) the first submission, (2) the second submission, and (3) predictions from our user-level BERT model. These majority-vote models (i.e., our third submission) achieve the best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy. <<</APDA@FIRE2019 submission>>> <<</Experiments>>> <<<Conclusion>>> In this work, we described the models we submitted to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional in-house data for the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods. <<</Conclusion>>> <<</Title>>>
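As a concrete illustration of the tweet-to-user aggregation described in the User-Level Models section above (softmax-thresholded filtering followed by a majority class, with the threshold tuned on DEV), here is a minimal Python sketch. The data layout, the function names, and the fallback when no tweet passes the threshold are assumptions of this example; the paper does not publish its implementation.

```python
from collections import Counter

def user_label(tweet_preds, threshold):
    """tweet_preds: list of (predicted_label, softmax_confidence) for one user.
    Keep only tweets whose confidence passes the threshold, then take the
    majority class among them; fall back to all tweets if none pass."""
    kept = [label for label, conf in tweet_preds if conf >= threshold]
    if not kept:
        kept = [label for label, _ in tweet_preds]
    return Counter(kept).most_common(1)[0][0]

def best_threshold(dev_users, dev_gold):
    """Try thresholds between 0.00 and 0.99 and keep the one that gives the
    highest user-level accuracy on DEV, as the paper describes."""
    def acc(t):
        preds = [user_label(u, t) for u in dev_users]
        return sum(p == g for p, g in zip(preds, dev_gold)) / len(dev_gold)
    return max((t / 100 for t in range(100)), key=acc)
```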
{ "references": [ "Introduction, Experiments" ], "type": "disordered_section" }
1909.04181
In the given paper, there are two sections whose positions are swapped, leading to ill-organise paper structure. Please identify the two sections and output the corresponding section names. The output should be in the format "Swapped sections: section 1, section 2". Please do not output additional content. Context: <<<Title>>> BERT-Based Arabic Social Media Author Profiling <<<Abstract>>> We report our models for detecting age, language variety, and gender from social media data in the context of the Arabic author profiling and deception detection shared task (APDA). We build simple models based on pre-trained bidirectional encoders from transformers (BERT). We first fine-tune the pre-trained BERT model on each of the three datasets with the shared task released data. Then we augment the shared task data with in-house data for gender and dialect, showing the utility of augmenting training data. Our best models on the shared task test data are acquired with majority voting of various BERT models trained under different data conditions. We acquire 54.72% accuracy for age, 93.75% for dialect, 81.67% for gender, and 40.97% joint accuracy across the three tasks. <<</Abstract>>> <<<Introduction>>> The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). The availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers. In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude. <<</Introduction>>> <<<Data>>> For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels, and participants were expected to submit their predictions on test. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}.
For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}. <<</Data>>> <<<Experiments>>> As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce an additional in-house dataset labeled with dialect and gender tags, as we will explain below. As a baseline, we use a small gated recurrent units (GRU) model. We now introduce our tweet-level models. <<<Tweet-Level Models>>> <<<Baseline GRU.>>> Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a one-layer unidirectional GRU with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best results on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results after 2 epochs. <<</Baseline GRU.>>> <<<BERT.>>> For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedia of 104 languages (including Arabic), with 12 layers, 768 hidden units each, and 12 attention heads, and has 110M parameters in the entire model. The vocabulary of the model consists of 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU baseline, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender. <<</BERT.>>> <<<Data Augmentation.>>> To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users and 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold-labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets).
We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender. <<</Data Augmentation.>>> <<</Tweet-Level Models>>> <<<User-Level Models>>> Our aforementioned models perform profiling at the tweet level, rather than directly predicting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99 and take a softmax-based majority class as the user-level predicted label, tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender. <<</User-Level Models>>> <<<APDA@FIRE2019 submission>>> First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from the tweet level to the user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the second age submission, we concatenate the DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV. Third submission. Finally, for our third submission, we use a majority vote of (1) the first submission, (2) the second submission, and (3) predictions from our user-level BERT model. These majority-class models (i.e., our third submission) achieve the best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy. <<</APDA@FIRE2019 submission>>> <<</Experiments>>> <<<Conclusion>>> In this work, we described the models we submitted to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional, in-house data on the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Conclusion, Abstract" ], "type": "disordered_section" }
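The gold answers in these records store the two swapped section names as a single comma-separated string under "references" (e.g., "Conclusion, Abstract"), while the question asks for output of the form "Swapped sections: section 1, section 2". A minimal Python sketch of how such a record might be checked follows; the record layout, the prefixing of the gold string with "Swapped sections: ", and the decision to ignore the order of the pair are all assumptions, since no official scorer appears in this dump.

def normalize(answer):
    # Reduce "Swapped sections: A, B" (or a bare "A, B") to an order-free pair of section names.
    body = answer.split(":", 1)[-1]
    return frozenset(part.strip() for part in body.split(","))

def score_record(record, model_output):
    # Exact-match check for one "disordered_section" record, ignoring pair order (assumed metric).
    gold = record["answer"]["references"][0]  # e.g. "Conclusion, Abstract"
    return normalize(model_output) == normalize(gold)

# Hypothetical usage on the record ending above.
record = {"id": "1909.04181", "answer": {"references": ["Conclusion, Abstract"], "type": "disordered_section"}}
print(score_record(record, "Swapped sections: Abstract, Conclusion"))  # True under these assumptions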
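The user-level step described inside both records (keep only the tweets whose softmax score clears a threshold, take the majority label over what remains, and pick the threshold between 0.00 and 0.99 that maximizes user-level DEV accuracy) can also be sketched briefly. This is an illustrative reconstruction under assumed data structures (per-user lists of (label, score) pairs), not the authors' code.

from collections import Counter

def user_label(tweet_preds, threshold):
    # Majority vote over one user's tweet-level (label, softmax_score) pairs.
    kept = [label for label, score in tweet_preds if score >= threshold]
    if not kept:  # threshold too strict for this user: fall back to all tweets
        kept = [label for label, _ in tweet_preds]
    return Counter(kept).most_common(1)[0][0]

def tune_threshold(dev_preds, dev_gold):
    # Try thresholds 0.00-0.99 and keep the one with the best user-level DEV accuracy.
    best = (0.0, -1.0)  # (threshold, accuracy)
    for t in (i / 100 for i in range(100)):
        hits = sum(user_label(dev_preds[u], t) == dev_gold[u] for u in dev_gold)
        acc = hits / len(dev_gold)
        if acc > best[1]:
            best = (t, acc)
    return best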